[ { "figure_ref": [], "heading": "1.Introduction:", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b3", "b6", "b7", "b7" ], "table_ref": [], "text": "Artificial intelligence (AI) models in healthcare often achieve performance on par with human specialists 1,2 . However, AI models with high performance on the overall population may have performance disparities for specific sub-populations. Discrepancies in AI model performance between sub-populations have been widely demonstrated in many applications. For example, race and sex bias has been reported in AI models developed for medical image disease diagnosis 3 . Patients' self-reported race can be detected from their medical images alone 4 by AI algorithms, and it is known that some algorithms may also have worse performance for historically underserved races (e.g. Hispanic or Black patients).\nSuch biases in healthcare AI-based decision-making tools are a critical issue that must be communicated, understood, and addressed before large-scale adoption. The model facts label was proposed to report clinical AI model performance to end users e.g., clinical staff 5 . However, the overall performance report does not demonstrate the full picture of how, when, and under what circumstances the model works or fails. Mitchell et al. introduce a model card for non-healthcare applications, which includes fairness analysis 6 . Considering the potential critical direct impact of healthcare AI model failures on human lives, including thorough bias analysis in the model card is crucial.\nMoreover, AI models bias analysis often has focused only on investigating inequitable outcomes between different races and sexes 4,7,8 . Such a narrow focus misses other sources of bias and does not consider heterogeneity within members of a specific race/sex subgroup. For example, consider the case of a commercial algorithm that identified Black patients as having fewer healthcare needs than White patients with similar medical conditions. This occurred because the model used healthcare costs as a proxy to health status and Black patients often spent less on healthcare than White patients due to socioeconomic reasons 8 . If this model was debiased simply by correcting predictions for Black patients, the model would still harm low-income patients of all other races since income level cannot be universally attributed to race. Therefore, race is the incorrect bias factors to correct for in this instance." }, { "figure_ref": [], "heading": "2.Theory:", "publication_ref": [], "table_ref": [], "text": "In this work, we suggest that:\n2.1) model fact cards in health applications should be required to report thorough bias analysis outcomes to the end user, 2.2) bias analysis needs to highlight disparities with respect to social sensitive attributes in a broader regime beyond the well-known sex and race to capture the impact of other factors such as socioeconomic status, education, etc. 2.3) bias analyses need to be expanded to non-social factors that can be considered as sensitive attributes. The non-social factors may include (1) anatomic factors (e.g., body habitus, anatomic variants), (2) disease-dependent factors (e.g., disease appearance), (3) instrumental factors (e.g., imaging devices), and (4) data sources. Expanding bias reporting to non-social factors reveals other hidden disparity drivers, and allows clear and accurate communication of model biases and limitations to the end user and policy makers." 
}, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Methods:", "publication_ref": [ "b8", "b9" ], "table_ref": [], "text": "We use two demonstrative model cards which report AI models outcome across different performance metrics for both social and non-social factors. Figure 1 shows the model card of an AI model trained on the CheXpert dataset, externally validated on a dataset of 200,000 chest x-rays from a tertiary care center 9 . Figure 2 demonstrates the model card for the imaging abnormality classification in screening mammogram analysis 10 . Model cards often include model details and information on what data the model has been trained on. In addition, for a given metric M∈{Accuracy, F1 score, sensitivity, specificity, AUC, …}, we have reported ΔM = M subgroup -M overall , which demonstrates how much gap the subgroup measure of metric M, M subgroup , is experiencing compared to the overall population, M overall . A positive gap means the model performs in favor of the given subgroup, while the negative gap demonstrates the subgroup is unfavorable." }, { "figure_ref": [], "heading": "4.Results and Discussion:", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "The impact of social factors on disparate AI outcomes", "publication_ref": [ "b2", "b10", "b11", "b3", "b6", "b7", "b12", "b13", "b12", "b12", "b2", "b2", "b13", "b12" ], "table_ref": [], "text": "Equality in model performance across subpopulations of a given sensitive attribute has been the focus of bias analysis 3,11,12 . The impact of social factors such as patient race and sex on disparate outcomes of AI models in health care has been widely demonstrated 4,7,8,13 . However, with much less attention to disparate outcomes of AI models with respect to other social factors such as patients' language (English vs. non-English speaker) 14 , education 13 , income level 13 , age 3 , or insurance type 3 have been demonstrated. For example, in a systematic study of AI-based chest X-ray prediction, Kalantari et al. found that AI models underdiagnosed historically under-served patients, e.g., such as younger patients or patients with Medicaid insurance type who are often lowincome at a higher rate 3 . Also, Zhang et al. perpetuated undesired biases, resulting in performance discrepancies with respect to patients' spoken language (English vs. non-English), ethnicity, and insurance type 14 . Moreover, Pierson et al demonstrate bias with respect to the patients' education and income level in pain and disease severity measure 13 . Such studies demonstrate the importance of widely expanding the social sensitive attributes to include such factors as described. In Figure 1 and2, you can see the disparate outcome of both AI models for chest X-ray and the mammography abnormality detection across sex, age and race. We prefer to report these quantities for all proposed social factors but we have been limited by the data availability for some factors." }, { "figure_ref": [], "heading": "Non-social factors impact disparate AI model outcome", "publication_ref": [ "b8" ], "table_ref": [], "text": "In addition to social factors, there are non-social factors that can contribute to AI model bias which results in disparate outcomes and bias for groups of patients. These non-social factors may have a great impact on end-users' ability to trust the predictions of AI models in their specific practice. 
For example, if an AI model consistently has lower performance on patients whose images are acquired with a specific imaging device 9 , then the end-user can be informed about this shortcoming in the model card and rely on the AI model's output accordingly. Here, we list some non-social factors that we suggest should be considered in the context of AI models for radiology applications." }, { "figure_ref": [ "fig_1" ], "heading": "Anatomic factors:", "publication_ref": [ "b14", "b15", "b16", "b17", "b18", "b19" ], "table_ref": [], "text": "Anatomic variants and prior conditions may cause errors in diagnosis and treatment planning. For example, anatomic variants in the spine may be a source of inaccurate radiotherapy planning 15 . Similarly, anatomic variants can also be a source of inaccuracy in machine learning models for organ or tumor segmentation 16 . Oakden-Rayner et al. found that a classifier for hip fracture detection on frontal X-rays performed worst in cases with abnormal bone or joint appearance, such as Paget's disease of the bone 17 . Additionally, differences in body habitus may affect model predictions. For instance, increased breast density is an independent risk factor for breast cancer 18 and simultaneously reduces mammography's sensitivity in breast cancer screening 19 . Disparate AI model performance across breast density levels is shown in Fig. 2 as an illustrative example. Therefore, higher breast density in Asian and Black patients 20 may result in disparities for AI models that are due to breast density, not race. In this example, adjusting model predictions based on race to reduce bias would be inappropriate for two reasons: (1) Asian or Black women with low breast density would now be at higher risk of false positives, and (2) White women with high breast density would continue to be underdiagnosed. Rather than adjusting for race, predictions should be adjusted based on breast density." }, { "figure_ref": [ "fig_1" ], "heading": "Disease-dependent factors:", "publication_ref": [ "b20", "b20", "b21", "b22", "b23" ], "table_ref": [], "text": "The same disease, but with different expressions, may also affect model performance. COVID-19 pneumonia may have different expressions according to the viral variant, immunization status and phase of the disease (early infection, pulmonary phase or hyperinflammatory phase) 21 . For instance, studies show that in hospitalized patients with COVID-19, CT was more likely to be negative for pneumonia during periods of Omicron versus Delta variant prevalence, and that the proportion of patients with an atypical CT pattern was higher in the Omicron variant group than in the Delta variant group. Therefore, models developed to diagnose COVID-19 pneumonia should take those variables into account in order not to underdiagnose people with less severe disease 21 . Figure 2 demonstrates disparate outcomes of the AI model across disease-dependent factors, including mass, architectural distortion, calcification, and asymmetry. Therefore, patients' disease labels have an impact on model performance.\nAnother factor that influences different expressions of the same disease is patient immunity, i.e., whether the patient is immunocompetent or immunocompromised. It is known that pulmonary tuberculosis may also present with different chest CT findings in HIV patients in comparison to immunocompetent patients 22 . Even different types of immunodeficiency may lead to variability in disease expression.
For instance, the pulmonary manifestations of Pneumocystis jirovecii pneumonia might show different chest CT patterns in HIV-positive patients compared to patients with other causes of immunosuppression, such as hemato-oncologic and post-transplant patients and patients on immunosuppressive drugs for autoimmune diseases 23 . Those differences, if not recognized, may be a source of disparity in the correct diagnosis of patients who require more complex care.\nVariable tumor appearance may also impact model performance in specific subgroups. For example, pancreatic adenocarcinoma may be isoattenuating in up to 5.4% of cases, making it visually indistinguishable from the surrounding pancreatic parenchyma on dynamic CT imaging and difficult to diagnose 24 . Therefore, consistently lower sensitivity of an AI model for a specific tumor appearance could result in disparate outcomes." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Instrumental factors:", "publication_ref": [ "b5", "b8", "b9" ], "table_ref": [], "text": "AI model performance for face detection may vary depending on which cameras are used 6 . This is also the case in medical imaging: Ahluwalia et al. observed that the performance of AI models trained for abnormality classification varied substantially across different imaging devices in radiology 9,10 . For example, as shown in Figure 1, there is a 23% difference in sensitivity and specificity between images taken using the GE Type 1 and images taken using a Varian Type 1. Similar disparate outcomes have been observed across imaging devices for the AI model trained for mammography screening (see the model card in Figure 2, ∆AUC and ∆F1 score across imaging devices)." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Data source:", "publication_ref": [ "b24", "b2", "b8", "b8" ], "table_ref": [], "text": "Data sources (i.e., the type of hospital and patient population) impact model performance. For instance, as previously shown, models trained on the CheXpert 25 dataset, which contains more tertiary care center cases, have less bias 3 than models trained on ChestX-ray14, which was gathered from a hospital that does not perform routine procedures 3 . Additionally, model performance differs even across multiple departments within the same hospital. External validation of four AI models trained on four different datasets for the same disease classification task demonstrated reduced sensitivity, but increased specificity, in emergency room patients; the reverse was true for inpatients and ICU patients 9 (see the model card in Figure 1 for ΔM across different departments within the same hospital). Similarly, for the mammography screening AI model, we find disparate outcomes across different departments within the hospital (see ΔM in Figure 2). The disparate outcomes across data sources must be evaluated and communicated to the end user to ensure appropriate, safe, and fair implementation of AI models into clinical care." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We propose a framework for considering and communicating a wider range of social and non-social factors in bias analysis, and push for its adoption in AI model fact cards.
Analysis and communication of such a wider range of biases identifies potential drivers of bias, which paves the way to debiasing. Developers and end users may select or consider other factors based on their use case and available data. Additionally, they may reconsider collecting features that are not gathered regularly but that, as we have shown, may impact AI model performance. We are also aware that the fairness assessment is still performance-based, meaning that it relies on binary outcomes and comparison to ground truth labels. These are not ideal metrics, since the ground truth itself is biased. Therefore, having a good grade on such a model card does not guarantee that the algorithm is fair. In the end, the ultimate fairness metric is the impact on downstream clinical outcomes." }, { "figure_ref": [], "heading": "Data availability", "publication_ref": [], "table_ref": [], "text": "The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request." }, { "figure_ref": [], "heading": "Code availability", "publication_ref": [], "table_ref": [], "text": "The underlying code for this study and training/validation datasets are not publicly available but may be made available to qualified researchers on reasonable request from the corresponding author." }, { "figure_ref": [], "heading": "Author contributions:", "publication_ref": [], "table_ref": [], "text": "All authors contributed to the creation of this commentary." }, { "figure_ref": [], "heading": "Funding Source Declaration:", "publication_ref": [], "table_ref": [], "text": "LAC is funded by the National Institute of Health through NIBIB R01 EB017205. We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant and Connected Mind Canada First Research Excellence Fund (CFREF) grant to L. S. K. The funder played no role in study design, data collection, analysis and interpretation of data, or the writing of this manuscript." }, { "figure_ref": [], "heading": "Declaration of Interest:", "publication_ref": [], "table_ref": [], "text": "All authors declare no financial or non-financial competing interests." } ]
Clinical AI model reporting cards should be expanded to incorporate broad bias reporting of both social and non-social factors. Non-social factors capture the role of factors such as disease-dependent, anatomic, or instrumental factors in AI model bias, and including them is essential to ensure safe deployment.
Benchmarking bias: Expanding clinical AI model card to incorporate bias reporting of social and non-social factors
[ { "figure_caption": "Fig. 11Fig.1 Example of suggested model card showing model performance analysis across social and non-social factors.In this case we analyzed a third-party classifier applied to 200,000 images from a tertiary care center9 . We plot the difference in performance as measured by multiple metrics to demonstrate disparities.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig 2 .2Fig 2. Model card for classification of abnormalities in screening mammography. Statistical analysis indicates that there are both social and non-social factors, such as anatomic factor (breast density), disease-dependent factors (architectural distortion), and instrumental factors, aggravating classification performance, which may lead to failure of the abnormality object detection in mammograms 10 .", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" } ]
Carolina A M Heming; Mohamed Abdalla; Monish Ahluwalia; Linglin Zhang; Hari Trivedi; Minjae Woo; Benjamin Fine; Judy Wawira Gichoya; Leo Anthony Celi; Laleh Seyyed-Kalantari
[ { "authors": "A Esteva", "journal": "Nature", "ref_id": "b0", "title": "Dermatologist-level classification of skin cancer with deep neural networks", "year": "2017" }, { "authors": "H A Haenssle", "journal": "Ann. Oncol", "ref_id": "b1", "title": "Man against machine reloaded: performance of a marketapproved convolutional neural network in classifying a broad spectrum of skin lesions in comparison with 96 dermatologists working under less artificial conditions", "year": "2020" }, { "authors": "L Seyyed-Kalantari; H Zhang; M B Mcdermott; I Y Chen; M Ghassemi", "journal": "Nat. Med", "ref_id": "b2", "title": "Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations", "year": "2021" }, { "authors": "J W Gichoya", "journal": "Lancet Digit. Health", "ref_id": "b3", "title": "AI recognition of patient race in medical imaging: a modelling study", "year": "2022" }, { "authors": "M P Sendak; M Gao; N Brajer; S Balu", "journal": "NPJ Digit. Med", "ref_id": "b4", "title": "Presenting machine learning model information to clinical end users with model facts labels", "year": "2020" }, { "authors": "M Mitchell", "journal": "", "ref_id": "b5", "title": "Model cards for model reporting", "year": "2019" }, { "authors": "D Cirillo", "journal": "NPJ Digit. Med", "ref_id": "b6", "title": "Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare", "year": "2020" }, { "authors": "Z Obermeyer; B Powers; C Vogeli; S Mullainathan", "journal": "Science", "ref_id": "b7", "title": "Dissecting racial bias in an algorithm used to manage the health of populations", "year": "2019" }, { "authors": "M Ahluwalia", "journal": "Radiol. Artif. Intell", "ref_id": "b8", "title": "The Subgroup Imperative: Chest Radiograph Classifier Generalization Gaps in Patient, Setting, and Pathology Subgroups", "year": "2023" }, { "authors": "L Zhang", "journal": "", "ref_id": "b9", "title": "Multivariate Analysis on Performance Gaps of Artificial Intelligence Models in Screening Mammography", "year": "2023" }, { "authors": "L Seyyed-Kalantari; G Liu; M Mcdermott; I Y Chen; M Ghassemi; Chexclusion", "journal": "World Scientific", "ref_id": "b10", "title": "Fairness gaps in deep chest X-ray classifiers", "year": "2020" }, { "authors": "H Zhang", "journal": "PMLR", "ref_id": "b11", "title": "Improving the fairness of chest x-ray classifiers", "year": "2022" }, { "authors": "E Pierson; D M Cutler; J Leskovec; S Mullainathan; Z Obermeyer", "journal": "Nat. Med", "ref_id": "b12", "title": "An algorithmic approach to reducing unexplained pain disparities in underserved populations", "year": "2021" }, { "authors": "H Zhang; A X Lu; M Abdalla; M Mcdermott; M Ghassemi", "journal": "", "ref_id": "b13", "title": "Hurtful words: quantifying biases in clinical contextual word embeddings", "year": "2020" }, { "authors": "E Ford", "journal": "Med. Phys", "ref_id": "b14", "title": "Strategies for effective physics plan and chart review in radiation therapy: report of AAPM Task Group 275", "year": "2020" }, { "authors": "S Bohlender; I Oksuz; A Mukhopadhyay", "journal": "IEEE Rev. Biomed. Eng", "ref_id": "b15", "title": "A survey on shape-constraint deep learning for medical image segmentation", "year": "2021" }, { "authors": "L Oakden-Rayner", "journal": "Lancet Digit. 
Health", "ref_id": "b16", "title": "Validation and algorithmic audit of a deep learning system for the detection of proximal femoral fractures in patients in the emergency department: a diagnostic accuracy study", "year": "2022" }, { "authors": "V A Mccormack; I Dos Santos Silva", "journal": "Cancer Epidemiol. Biomarkers Prev", "ref_id": "b17", "title": "Breast density and parenchymal patterns as markers of breast cancer risk: a meta-analysis", "year": "2006" }, { "authors": "C M Checka; J E Chun; F R Schnabel; J Lee; H Toth", "journal": "Am. J. Roentgenol", "ref_id": "b18", "title": "The relationship of mammographic density and age: implications for breast cancer screening", "year": "2012" }, { "authors": "A Y El-Bastawissi; E White; M T Mandelson; S Taplin", "journal": "Ann. Epidemiol", "ref_id": "b19", "title": "Variation in mammographic breast density by race", "year": "2001" }, { "authors": "Y J Jeong", "journal": "Radiology", "ref_id": "b20", "title": "Current and emerging knowledge in COVID-19", "year": "2023" }, { "authors": "J Burrill", "journal": "Radiographics", "ref_id": "b21", "title": "Tuberculosis: a radiologic review", "year": "2007" }, { "authors": "H J Salzer", "journal": "Respiration", "ref_id": "b22", "title": "Clinical, diagnostic, and treatment disparities between HIVinfected and non-HIV-infected immunocompromised patients with Pneumocystis jirovecii pneumonia", "year": "2018" }, { "authors": "J H Kim", "journal": "Radiology", "ref_id": "b23", "title": "Visually isoattenuating pancreatic adenocarcinoma at dynamicenhanced CT: frequency, clinical and pathologic characteristics, and diagnosis at imaging examinations", "year": "2010" }, { "authors": "J Irvin", "journal": "", "ref_id": "b24", "title": "Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison", "year": "2019" } ]
[]
2024-03-31
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b47", "b69", "b67", "b38", "b50", "b40", "b10", "b3", "b10", "b16", "b74" ], "table_ref": [], "text": "Estimating the 6 Degrees of Freedom (6DoF) object pose stands as a fundamental challenge in 3D computer vision. This task plays a pivotal role in numerous real-world applications, including augmented reality [45,48,70], robotic grasping [68,69], and autonomous driving [39,51]. Despite its significance, achieving accurate 6DoF pose estimation remains challenging, particularly in scenarios characterized by homogeneous object textures and heavy occlusion.\nThe advent of deep learning has helped to overcome *Contributed equally, †Corresponding authors Code: https://github.com/lyltc1/HiPose those challenges. A recent branch of RGB-only works [41,50,67] shows promising results for handling occlusion. Despite these advancements, estimating object pose from RGB images alone remains challenging due to the inherent depth ambiguity in monocular images. Analogous to the prediction of 2D-3D correspondence in RGB-only approaches, [5,62] predict sparse 3D-3D correspondences. However, most methods with depth input either do not utilize RGB information [10,11,31] or rely on RGB images only to segment the object from the background [4,11,25,34], thus discarding valuable RGB features. To preserve rich RGB information, [16,17,58,75] proposed novel feature fusion networks to better leverage RGB and depth information but lag behind in public benchmarks such as BOP [54].\nIn contrast, most current state-of-the-art approaches typically obtain an initial pose using RGB-only methods and then apply a computationally expensive, often iterative, pose refinement step using depth information [35,47]. Directly utilizing RGB-D images to estimate the initial pose promises to yield more precise and reliable object pose estimates.\nIn this paper, we aim to fully exploit the detailed information in RGB-D images to estimate accurate object poses without any time-consuming refinement step. Using RGB-D input, we benefit from additional information such as point-to-surface distances. Taking inspiration from the recent work ZebraPose [50], a dense 2D-3D correspondence prediction method, we introduce HiPose, a network that efficiently predicts dense 3D-3D correspondences between the input depth map and the object model. Unlike ZebraPose [50], we process the encoding in a manner that takes better advantage of its coarse-to-fine properties by iteratively removing outliers.\nInstead of solving the pose using the predicted correspondences within the RANSAC framework, as commonly done in conjunction with the Kabsch algorithm [56], we propose a novel and more stable hierarchical correspondence pruning approach, eliminating the need for RANSAC. Specifically, the coarse-level prediction in the hierarchical binary code output is less error-prone, providing a robust initial pose. This coarse pose helps identifying and removing outlier matches based on point-to-surface distance. Subsequently, we apply a finer-level prediction with each iteration, refining our pose prediction and eliminating outliers at finer levels to enhance accuracy as shown in Figure 1.\nOverall, our contributions can be summarized as follows: • We present an approach for estimating object pose that fully exploits RGB-D data, focusing on 3D-3D correspondence matching through hierarchical binary surface encoding. 
• We introduce a RANSAC-free hierarchical correspondence pruning approach for pose estimation through coarse-to-fine sub-surfaces based outlier filtering." }, { "figure_ref": [], "heading": "• Extensive experiments on LM-O, YCB-V and T-LESS", "publication_ref": [], "table_ref": [], "text": "datasets demonstrate our method's effectiveness. We achieve state-of-the-art results without any additional refinement, making our approach notably faster than alternative methods and suitable for real-time applications." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "We limit our in-depth discussion of related work to the instance-level pose estimation methods based on deep learning where the 3D CAD model of the target object is available during training." }, { "figure_ref": [], "heading": "RGB-only Pose Estimation", "publication_ref": [ "b40", "b58", "b27", "b35", "b41", "b45", "b71", "b7", "b48", "b62" ], "table_ref": [], "text": "Most top-performing RGB object pose estimation methods [7,20,41,50,59, 67] attempt to establish dense 2D-3D correspondences between 2D coordinates in the RGB image and 3D coordinates on the object surface. The 6D pose is then computed by solving the Perspective-n-Point(PnP) problem [28]. Dense correspondence-based methods have been shown to outperform the keypoint-based methods [36,40,42,46,72] and holistic approaches [8,23,49,63] nowadays, as also demonstrated in the BOP challenge results [54]. We draw inspiration from ZebraPose [50], a dense correspondence-based method that employs a coarseto-fine surface encoding to represent correspondences. This approach has demonstrated significant improvements in accuracy, motivating our own idea. Overall, RGB-only methods are still limited in performance due to the absence of geometric information." }, { "figure_ref": [], "heading": "Depth-only and RGB-D Pose Estimation", "publication_ref": [ "b0", "b59", "b25", "b16", "b43", "b56", "b63", "b74", "b16", "b74" ], "table_ref": [], "text": "The development of point cloud processing networks [2, 21, 30, 43] boosted pose estimation methods that exclusively used 3D measurements [1,6,60,61,65,66]. These methods have demonstrated excellent generalization. However, discarding RGB appearance severely limits the performance of these approaches, due to pose ambiguities and exclusion of color features. RGB-D methods attempt to fuse the information of the RGB and Depth modalities. [26,29,33] treat the depth information as an extra channel of RGB images, which are then fed to a CNN-based network. A more effective utilization of RGB and depth images is to extract features from these two modalities individually, then fuse them for pose estimation [16,17,44,57,58,64,73,75]. Such approaches benefit from visual information and geometric information and show higher accuracy [54]. FFB6D [17] designed bidirectional fusion modules to enhance representation of appearance and geometry features. Recently, [75] proposed a transformer-based fusion network based on FFB6D." }, { "figure_ref": [], "heading": "Pose Refinement with Depth Information", "publication_ref": [ "b62" ], "table_ref": [], "text": "An additional pose refinement stage, often in an iterative manner, improves the result significantly. The Iterative Closest Point algorithm (ICP) is typically used as a refinement strategy, making use of depth information to align the estimated object point cloud to the image [52,53,63]. 
PFA [22] proposes a non-iterative pose refinement strategy by predicting a dense correspondence field between the rendered and real images. CIR [35] iteratively refines both pose and dense correspondence together using a novel differentiable solver layer under a rendering and comparison strategy. However, the rendering is time-consuming." }, { "figure_ref": [], "heading": "Method: HiPose", "publication_ref": [], "table_ref": [], "text": "This section provides a detailed description of the proposed model-based method for 6D object pose estimation.\nInspired by the successful application of binary codes in the RGB-only setting, we extend the method to encode object surfaces in a coarse-to-fine way in the RGB-D setting. Our approach consists of a hierarchical binary surface encoding which is fed into a coarse-to-fine pose solver. The solver achieves rapid pose estimation through several iterations of surface partitioning and outlier rejection without RANSAC or the need for rendering." }, { "figure_ref": [], "heading": "Problem Definition and Notation", "publication_ref": [], "table_ref": [], "text": "For each target object in an RGB-D image (RGB image + 2D depth map), our goal is to estimate the transformation between the predefined object coordinate system and the camera coordinate system. This transformation consists of a rotation matrix R ∈ SO(3) and a translation vector t ∈ R^3.\nWe are given an object O with a 3D scan or CAD model mesh, denoted M, consisting of N 3D vertices v_i ∈ R^3, with i being the vertex index. A binary encoding of the N vertices is a binary code c_i of d bits that uniquely corresponds to a vertex v_i. We preprocess the object mesh by upsampling so that N = 2^d.\nZebraPose [50] constructs this binary encoding iteratively, by splitting the mesh into parts with an equal number of vertices at each step, and assigning a bit to each group. In iteration it, it ∈ {0, 1, ..., d-1}, of the surface partition, we have 2^it separate sub-surfaces. Assuming a surface contains L vertices, balanced k-means is used for the partitioning, resulting in two sub-surfaces containing ⌊L/2⌋ and L-⌊L/2⌋ vertices, respectively.\nThis procedure creates a hierarchical encoding, meaning that all vertices whose first x bits are equal belong to the same sub-surface of the object mesh until the x-th partition. Expressed differently, a binary code c describes a manifold of coarse-to-fine object surfaces S_k, k = 0, ..., d, where S_0 is the full object mesh and S_d is a single vertex of the mesh." }, { "figure_ref": [], "heading": "Point-to-Surface Correspondences", "publication_ref": [ "b27", "b16" ], "table_ref": [], "text": "Prior work [50] used a binary encoding of surfaces as described in Section 3.1 for pose estimation from RGB images. This was done by training a neural network to estimate a binary code of d bits for every pixel p_{u,v} within a detected object bounding box. This can then be used to establish 2D-3D correspondences between pixels and encoded vertices v_i of the object model. These correspondences are presented to a Perspective-n-Point (PnP) solver (e.g. RANSAC+EPnP [28]) to estimate the object pose. Results show that this encoded surface representation is well suited for neural network training [50] since it allows the network to progressively learn finer details. 
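To make the construction from Section 3.1 concrete, the following sketch assigns such a coarse-to-fine code to every mesh vertex. It uses a simple balanced median split along the axis of largest spread as a stand-in for the balanced k-means partitioning described above, so it is an illustrative approximation under that assumption, not the exact ZebraPose construction.

```python
import numpy as np

def encode_vertices(vertices, d):
    """Assign a d-bit code to every vertex by recursively splitting the vertex
    set into two equally sized groups (coarse-to-fine surface encoding).

    vertices: (N, 3) array with N == 2**d (mesh pre-upsampled as in Sec. 3.1).
    Returns:  (N,) integer codes; bit k (MSB first) records which of the two
              halves the vertex fell into at the k-th partition.
    """
    codes = np.zeros(len(vertices), dtype=np.int64)
    groups = [np.arange(len(vertices))]
    for it in range(d):                          # after iteration it: 2**(it+1) groups
        next_groups = []
        for idx in groups:
            pts = vertices[idx]
            # Balanced split: sort along the axis of largest variance, cut at the median.
            axis = np.argmax(pts.var(axis=0))
            order = np.argsort(pts[:, axis])
            half = len(idx) // 2
            left, right = idx[order[:half]], idx[order[half:]]
            codes[right] |= 1 << (d - 1 - it)    # set bit `it` for the second half
            next_groups += [left, right]
        groups = next_groups
    return codes
```

Vertices that share their first k bits end up in the same sub-surface after the k-th partition, which is exactly the hierarchy the decoding and pruning steps below exploit.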
However, such a method: (1) Is not designed to make use of depth map information, apart from the final pose refinement stage, (2) does not take explicit advantage of the hierarchical nature of the encoded surface prediction, and (3) does not exploit the inherent confidence in predicted surface codes -instead, the continuous binary code estimates in the range [0, 1] are quantized to discrete bit values which discards all confidence information.\nIn contrast, our method-HiPose is designed to take a single RGB-D image as input and extract features from both modalities to predict 3D-3D correspondences. Our network receives as input a cropped Region of Interest (RoI) from a detected object, both, from the RGB image and the depth map as seen in Figure 2. The inputs are processed by two branches of a bidirectional fusion network as in FFB6D [17]. Details on the network architecture and implementation are provided in Section 4.1. The pixels of the depth image are converted to a point cloud. For each 3D point P, our network is trained to predict a binary code ĉ. This code represents a 3D-3D correspondence between the point cloud and the object model. A solution for the pose can be obtained by passing these correspondences to the Kabsch pose estimation algorithm [56].\nThis approach is in line with similar correspondence methods. However, the hierarchical nature of the encoding presents an opportunity for coarse-to-fine processing that has not been exploited yet. This is where our hierarchical point-to-surface approach is applied. As discussed in Section 3.1, a binary encoding represents a manifold of object surfaces depending on how many bits of the encoding are considered. Instead of directly processing the full encoding of d bits, which includes high uncertainty for the last bits, we propose to split the binary encoding into two groups: The first m and the last n bits (d = m + n). Unlike in [50], we utilize the (m + 1) th until the d th bits in an iterative manner, as explained in the following.\nThe first m bits of the encoding yield a 3D point-tosurface correspondence from a point in the point cloud to a surface segment S m on the object model. From the surface S m , a centroid point can be computed, and used as a 3D point on the model to create a 3D-3D correspondence. A coarse pose estimate R0 , t0 can then be obtained from these correspondences. Crucially, the coarse pose estimate is used for outlier pruning as described in Section 3.4. We iteratively repeat this process for bits m + 1 until m + n, in a coarse-to-fine manner, with the surfaces of interest S m+1 until S m+n decreasing in size at each iteration. Simultaneously, the pose estimates at each iteration have gradually higher accuracy and can be used for finer outlier removal." }, { "figure_ref": [], "heading": "Hierarchical Binary Code Decoding", "publication_ref": [], "table_ref": [], "text": "Existing methods such as ZebraPose [50], use estimated binary codes in a straightforward manner. The continuous es- timated code from the neural network is transformed into a binary code by quantizing the values from the range [0, 1] to bit values. Then, the estimated binarized code ĉ has direct correspondence to a vertex code c i . 
However, this method discards valuable confidence information that is inherent in the predicted code and makes the process highly reliant on the performance of the RANSAC-PnP solver.\nDenoting the direct (non-binarized) prediction output vector as c̃ and its quantized version as ĉ, we propose to compute a bit correctness probability/confidence vector p_c ∈ [0, 1]^(d×1) as\np_c = 1_(d×1) − |c̃ − ĉ|. (1)\nHiPose introduces a method to leverage this probability information, enabling superior results to be achieved with several rendering-free iterations, thereby improving algorithmic efficiency. Our binary code decoding consists of initial surface selection and sub-surface partitioning.\nInitial surface selection. The blue line in Figure 2 indicates the process of initial surface selection. As the number of iterations increases, each surface is further divided and the corresponding bits become more difficult for the network to learn. Therefore, it is essential to select an appropriate starting point S_m from the surface manifold where the pose estimation iterations should begin, or, equivalently, to select the value m at which the binary encoding will be split.\nThe initial surface selection for every correspondence is based on the bit correctness probability vector p_c. The last bit j, for which p_c^j is higher than the probability threshold, is called the trust bit. We set m_default as the minimum value of m, which limits the maximum initial size of the sub-surfaces, and begin our pose estimation iterations at m = max(j, m_default). There are two advantages of setting m_default: ensuring a certain degree of accuracy in the initial pose estimation and reducing computational complexity.\nSub-surface Partitioning. As shown in Figure 2, the red lines indicate the process of sub-surface partitioning. As the surfaces reduce in size between iterations, correspondences become more challenging to learn and therefore less reliable. Consequently, during each iteration, prior to performing pose estimation on a subdivided surface, we employ our hierarchical correspondence pruning process, which efficiently eliminates outliers within a few iterations, as described in detail in the next Section 3.4. We perform d-m iterations of hierarchical correspondence pruning, and a sub-surface S_m encoded by m bits can only be partitioned d-m times. If the trust bit j of a prediction is larger than m_default, this sub-surface S_j will retain its size in the first j - m_default iterations of correspondence pruning. In this way, a prediction with a higher trust bit prioritizes matching a finer correspondence, leading to a more reliable and precise pose in each iteration. For simplicity, we assume that the trust bit j equals m_default in Section 3.4, thereby not skipping the sub-surface partitioning." }, { "figure_ref": [], "heading": "Hierarchical Correspondence Pruning", "publication_ref": [], "table_ref": [], "text": "Through the point-to-surface matching process, starting from bit m of the estimated code, we can compute the centroid g_m of the corresponding surface S_m from all corresponding vertices v_i. The 3D coordinate of g_m is the average of the M vertex coordinates, g_m = (1/M) Σ_(i=1)^(M) v_i.\nThe sub-surface partitioning and pose estimation are repeated n times (from bit m+1 to bit d). In the it-th iteration, it ∈ {0, 1, ..., n-1}, we compute the centroid g_it of the corresponding sub-surface for every masked point P in our point cloud and estimate a pose [R̂_it | t̂_it] in this step through a Kabsch solver. 
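A single iteration of this step can be sketched as follows: look up the sub-surface centroid implied by the leading bits of each predicted code and solve for the rigid transform with the Kabsch algorithm. This is a schematic reconstruction under stated assumptions (a precomputed prefix-to-centroid lookup table, integer codes stored MSB-first), not the released implementation.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) with R @ src[i] + t ~= dst[i]; src, dst are (K, 3)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, dst_c - R @ src_c

def solve_iteration(points, codes, centroid_table, n_bits, d=16):
    """points:  (K, 3) masked scene points in the camera frame.
    codes:   (K,) predicted integer codes of d bits, most significant bit first.
    centroid_table: dict mapping an n_bits-bit code prefix to the centroid of its
                    sub-surface (precomputed from the encoded mesh; assumed given).
    Returns the pose estimate (R, t) for this iteration."""
    prefixes = codes >> (d - n_bits)          # keep only the first n_bits of each code
    model_pts = np.stack([centroid_table[int(p)] for p in prefixes])
    return kabsch(model_pts, points)          # object frame -> camera frame
```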
With this estimated pose, we select inliers using the distance calculated below.\nIn the (it+1)-th iteration, we calculate the distance for every correspondence between point P and the transformed sub-surface S′_(m+it) under pose [R̂_it | t̂_it], which will be used as the threshold to distinguish inliers and outliers. Figure 3 is a visual representation of this process. The minimum distance between a point P in the point cloud and the transformed surface S′_(m+it) is defined as:\nl = min_(v_i) ||R̂_it v_i + t̂_it − P|| (2)\nThe correspondences are distinguished as inliers and outliers based on the median of the distance l. The different options for distinguishing inliers and outliers are compared in an ablation study, Section 4. Formally, in the general case, the pose at iteration it is solved by the Kabsch solver K with an input set of point-to-surface-centroid 3D-3D correspondences as:\n[R̂_it | t̂_it] = K({P^k, g^k_(m+it)}), P^k ∈ inliers_it (3)\nAfter completing the n iterations of surface partitioning, all point-to-surface correspondences have converged to point-to-point correspondences. Finally, we perform one round of the Kabsch algorithm with all the point-to-point correspondences that were never recognized as outliers in the hierarchical correspondence pruning process to generate the final estimated pose, [R̂_n | t̂_n]." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we start with the implementation, datasets and evaluation metrics. Next, we show ablation studies of our method with different 3D-3D correspondence solvers. Finally, we present the experimental results of our method and compare to recent pose estimation methods from the literature and from the BOP Challenge." }, { "figure_ref": [], "heading": "Implementation Details.", "publication_ref": [ "b16" ], "table_ref": [], "text": "Our approach can be easily integrated into a variety of existing RGB-D networks. This paper utilizes the full-flow bidirectional fusion network from FFB6D [17] as a baseline and applies some modifications to create the HiPose network.\nSimilarly to FFB6D, our HiPose network has two branches to deal with images and point clouds, respectively. We feed the HiPose network with 1) a zoomed-in Region of Interest (RoI) image, and 2) a point cloud generated by uniform sampling of a fixed number of pixels from the RoI depth map.\nModifications are also made to the output layers. We replace three output heads with a single head containing the visibility mask and a binary encoding of length d for every randomly selected point. We use L1 loss for both the visible mask and binary encoding. Our training loss is defined as\nLoss = L_mask + α L_code (4)\nwhere α is a weight factor between the mask and the binary encoding losses, set to α = 3 throughout all experiments. Note that for the binary encoding prediction, we only calculate a loss for the points within the predicted visible mask.\nFor the network backbone, we use the recent ConvNext [38] architecture, which builds upon ResNet [15]. ConvNext shows performance on par with Vision Transformer [9] while retaining the efficiency and simplicity of ResNet.\nWe train our network for 380K iterations with a batch size of 32. We use the Adam [24] optimizer." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b62", "b17" ], "table_ref": [], "text": "We conduct our experiments on the LM-O [3], YCB-V [63] and T-LESS [18] datasets. 
These datasets collectively encompass a wide range of scenarios, including instances of heavy occlusion, texture deficiency and symmetrical objects. Since annotating real data for object pose can be very time-consuming, we utilize publicly available synthetic physically-based rendered (PBR) images provided by the BOP challenge [54] to demonstrate that our network can be effectively trained using only synthetic data." }, { "figure_ref": [], "heading": "Metrics", "publication_ref": [ "b62" ], "table_ref": [], "text": "For the LM-O dataset, we report the ADD(-S) metric which is the most common metric for 6DoF pose among contemporary works. ADD calculates the percentage of object vertices that fall under a distance threshold (object size dependent) when projected to the camera coordinates using the estimated pose vs. using the ground truth pose. In the case of a symmetric object, ADD(-S) differs in that it matches the closest model points (taking symmetry into account) instead of the exact same model points. For the YCB-V dataset, we report the area under curve (AUC) of the ADD(-S) with a maximum threshold of 10cm as described in [63]. Additionally, for both datasets, we report the BOP score metric defined by the BOP Challenge [54]." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [ "b75" ], "table_ref": [ "tab_0", "tab_0" ], "text": "In the following, we perform several ablation studies with the LM-O dataset. We summarize the results in Table Effectiveness of Correspondence Pruning. We first focus on the effectiveness of our proposed hierarchical correspondence pruning. In the experiment (A0) in Table 1, we directly solve the object pose with the Kabsch Algorithm. The promising results highlight the effectiveness of the binary encoding. However, compared to our other experiments reveals that the predicted correspondences from the network are still noisy and contain outliers.\nComparing A1 with A0, the most common method for identifying outliers using RANSAC framework, we observed a 2.47% recall improvement. The results of A1 heavily depend on the choice of hyper-parameters, including the number of correspondences used in each iteration, the number of RANSAC iterations, and the inlier threshold in each iteration. Additionally, random seed variations can also impact the results. In this experiment, we utilized the public RANSAC and Kabsch algorithm from Open3D [76]. Note that we explored multiple parameter combinations and reported the best results among them.\nIn contrast to the RANSAC scheme, our hierarchical correspondence pruning provides stable results analytically, irrespective of the random seed. In experiment A2, we chose the 10 th bit as our initial bit and defined the confidence bit based on predicted logits higher than 0.52 or lower than 0.48. As shown in Table 1, compared to not using any outlier strategy (A0), our approach (A2) improves recall by about 3% while outperforming the best results achievable by RANSAC." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "RGB Input", "publication_ref": [ "b58" ], "table_ref": [ "tab_1" ], "text": "RGB-D Input GDR-Net [59] We also estimate the precision metrics of outlier removal at each iteration in Table 2. A true sample is defined when the distance between estimated coordinates and ground truth fall under a 10mm threshold. The increase in precision confirms the gradual removal of low-quality correspondences. 
In the following ablation studies, we demonstrate that our approach is also robust to the choice of hyperparameters.\nInfluence of Default Initial Bit. Using smaller initial bits implies matching the point to a relatively coarse surface correspondence. However, our trust-bit-based initial surface selection ensures that each vertex is considered separately and corresponds to its most reliable initial bit.\nAs demonstrated in Figure 4, our proposed design is robust to the choice of initial bit from 5 th bit to 11 th bit. We observe some performance drops when we start from the 12 th bit, and a significant drop when directly using 16 th bit, underscoring the importance of our hierarchical correspondence pruning. By calculating the mean recall across all 8 objects, we noticed that using the 9 th to 11 th bits as the ini-tial bit provides slightly higher results. Considering both accuracy and computational efficiency, we consistently use the 10 th bit as our default initial bit.\nThreshold for the Trust Bit. The initial bit for each point and our inlier identification strategy is closely tied to the choice of the trust bit. By varying the threshold 0.02 used in experiment A2 (predicted logits greater than 0.52 or smaller than 0.48), we alter the trust bit threshold in experiments B0, B1, and B2. As indicated by the experimental results, when the threshold is greater than 0.08, we start to observe a small performance drop. Overall, the results remain quite stable within the threshold range of 0.02 to 0.08. This demonstrates the relative robustness of our approach to the choice of the trust bit.\nCriteria used in Correspondence Pruning. The default criterion for distinguishing inliers and outliers in correspondence pruning is based on the median of distance l defined in Equation 2. We replaced the median criterion for the inlier threshold used in A2 with a mean criterion in C0, resulting in a decrease in average recall. It is comprehensible that utilizing indicators associated with the median produces superior outcomes, given the median's ability to disregard the impact of extreme values.\nEffectiveness of CNN Backbone. To ensure comparability with recent research employing the transformer architecture and ConvNext [38] feature backbone, we default to using ConvNext as our image feature extraction network. Additionally, we provide results of experiment (D0) in Table 1 using ResNet as the feature backbone to offer further insights for comparison with earlier approaches. Results show that the choice of feature backbone only has a marginal effect.\nNaive RGB-D baselines. Using the networks provided by ZebraPose, we back-project 2D pixels into 3D points using the depth map, followed by pose estimation using RANSAC + Kabsch for 3D-3D correspondence(E0). We also implement \"Concat\"(E1), a naive baseline model that learns 2D-3D correspondence, yet solves the pose with 3D-3D correspondences. In the \"Concat\" baseline, we concatenate the RGB and depth channels as input for the CNN. However, the absence of a pretrained CNN model appears to make the results worse. Nonetheless, none of the baselines surpass HiPose, suggesting that direct learning of 3D-3D correspondences is more effective." }, { "figure_ref": [], "heading": "Comparison to State of the Art", "publication_ref": [ "b54", "b18" ], "table_ref": [ "tab_2", "tab_3", "tab_4" ], "text": "In the following, we compare HiPose to the state of the art using various metrics on multiple datasets.\nResults on LM-O. 
In Table 3, we compare our HiPose with the state-of-the-art methods on the LM-O dataset w.r.t. the ADD(-S) score. We used the 2D detection provided by CDPN [32], which is based on the FCOS [55] detector and trained only with PBR images provided by the BOP challenge. According to the results, HiPose outperforms all other methods by a large margin of 11.9% compared to the best RGB-D method DFTr and 12.7% compared to the best RGB-only method ZebraPose.\nResults on YCB-V. In Table 4, we compare HiPose with the state-of-the-art methods on the YCB-V dataset w.r.t. the AUC of ADD-S and ADD(-S) score. All other methods used real and synthetic images in the training. To ensure a fair comparison, the 2D FCOS detections employed here are trained with both real and synthetic images. According to the results, HiPose again excels beyond all other methods with a significant margin (around 1%) when taking into account that scores on YCB-V are already close to saturation.\nResults on the BOP Benchmark. The BOP benchmark provides a fairer ground for comparisons, offering uniform training datasets and 2D detections for all participating methods and more informative evaluation metrics [19]. We used the default detections provided for the BOP Challenge 2023.\nMost methods in Table 5 rely on a time-consuming pose refinement step, while HiPose estimates accurate object pose directly without any pose refinement. HiPose surpasses the state-of-the-art on LM-O, YCB-V datasets and achieves a very comparable result on the T-LESS dataset. When considering the average recall across the three datasets, HiPose exhibits higher recall compared to all other methods.\nGDRNPP [37] has the most closely aligned results with HiPose, yet HiPose is approximately 40 times faster than GDRNPP with refinement. This demonstrates that HiPose is both accurate and computationally efficient." }, { "figure_ref": [], "heading": "Runtime Analysis", "publication_ref": [], "table_ref": [], "text": "The inference time mainly comprises two components: 1) object detection and 2) object pose estimation. For a fair comparison, we use the same method to calculate inference speed as GDRNPP on an RTX3090 GPU. Our average object pose estimation time across the LM-O, YCB-V, and T-LESS datasets is 0.075 seconds per image. The average 2D detection time with YOLOX [12] on those three datasets is 0.081 seconds. The fast object pose estimation time ensures the real-time applicability of our approach, especially since the costly refinement methods are not necessary." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We introduced HiPose, an RGB-D based object pose estimation method designed to fully exploit hierarchical surface encoding representations and iteratively establish robust correspondences. In contrast to existing methods, we consider the confidence of every bit prediction and gradually remove outliers. Our method is trained exclusively on synthetic images and outperforms the state-of-the-art in object pose estimation accuracy across several datasets. At the same time, HiPose is considerably faster as time-consuming pose refinements become redundant. Here, H and W represent the height and width of the input cropped RGB image, respectively, and by default, both are set to 256. On the other hand, the point cloud branch maps an input with 9 channels, consisting of point coordinates, color, and normal information, to a feature. 
The parameter npts is set to 2730, which denotes the number of randomly sampled valid input point clouds." }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [ "b16" ], "table_ref": [], "text": "Each of these branches is constructed with interconnected encoders and decoders, serving the fundamental purpose of feature extraction, feature transformation, and feature fusion.\nThe process of feature extraction aims to extract highlevel features and adjust the channel dimension. In accordance with FFB6D [17], RandLA-Net [21] is employed to handle point cloud features. Furthermore, pre-trained ConvNeXt-B [38] and PSPNet [71] models are incorporated into the encoder and decoder blocks.\nFeature transformation refers to the conversion between the features of the RGB branch and the point cloud branch, facilitated through coordinate correspondence. Specifically, as demonstrated in Figure 6, the point cloud branch feature can be generated by aggregating features from the nearest features in the RGB branch. Likewise, the RGB branch feature can be generated by interpolating the feature from the point cloud branch. This enables bidirectional transformation between the features of the RGB branch and the point cloud branch.\nThe process of feature fusion is executed using a Convolutional Neural Network (CNN). The new RGB feature is generated by concatenating the RGB feature with the transformed RGB feature, and the same procedure is applied to the depth feature. Further details regarding the feature fusion process can be observed in Figure 7.\nFinally, a straightforward convolution-based head is employed to predict the visible mask and code for the selected npts points." }, { "figure_ref": [], "heading": "Details of Open3D RANSAC+Kabsch", "publication_ref": [ "b75" ], "table_ref": [], "text": "We use the registration ransac based on correspondence function in Open3D [76] to solve the object pose with the given correspondence. We tuned the number of correspondences in each RANSAC iteration and the number of RANSAC iterations. The results achievable with RANSAC+Kabsch are inferior to those obtained with our hierarchical approach, as showed in Table 7. Evaluate the impact of ICP. We assess the impact of ICP on both HiPose and ZebraPose, both of which are trained solely with pbr images. We observed that once the pose achieves a satisfactory level of accuracy, the incorporation of ICP does not lead to betterment.\nThe Iterative Closest Point algorithm (ICP) is commonly employed as a refinement strategy, leveraging depth information to align the estimated pose. We assess the impact of ICP on both HiPose and ZebraPose, both of which are trained solely with pbr images in Table . 7. For HiPose, we provide a ground truth object mask to facilitate the application of ICP. Surprisingly, ICP fails to yield any enhancements and, in fact, worsens the outcome. In the case of ZebraPose, a substantial improvement in the result is observed. Nevertheless, once the pose achieves a satisfactory level of accuracy, such as employing RANSAC Kabsch (recall greater than 87%), the incorporation of ICP does not lead to betterment. This circumstance may be attributed to insufficient accuracy in the depth map." }, { "figure_ref": [], "heading": "Impact of noisy depth", "publication_ref": [], "table_ref": [], "text": "During training, we augmented the depth maps with Gaussian noise and randomly dropped pixels, to make the network less sensitive to the noise. 
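The depth augmentation mentioned above (additive Gaussian noise plus random pixel dropout) can be sketched as follows; the noise level and drop ratio shown here are placeholder values for illustration, not the ones used for training.

```python
import numpy as np

def augment_depth(depth, noise_std=0.005, drop_ratio=0.1, rng=None):
    """Perturb a metric depth map (H, W), assumed to be in meters with 0 = invalid.

    Adds zero-mean Gaussian noise to valid pixels and randomly invalidates a
    fraction of them, mimicking sensor noise and missing depth returns.
    noise_std and drop_ratio are illustrative placeholders.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = depth.astype(np.float32, copy=True)
    valid = out > 0
    out[valid] += rng.normal(0.0, noise_std, size=int(valid.sum()))
    drop = valid & (rng.random(out.shape) < drop_ratio)
    out[drop] = 0.0                      # dropped pixels become invalid measurements
    return out
```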
" }, { "figure_ref": [], "heading": "Impact of noisy depth", "publication_ref": [], "table_ref": [], "text": "During training, we augmented the depth maps with Gaussian noise and randomly dropped pixels to make the network less sensitive to the noise. Coincidentally, the three evaluated datasets are captured with different depth sensors, showing that HiPose is robust to different noise levels. We perform additional experiments in Table 8, showing that HiPose is quite robust to missing measurements in the depth map. However, inaccurate measurements do slightly affect performance." }, { "figure_ref": [], "heading": "Details of YCB-V results", "publication_ref": [ "b62" ], "table_ref": [ "tab_8" ], "text": "We summarized the per-object results on the YCB-V dataset [63] in Table 9. As presented in the table, we outperform other approaches on most test objects." }, { "figure_ref": [ "fig_2" ], "heading": "Qualitative Results", "publication_ref": [ "b62", "b17" ], "table_ref": [], "text": "We present qualitative results on the LM-O [3], YCB-V [63], and T-LESS [18] datasets in Figure 8, Figure 9, and Figure 10, respectively. We rendered the object into the image using the estimated pose. It is clear that the contour of the rendered object aligns seamlessly with the real object in the image, demonstrating the accuracy of our estimated pose. Furthermore, it is evident that our proposed HiPose performs well with texture-less objects and can handle occlusion effectively." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was partially supported by STI 2030-Major Projects 2021ZD0201403, in part by NSFC 62088101 Autonomous Intelligent Unmanned Systems, and was partially funded by the EU Horizon Europe Framework Program under grant agreement 101058236 (HumanTech) and the German Ministry of Education and Research (BMBF) under Grant Agreements 01IW20002 (SocialWear) and 01IW20009 (RACKET)." } ]
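As a side note to the noise-robustness study above, a minimal NumPy sketch of the two depth perturbations it describes (zero-mean Gaussian noise and random pixel dropping) is given below; the function name, parameter values, and the convention that invalid depth equals zero are illustrative assumptions, not the authors' code.

```python
# Illustrative depth perturbations matching the "Impact of noisy depth" experiments.
import numpy as np

def perturb_depth(depth, sigma=0.01, drop_ratio=0.2, rng=None):
    # depth: hypothetical (H, W) float array in meters; 0 marks invalid pixels.
    rng = np.random.default_rng() if rng is None else rng
    noisy = depth + rng.normal(0.0, sigma, size=depth.shape)  # zero-mean Gaussian noise
    keep = rng.random(depth.shape) >= drop_ratio              # randomly drop a fraction of pixels
    noisy = np.where(keep, noisy, 0.0)
    noisy[depth <= 0] = 0.0                                   # originally invalid pixels stay invalid
    return noisy
```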
In this work, we present a novel dense-correspondence method for 6DoF object pose estimation from a single RGB-D image. While many existing data-driven methods achieve impressive performance, they tend to be time-consuming due to their reliance on rendering-based refinement approaches. To circumvent this limitation, we present HiPose, which establishes 3D-3D correspondences in a coarse-to-fine manner with a hierarchical binary surface encoding. Unlike previous dense-correspondence methods, we estimate the correspondence surface by employing point-to-surface matching and iteratively constricting the surface until it becomes a correspondence point while gradually removing outliers. Extensive experiments on public benchmarks LM-O, YCB-V, and T-LESS demonstrate that our method surpasses all refinement-free methods and is even on par with expensive refinement-based approaches. Crucially, our approach is computationally efficient and enables real-time critical applications with high accuracy requirements.
HiPose: Hierarchical Binary Surface Encoding and Correspondence Pruning for RGB-D 6DoF Object Pose Estimation
[ { "figure_caption": "Figure 1 .1Figure 1. Illustration of HiPose : (a) For every point cloud with color and normals as inputs, our network outputs a binary code to establish a correspondence to a sub-surface on the object. (b) With the coarse level matching, we estimate an initial pose posem. The additional n bits are used for iterative fine-grained matching and pose estimation and gradual outlier rejection. Note that this process is render-free and RANSAC-free, ensuring fast performance of our algorithm.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .Figure 3 .23Figure 2. Overview : Our framework uses an RGB-D image crop as input and predicts an m + n bits binary code using a full-flow bidirectional fusion network for every point cloud patch on the target object. The first m bit codes point to a relatively coarse surface (blue line), while the final n bit codes are used n times as indicators to perform hierarchical surface partitioning (red lines). Through the iterative process of identifying fine-grained point-to-surface correspondences, the algorithm finally yields an accurately estimated pose. The colored patches on the model represent different surface partitions.", "figure_data": "", "figure_id": "fig_1", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "6. 1 .1Figure5illustrates the architecture of HiPose. The network comprises two branches, namely the RGB branch and the point cloud branch. In the pre-stage of the RGB branch, a cropped image with dimensions of 3×H×W is transformed into RGB embedding with dimensions of 128×H/4×W/4.Here, H and W represent the height and width of the input cropped RGB image, respectively, and by default, both are set to 256. On the other hand, the point cloud branch maps an input with 9 channels, consisting of point coordinates, color, and normal information, to a feature. The parameter npts is set to 2730, which denotes the number of randomly sampled valid input point clouds.Each of these branches is constructed with interconnected encoders and decoders, serving the fundamental purpose of feature extraction, feature transformation, and feature fusion.The process of feature extraction aims to extract highlevel features and adjust the channel dimension. In accordance with FFB6D[17], RandLA-Net [21] is employed to handle point cloud features. Furthermore, pre-trained ConvNeXt-B [38] and PSPNet [71] models are incorporated into the encoder and decoder blocks.Feature transformation refers to the conversion between the features of the RGB branch and the point cloud branch, facilitated through coordinate correspondence. Specifically, as demonstrated in Figure6, the point cloud branch feature can be generated by aggregating features from the nearest features in the RGB branch. Likewise, the RGB branch feature can be generated by interpolating the feature from the point cloud branch. This enables bidirectional transformation between the features of the RGB branch and the point cloud branch.The process of feature fusion is executed using a Convolutional Neural Network (CNN). The new RGB feature is generated by concatenating the RGB feature with the transformed RGB feature, and the same procedure is applied to the depth feature. 
Further details regarding the feature fusion process can be observed in Figure7.Finally, a straightforward convolution-based head is employed to predict the visible mask and code for the selected npts points.", "figure_data": "", "figure_id": "fig_2", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Ablation study on LM-O [3]. We conduct several ablation studies, comparing our proposed method to a RANSACbased approach and exploring the impact of hyperparameters on the results. The results are presented in terms of average recall of ADD(-S) in %.", "figure_data": "optimizer with a fixed", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "1, Table 2 and Figure 4. Outlier removal metrics. We evaluate precision at each iteration to validate the outlier removal process.", "figure_data": "Iteration Step1234567finalprecision(%) 66.1 67.4 70.3 73.3 73.8 75.9 75.9 76.110.8 0.9ADD Metric0.7ape drillercan duckcat eggboxglueholepunchermean5 6 7 8 9 10 11 12 13 14 15 16Default Initial Bit", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "ZebraPose [50] PR-GCN [74] FFB6D [17] RCVPose [62] DFTr [75] Ours Comparison with State of the Art on LM-O [3]. We report the Recall of ADD(-S) in % and compare with state of the art. (*) denotes symmetric objects.", "figure_data": "ape46.857.940.247.260.364.178.0can90.895.076.285.292.596.198.9cat40.560.657.045.750.252.287.5driller82.694.883.281.478.295.897.8duck46.964.530.053.952.172.385.3eggbox*54.270.968.270.281.275.380.3glue*75.888.767.060.172.179.394.1holepuncher60.183.097.285.975.286.895.2mean62.276.965.066.270.277.789.6MethodAUC of ADD-SAUC of ADD(-S)RGBSO-Pose [7] GDR-Net [59] ZebraPose [50]90.9 91.6 90.183.9 84.4 85.3PVN3D [16]95.591.8RGB-DRCVPose [62] FFB6D [17] DFTr [75]96.6 96.6 96.795.2 92.7 94.4Ours97.695.5", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison", "figure_data": "", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison to leading methods of the BOP Challenge [54] that trained on synthetic PBR data only w.r.t. BOP score. (∼) denotes similar to CIR[35]. * averaged over LM-O and YCB-V only as T-LESS results are not provided for this method.", "figure_data": "MethodRefinement LM-O YCB-V T-LESS mean time(sec)FFB6D-CVPR21-PBR-NoRefinement [17]-68.775.8-72.3 *0.19 *RCVPose 3D SingleModel VIVO PBR [62]ICP [47]72.984.370.876.01.33SurfEmb-PBR-RGBD [14]custom [14]76.079.982.879.69.04RADet+PFA-PBR-RGBD [13]PFA [22]79.782.680.280.82.63GDRNPP-PBR-RGBD-MModel [37]∼CIR [35]77.590.685.284.46.37HiPose (ours)-79.990.783.384.60.16", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "", "figure_data": "# points468102030500 iterations 88.7 89.0 89.0 89.18988.71000 iterations 89.0 89.1 88.9 89.1 89.1 88.81500 iterations 89.0 89.1 89.1 89.1 88.9 88.86.", "figure_id": "tab_5", "figure_label": ".", "figure_type": "table" }, { "figure_caption": "Test RANSAC+Kabsch parameters on LM-O[3]. We tune the number of correspondences in each RANSAC iteration and the number of RANSAC iterations with a maximum correspondence points-pair distance of 2cm. The results are presented with average recall of ADD(-S) in %. According to the table, using 10 correspondences in each RANSAC iteration yields the best results. However, the results achievable with RANSAC+Kabsch are inferior to those obtained with our hierarchical approach.", "figure_data": "6.3. 
Impact of ICPExperiment SetupADD(-S) in %ZebraPose (Trained only with pbr images)63.5ZebraPose (pbr) + ICP83.9ZebraPose (pbr) + RANSAC Kabsch87.0ZebraPose (pbr) + RANSAC Kabsch + ICP87.0HiPose (ours)89.6HiPose + ICP refinement (with ground truth object mask)89.3", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Table. 8, showing that HiPose is quite robust to missing measurements in the depth Evaluate the impact of noisy depth. When introducing noise or randomly omitting data points in the depth map, HiPose still performs admirably under such circumstances.", "figure_data": "Experiment SetupADD(-S) in %HiPose (ours)89.6Depth with Zero Mean Gaussian Noise with Sigma 0.0189.0Random drop 20% points in Depth Map89.5", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Network Architecture : The network comprises four encoder blocks and four decoder blocks. Each block performs upsampling or downsampling of the input, processes the RGB and point features, and subsequently merges them except the last decoder block. In the RGB image branch, we employ ConvNeXt blocks[38] as the encoders and PSPNet blocks [71] as the decoders. As for the point cloud branch, we utilize modules derived from Randla[21]. Here, 'bsz' refers to the batch size, 'npts' denotes the number of points, and 'H/W' represents the height and width of the image. Comparison with State of the Art on YCB-V. We report the Average Recall w.r.t AUC of ADD(-S) and AUC of ADD-S in % and compare with state of the art. (*) denotes symmetric objects.", "figure_data": "rgb [bsz,3,H,W]xyzRGBnorm [bsz,9,npts]CNNPreStagesrndlaPreStagesdownsampledownsamplergb->pointpoint->rgb", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" } ]
Yongliang Lin; Yongzhi Su; Praveen Nathan; Sandeep Inuganti; Yan Di; Martin Sundermeyer; Fabian Manhardt; Didier Stricker; Jason Rambach; Yu Zhang
[ { "authors": "Yasuhiro Aoki; Hunter Goforth; Rangaprasad Arun Srivatsan; Simon Lucey", "journal": "", "ref_id": "b0", "title": "Pointnetlk: Robust & efficient point cloud registration using pointnet", "year": "2019" }, { "authors": "Matan Atzmon; Haggai Maron; Yaron Lipman", "journal": "", "ref_id": "b1", "title": "Point convolutional neural networks by extension operators", "year": "2018" }, { "authors": "Eric Brachmann; Frank Michel; Alexander Krull; Michael Ying Yang; Stefan Gumhold; Carsten Rother", "journal": "", "ref_id": "b2", "title": "Uncertainty-driven 6d pose estimation of objects and scenes from a single rgb image", "year": "2016" }, { "authors": "Dingding Cai; Janne Heikkilä; Esa Rahtu", "journal": "", "ref_id": "b3", "title": "Ove6d: Object viewpoint encoding for depth-based 6d object pose estimation", "year": "2022" }, { "authors": "Wei Chen; Xi Jia; Jin Hyung; Jinming Chang; Ales Duan; Leonardis", "journal": "", "ref_id": "b4", "title": "G2l-net: Global to local network for real-time 6d pose estimation with embedding vector features", "year": "2020" }, { "authors": "Zheng Dang; Lizhou Wang; Yu Guo; Mathieu Salzmann", "journal": "Springer", "ref_id": "b5", "title": "Learning-based point cloud registration for 6d object pose estimation in the real world", "year": "2022" }, { "authors": "Yan Di; Fabian Manhardt; Gu Wang; Xiangyang Ji; Nassir Navab; Federico Tombari", "journal": "", "ref_id": "b6", "title": "So-pose: Exploiting selfocclusion for direct 6d pose estimation", "year": "2021" }, { "authors": "Thanh-Toan Do; Ming Cai; Trung Pham; Ian Reid", "journal": "", "ref_id": "b7", "title": "Deep-6dpose: Recovering 6d object pose from a single rgb image", "year": "2018" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "", "ref_id": "b8", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Bertram Drost; Markus Ulrich; Nassir Navab; Slobodan Ilic", "journal": "Ieee", "ref_id": "b9", "title": "Model globally, match locally: Efficient and robust 3d object recognition", "year": "2010" }, { "authors": "Ge Gao; Mikko Lauri; Xiaolin Hu; Jianwei Zhang; Simone Frintrop", "journal": "IEEE", "ref_id": "b10", "title": "Cloudaae: Learning 6d object pose regression with on-line data synthesis on point clouds", "year": "2021" }, { "authors": "Zheng Ge; Songtao Liu; Feng Wang; Zeming Li; Jian Sun", "journal": "", "ref_id": "b11", "title": "Yolox: Exceeding yolo series in 2021", "year": "2021" }, { "authors": "Yang Hai; Rui Song; Jiaojiao Li; Mathieu Salzmann; Yinlin Hu", "journal": "", "ref_id": "b12", "title": "Rigidity-aware detection for 6d object pose estimation", "year": "2023" }, { "authors": "Laurvig Rasmus; Anders Haugaard; Buch Glent", "journal": "", "ref_id": "b13", "title": "Surfemb: Dense and continuous correspondence distributions for object pose estimation with learnt surface embeddings", "year": "2021" }, { "authors": "X Kaiming He; Shaoqing Zhang; Jian Ren; Sun", "journal": "", "ref_id": "b14", "title": "Deep residual learning for image recognition", "year": "2015" }, { "authors": "Yisheng He; Wei Sun; Haibin Huang; Jianran Liu; Haoqiang Fan; Jian Sun", "journal": "", "ref_id": "b15", "title": "Pvn3d: A deep point-wise 3d keypoints voting network for 6dof pose estimation", "year": "2020" }, { "authors": "Yisheng He; Haibin 
Huang; Haoqiang Fan; Qifeng Chen; Jian Sun", "journal": "", "ref_id": "b16", "title": "Ffb6d: A full flow bidirectional fusion network for 6d pose estimation", "year": "2021" }, { "authors": "Tomáš Hodan; Pavel Haluza; Štepán Obdržálek; Jiri Matas; Manolis Lourakis; Xenophon Zabulis", "journal": "IEEE", "ref_id": "b17", "title": "T-less: An rgbd dataset for 6d pose estimation of texture-less objects", "year": "2017" }, { "authors": "Tomas Hodan; Frank Michel; Eric Brachmann; Wadim Kehl; Anders Glentbuch; Dirk Kraft; Bertram Drost; Joel Vidal; Stephan Ihrke; Xenophon Zabulis", "journal": "", "ref_id": "b18", "title": "Bop: Benchmark for 6d object pose estimation", "year": "2018" }, { "authors": "Tomas Hodan; Daniel Barath; Jiri Matas", "journal": "", "ref_id": "b19", "title": "Epos: Estimating 6d pose of objects with symmetries", "year": "2020" }, { "authors": "Qingyong Hu; Bo Yang; Linhai Xie; Stefano Rosa; Yulan Guo; Zhihua Wang; Niki Trigoni; Andrew Markham", "journal": "", "ref_id": "b20", "title": "Randla-net: Efficient semantic segmentation of large-scale point clouds", "year": "2020" }, { "authors": "Yinlin Hu; Pascal Fua; Mathieu Salzmann", "journal": "Springer", "ref_id": "b21", "title": "Perspective flow aggregation for data-limited 6d object pose estimation", "year": "2022" }, { "authors": "Wadim Kehl; Fabian Manhardt; Federico Tombari; Slobodan Ilic; Nassir Navab", "journal": "", "ref_id": "b22", "title": "Ssd-6d: Making rgb-based 3d detection and 6d pose estimation great again", "year": "2017" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b23", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Rebecca König; Bertram Drost", "journal": "Springer", "ref_id": "b24", "title": "A hybrid approach for 6dof pose estimation", "year": "2020" }, { "authors": "Jason Ku; Melissa Mozifian; Jungwook Lee; Ali Harakeh; Steven L Waslander", "journal": "IEEE", "ref_id": "b25", "title": "Joint 3d proposal generation and object detection from view aggregation", "year": "2018" }, { "authors": "Yann Labb; ' ; Lucas Manuelli; Arsalan Mousavian; Stephen Tyree; Stan Birchfield; Jonathan Tremblay; Justin Carpentier; Mathieu Aubry; Dieter Fox; Josef Sivic", "journal": "", "ref_id": "b26", "title": "Megapose: 6d pose estimation of novel objects via render & compare", "year": "2022" }, { "authors": "Vincent Lepetit; Francesc Moreno-Noguer; Pascal Fua", "journal": "International journal of computer vision", "ref_id": "b27", "title": "Epnp: An accurate o (n) solution to the pnp problem", "year": "2009" }, { "authors": "Chi Li; Jin Bai; Gregory D Hager", "journal": "", "ref_id": "b28", "title": "A unified framework for multi-view multi-class object pose estimation", "year": "2018" }, { "authors": "Yangyan Li; Rui Bu; Mingchao Sun; Wei Wu; Xinhan Di; Baoquan Chen", "journal": "Advances in neural information processing systems", "ref_id": "b29", "title": "Pointcnn: Convolution on x-transformed points", "year": "2018" }, { "authors": "Zhujun Li; Ioannis Stamos", "journal": "", "ref_id": "b30", "title": "Depth-based 6dof object pose estimation using swin transformer", "year": "2023" }, { "authors": "Zhigang Li; Gu Wang; Xiangyang Ji", "journal": "", "ref_id": "b31", "title": "Cdpn: Coordinates-based disentangled pose network for real-time rgb-based 6-dof object pose estimation", "year": "2019" }, { "authors": "Ming Liang; Bin Yang; Shenlong Wang; Raquel Urtasun", "journal": "", "ref_id": "b32", "title": "Deep continuous fusion for multi-sensor 3d 
object detection", "year": "2018" }, { "authors": "Xiao Lin; Deming Wang; Guangliang Zhou; Chengju Liu; Qijun Chen", "journal": "", "ref_id": "b33", "title": "Transpose: 6d object pose estimation with geometry-aware transformer", "year": "2023" }, { "authors": "Lahav Lipson; Zachary Teed; Ankit Goyal; Jia Deng", "journal": "", "ref_id": "b34", "title": "Coupled iterative refinement for 6d multi-object pose estimation", "year": "2022" }, { "authors": "Xingyu Liu; Rico Jonschkowski; Anelia Angelova; Kurt Konolige", "journal": "", "ref_id": "b35", "title": "Keypose: Multi-view 3d labeling and keypoint estimation for transparent objects", "year": "2020" }, { "authors": "Xingyu Liu; Ruida Zhang; Chenyangguang Zhang; Bowen Fu; Jiwen Tang; Xiquan Liang; Jingyi Tang; Xiaotian Cheng; Yukang Zhang; Gu Wang; Xiangyang Ji", "journal": "", "ref_id": "b36", "title": "Gdrnpp", "year": "2022" }, { "authors": "Zhuang Liu; Hanzi Mao; Chaozheng Wu; Christoph Feichtenhofer; Trevor Darrell; Saining Xie", "journal": "", "ref_id": "b37", "title": "A convnet for the 2020s", "year": "2022" }, { "authors": "Fabian Manhardt; Wadim Kehl; Adrien Gaidon", "journal": "", "ref_id": "b38", "title": "Roi-10d: Monocular lifting of 2d detection to 6d pose and metric shape", "year": "2019" }, { "authors": "Markus Oberweger; Rad Mahdi; Vincent Lepetit", "journal": "", "ref_id": "b39", "title": "Making deep heatmaps robust to partial occlusions for 3d object pose estimation", "year": "2018" }, { "authors": "Kiru Park; Timothy Patten; Markus Vincze", "journal": "", "ref_id": "b40", "title": "Pix2pose: Pixel-wise coordinate regression of objects for 6d pose estimation", "year": "2019" }, { "authors": "Sida Peng; Yuan Liu; Qixing Huang; Xiaowei Zhou; Hujun Bao", "journal": "", "ref_id": "b41", "title": "Pvnet: Pixel-wise voting network for 6dof pose estimation", "year": "2019" }, { "authors": "Charles Ruizhongtai; Qi ; Li Yi; Hao Su; Leonidas J Guibas", "journal": "Advances in neural information processing systems", "ref_id": "b42", "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "year": "2017" }, { "authors": "Wei Charles R Qi; Chenxia Liu; Hao Wu; Leonidas J Su; Guibas", "journal": "", "ref_id": "b43", "title": "Frustum pointnets for 3d object detection from rgbd data", "year": "2018" }, { "authors": "Jason Rambach; Alain Pagani; Didier Stricker", "journal": "IEEE", "ref_id": "b44", "title": "poster] augmented things: Enhancing ar applications leveraging the internet of things and universal 3d object tracking", "year": "2017" }, { "authors": "Fred Rothganger; Svetlana Lazebnik; Cordelia Schmid; Jean Ponce", "journal": "International journal of computer vision", "ref_id": "b45", "title": "3d object modeling and recognition using local affine-invariant image descriptors and multi-view spatial constraints", "year": "2006" }, { "authors": "Szymon Rusinkiewicz; Marc Levoy", "journal": "", "ref_id": "b46", "title": "Efficient variants of the icp algorithm", "year": "2001" }, { "authors": "Yongzhi Su; Jason Rambach; Nareg Minaskan; Paul Lesur; Alain Pagani; Didier Stricker", "journal": "IEEE", "ref_id": "b47", "title": "Deep multi-state object pose estimation for augmented reality assembly", "year": "2019" }, { "authors": "Yongzhi Su; Jason Rambach; Alain Pagani; Didier Stricker", "journal": "Sensors", "ref_id": "b48", "title": "Synpo-net-accurate and fast cnn-based 6dof object pose estimation using synthetic training", "year": "2021" }, { "authors": "Yongzhi Su; Mahdi Saleh; Torben Fetzer; 
Jason Rambach; Nassir Navab; Benjamin Busam; Didier Stricker; Federico Tombari", "journal": "", "ref_id": "b49", "title": "Zebrapose: Coarse to fine surface encoding for 6dof object pose estimation", "year": "2022" }, { "authors": "Yongzhi Su; Yan Di; Guangyao Zhai; Fabian Manhardt; Jason Rambach; Benjamin Busam; Didier Stricker; Federico Tombari", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b50", "title": "Opa-3d: Occlusion-aware pixel-wise aggregation for monocular 3d object detection", "year": "2023" }, { "authors": "Martin Sundermeyer; Zoltan-Csaba Marton; Maximilian Durner; Manuel Brucker; Rudolph Triebel", "journal": "", "ref_id": "b51", "title": "Implicit 3d orientation learning for 6d object detection from rgb images", "year": "2018" }, { "authors": "Martin Sundermeyer; Maximilian Durner; Yen En; Zoltan-Csaba Puang; Narunas Marton; Kai O Vaskevicius; Rudolph Arras; Triebel", "journal": "", "ref_id": "b52", "title": "Multi-path learning for object pose estimation across domains", "year": "2020" }, { "authors": "Martin Sundermeyer; Tomás Hodan; Yann Labbé; Gu Wang; Eric Brachmann; Bertram Drost; Carsten Rother; Juan E Sala Matas", "journal": "", "ref_id": "b53", "title": "Bop challenge 2022 on detection, segmentation and pose estimation of specific rigid objects", "year": "2023" }, { "authors": "Chunhua Zhi Tian; Hao Shen; Tong Chen; He", "journal": "", "ref_id": "b54", "title": "Fcos: Fully convolutional one-stage object detection", "year": "2019" }, { "authors": "Shinji Umeyama", "journal": "IEEE Transactions on Pattern Analysis & Machine Intelligence", "ref_id": "b55", "title": "Least-squares estimation of transformation parameters between two point patterns", "year": "1991" }, { "authors": "Kentaro Wada; Edgar Sucar; Stephen James; Daniel Lenton; Andrew J Davison", "journal": "", "ref_id": "b56", "title": "Morefusion: Multi-object reasoning for 6d pose estimation from volumetric fusion", "year": "2020" }, { "authors": "Chen Wang; Danfei Xu; Yuke Zhu; Roberto Martín-Martín; Cewu Lu; Li Fei-Fei; Silvio Savarese", "journal": "", "ref_id": "b57", "title": "Densefusion: 6d object pose estimation by iterative dense fusion", "year": "2019" }, { "authors": "Gu Wang; Fabian Manhardt; Federico Tombari; Xiangyang Ji", "journal": "", "ref_id": "b58", "title": "Gdr-net: Geometry-guided direct regression network for monocular 6d object pose estimation", "year": "2021" }, { "authors": "Yue Wang; Justin M Solomon", "journal": "", "ref_id": "b59", "title": "Deep closest point: Learning representations for point cloud registration", "year": "2019" }, { "authors": "Yue Wang; Justin M Solomon", "journal": "Advances in neural information processing systems", "ref_id": "b60", "title": "Prnet: Self-supervised learning for partial-to-partial registration", "year": "2019" }, { "authors": "Yangzheng Wu; Mohsen Zand; Ali Etemad; Michael Greenspan", "journal": "Springer", "ref_id": "b61", "title": "Vote from the center: 6 dof pose estimation in rgb-d images by radial keypoint voting", "year": "2022" }, { "authors": "Yu Xiang; Tanner Schmidt; Venkatraman Narayanan; Dieter Fox", "journal": "", "ref_id": "b62", "title": "Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes", "year": "2018" }, { "authors": "Danfei Xu; Dragomir Anguelov; Ashesh Jain", "journal": "", "ref_id": "b63", "title": "Pointfusion: Deep sensor fusion for 3d bounding box estimation", "year": "2018" }, { "authors": "Jian Zi; Gim Yew; Lee Hee", "journal": "", "ref_id": "b64", "title": 
"Rpm-net: Robust point matching using learned features", "year": "2020" }, { "authors": "Wentao Yuan; Benjamin Eckart; Kihwan Kim; Varun Jampani; Dieter Fox; Jan Kautz", "journal": "Springer", "ref_id": "b65", "title": "Deepgmr: Learning latent gaussian mixture models for registration", "year": "2020" }, { "authors": "Sergey Zakharov; Ivan Shugurov; Slobodan Ilic", "journal": "", "ref_id": "b66", "title": "Dpod: 6d pose object detector and refiner", "year": "2019" }, { "authors": "Guangyao Zhai; Xiaoni Cai; Dianye Huang; Yan Di; Fabian Manhardt; Federico Tombari; Nassir Navab; Benjamin Busam", "journal": "", "ref_id": "b67", "title": "Sg-bot: Object rearrangement via coarse-tofine robotic imagination on scene graphs", "year": "2023" }, { "authors": "Guangyao Zhai; Dianye Huang; Shun-Cheng Wu; Hyunjun Jung; Yan Di; Fabian Manhardt; Federico Tombari; Nassir Navab; Benjamin Busam", "journal": "IEEE", "ref_id": "b68", "title": "Monograspnet: 6-dof grasping with a single rgb image", "year": "2023" }, { "authors": "Chenyangguang Zhang; Yan Di; Ruida Zhang; Guangyao Zhai; Fabian Manhardt; Federico Tombari; Xiangyang Ji", "journal": "", "ref_id": "b69", "title": "Ddf-ho: Hand-held object reconstruction via conditional directed distance field", "year": "2023" }, { "authors": "Hengshuang Zhao; Jianping Shi; Xiaojuan Qi; Xiaogang Wang; Jiaya Jia", "journal": "", "ref_id": "b70", "title": "Pyramid scene parsing network", "year": "2017" }, { "authors": "Wanqing Zhao; Shaobo Zhang; Ziyu Guan; Wei Zhao; Jinye Peng; Jianping Fan", "journal": "", "ref_id": "b71", "title": "Learning deep network for detecting 3d object keypoints and 6d poses", "year": "2020" }, { "authors": "Guangliang Zhou; Yi Yan; Deming Wang; Qijun Chen", "journal": "IEEE Transactions on Multimedia", "ref_id": "b72", "title": "A novel depth and color feature fusion framework for 6d object pose estimation", "year": "2020" }, { "authors": "Guangyuan Zhou; Huiqun Wang; Jiaxin Chen; Di Huang", "journal": "", "ref_id": "b73", "title": "Pr-gcn: A deep graph convolutional network with point refinement for 6d pose estimation", "year": "2021" }, { "authors": "Jun Zhou; Kai Chen; Linlin Xu; Qi Dou; Jing Qin", "journal": "", "ref_id": "b74", "title": "Deep fusion transformer network with weighted vector-wise keypoints voting for robust 6d object pose estimation", "year": "2023" }, { "authors": "Qian-Yi Zhou; Jaesik Park; Vladlen Koltun", "journal": "", "ref_id": "b75", "title": "Open3d: A modern library for 3d data processing", "year": "2018" }, { "authors": " Bsz", "journal": "", "ref_id": "b76", "title": "", "year": "1024" } ]
[ { "formula_coordinates": [ 4, 308.86, 509.62, 236.25, 37.63 ], "formula_id": "formula_0", "formula_text": "p c ∈ R d×1 [0,1] as p c = 1 d×1 -|ĉ -ĉ|.(1)" }, { "formula_coordinates": [ 5, 183.12, 491.23, 76.41, 14.56 ], "formula_id": "formula_1", "formula_text": "g m = 1 M M i=1 v i ." }, { "formula_coordinates": [ 5, 111.72, 679.4, 174.64, 16.68 ], "formula_id": "formula_2", "formula_text": "l = min vi || Rit v i + tit -P||(2)" }, { "formula_coordinates": [ 5, 341.07, 144.34, 204.04, 13.17 ], "formula_id": "formula_3", "formula_text": "[ Rit | tit ] = K({P k , g k m+it }, P k ∈ inliers it(3)" }, { "formula_coordinates": [ 5, 375.81, 563.78, 165.43, 9.65 ], "formula_id": "formula_4", "formula_text": "Loss = L mask + αL code (4" }, { "formula_coordinates": [ 5, 541.24, 564.1, 3.87, 8.64 ], "formula_id": "formula_5", "formula_text": ")" } ]
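The pose-fitting loop summarized by Eqs. (2)-(3) above, a Kabsch/Umeyama fit on the current inliers followed by distance-based outlier rejection, can be sketched as follows. This is a strongly simplified stand-in for the hierarchical, bit-wise surface narrowing: the fixed threshold schedule and all names are made-up placeholders, not the released implementation.

```python
# Simplified sketch of iterative Kabsch fitting with outlier pruning on 3D-3D
# correspondences (camera-frame points cam_pts and matched object points obj_pts).
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst, no scaling."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # fix reflections
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

def iterative_pose(obj_pts, cam_pts, thresholds=(0.04, 0.02, 0.01)):
    inliers = np.ones(len(obj_pts), dtype=bool)
    R, t = kabsch(obj_pts, cam_pts)
    for th in thresholds:                       # placeholder schedule of shrinking thresholds
        resid = np.linalg.norm((obj_pts @ R.T + t) - cam_pts, axis=1)
        inliers = inliers & (resid < th)        # reject correspondences with large residuals
        R, t = kabsch(obj_pts[inliers], cam_pts[inliers])
    return R, t, inliers
```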
10.1503/jpn.150099
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13" ], "table_ref": [], "text": "Mental illnesses deeply impact an individual's thoughts, perceptions, and self-awareness [1]. Schizophrenia is one of the severe mental disorders that substantially burden patients and society. It exhibits positive symptoms like hallucinations (often auditory), delusions such as persecution and passivity, and disrupted thinking. Schizophrenia also involves negative symptoms, including social withdrawal, reduced motivation, and emotional numbness [2]. Schizophrenia is a multifaceted disorder stemming from genetics, environment, or their interplay. Variables such as birth complications, childhood trauma, substance abuse, and migration have been identified as potential risk factors contributing to the development of this condition [3].\nDisruptions of the circadian and sleep cycles are commonly seen in people with schizophrenia. These disturbances affect approximately 80% of patients with the disorder [4]. Patients frequently experience disturbed sleep patterns, especially insomnia, even prior to the onset of schizophrenia. These patients experience insomnia due to a variety of circumstances, including a lack of regularity and activity during the day, an obsession with commencing and staying asleep, and the emergence of uncomfortable thoughts or hallucinations when trying to fall asleep [5]. Given the significance of day and night activities as crucial classifiers in distinguishing between Schizophrenia patients and controls, the implementation of temporal segmentation can provide valuable insights for the early diagnosis and monitoring of Schizophrenia. The American Psychiatric Association's DSM-5 manual is often utilized for diagnosing Schizophrenia [6]. This method is time-consuming as it requires frequent consultations with mental health professionals. Alternatively, extensive research has been conducted using EEG signal analysis with machine learning and deep learning models for schizophrenia diagnosis [7], [8]. EEG signals are recorded by positioning electrodes on a person's scalp to measure brain activity [9]. However, acquiring these measurements over an extended period, particularly at night, can be challenging. Collecting motor activity data (actigraphy data) through smartwatches that have motion sensors is feasible over an extended time, which makes further temporal segmentation easier.\nActigraph devices track variables like overall sleep time, interrupted sleep, abrupt awakenings, and efficiency of sleep. These measures have been used to identify various health-related outcomes [10]. This actigraphy data has been utilized over the past decade to explore different diagnosis methods for mental illnesses like depression, Parkinson's disease, and Alzheimer's [11].\nThe treatment and diagnosis of mental disorders have traditionally incorporated an examination of sleep-wake patterns [12]. Studies have also explored irregularities within the 24-hour circadian rhythms as potential diagnostic markers for mental illnesses [13], [14]. This study primarily focuses on using motor activity data acquired via smartwatches to implement temporal segmentation and determine its effectiveness. This study aims to establish an optimized data segmentation approach, eliminating inefficiencies associated with improper segmentation." }, { "figure_ref": [], "heading": "II. 
RELATED WORK", "publication_ref": [ "b14", "b15", "b16", "b17", "b18" ], "table_ref": [], "text": "The dataset used in this research was created by Jakobsen et al. [15] They analyzed this dataset with three features and without any segmentation. Fellipe et al. created nine statistical features and segmented the day into three parts for this dataset. They tested this dataset on Five machine learning models along with one deep learning model mainly to improve accuracy and precision [16]. Boeker et al. used Hidden Markov model specifications [17]. Misgar et al. used UMAP features for the analysis of this dataset [18]. Sheha et al. worked on the feature engineering of the dataset of patients with depression and Schizophrenia. They worked on daily and quarter-daily segments and suggested a deeper analysis of shorter time intervals for future work [19]. As far as we are aware, the study highlighting comparative analysis of 8 different time segmentation patterns has not been performed in the scenario of Schizophrenia. We conducted a comparative study to optimize and identify one best-performing time interval." }, { "figure_ref": [], "heading": "III. DATASET DETAILS", "publication_ref": [ "b14" ], "table_ref": [], "text": "This research uses motor activity data collected through the wristwatches called Actiwatch (model AW4) developed by Cambridge Neurotechnology Ltd, England. This device contains a particular sensor (piezoelectric accelerometer) that tracks movement intensity, amount, and duration in different directions (x, y, and z axes). This device recorded movements with a frequency of 32 Hz. It generated an integer value proportional to the movement's intensity in a one-minute interval [15].\nThe dataset has actigraphy data of 22 individuals diagnosed with Schizophrenia. These individuals were under long-term care in a psychiatric ward at Haukeland University Hospital. Among these patients, 19 were males and 3 were females, with an average age of 46.9 years. Their initial hospitalization occurred at an average age of 24.8 years. The diagnosis of these patients was conducted by psychiatrists using the DSM-IV manual. Specifically, 17 patients were diagnosed with paranoid Schizophrenia, while the subtype for 5 patients was not specified. The current DSM-5 manual does not recognize schizophrenia subtypes.\nThe dataset includes data from 32 healthy individuals who served as a control group. None of them had a history of any psychiatric disorder. There were 20 females and 12 males in this group, with 38.2 years as the mean age. The patients and the control group used the actigraphy device for an average of 12.7 days.\nThe PSYKOSE dataset can be accessed through the following link: https://osf.io/dgjzu/ Figure 1. Hourly daytime mean of motor activity for a patient and control. There is no consistent pattern in the mean activities hence the temporal segmentation is needed.\nFigure . 1 displays the daytime mean activity levels of a patient and control for 13 days. There is a variation in the pattern of mean activity, with some days showing higher mean activity for patients while controls exhibit higher activity levels on other days. This fluctuation presents a challenge in the classification process. Temporal segmentation plays a critical role in identifying distinct patterns between the control group and the group of schizophrenia patients, improving the overall accuracy of the classification process." }, { "figure_ref": [], "heading": "IV. 
DATA SEGMENTATION AND MODELING", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Data cleaning and processing", "publication_ref": [], "table_ref": [], "text": "Prioritizing temporal segmentation was a critical aspect. Only the data available for a full day (24 hours) was considered to assure the temporal pattern's integrity. If the data was only available for a few hours on any particular day, it was discarded. The dataset was structured into distinct columns, namely date, timestamp, and class, where a class value of \"1\" represented patients, and \"0\" indicated control subjects. This column organization helped develop a more streamlined segmentation process." }, { "figure_ref": [], "heading": "B. Segmentation of the data", "publication_ref": [], "table_ref": [], "text": "As shown in Figure . 1, analyzing the data as a whole was not fruitful for classification. Therefore, the data was divided into eight distinct patterns to evaluate classification efficiency. In the case of the two-part segmentation, we based our division on the intensity of motor activity observed in both patients and control subjects. The period from 08:00 to 20:00 exhibited high activity levels and was classified as \"day.\" On the other hand, the period from 20:00 to 08:00 was categorized as \"night\" due to reduced activity associated with sleep. This night segment was particularly crucial for identifying disruptions in patients' sleep patterns. Furthermore, to determine the time intervals for the 3, 4, 6, 8 and 12part segmentation, we precisely analyzed the patterns in the differences in the motor activity data of the patient and control group and symmetrically divided the time intervals." }, { "figure_ref": [], "heading": "C. Feature extraction", "publication_ref": [ "b19" ], "table_ref": [], "text": "Statistical features play a crucial role in quantitatively describing, comparing, and analyzing the motor activity data [20]. They contribute to pattern recognition and dimensionality reduction, which consequently helps construct models and train the data.\nFigure . 2 represents the segmentation of a day in 3,4, and 6 intervals. The minimum values of the data show striking similarities between patient and control groups within each segment. This doesn't contribute to the classification, therefore it is discarded. However, certain statistical features, including mean, median, and maximum value, show variations between patients and controls in these segments. These are included in the analysis.\nBased on the data distribution, we used the following sixteen statistical features for the classification-Mean, Median, Standard Deviation (Std Dev), Proportion of Zeros(Indicates the percentage of zero values in the data), Skewness(Describes the asymmetry of the data distribution), Kurtosis(Gives insights about the shape of the data distribution), Maximum (Max), Median Absolute Deviation(MAD-Quantifies the variability of the data around the median), Interquartile Range(IQR-Range of the middle data values), Coefficient of Variation(CV- The importance of features for the morning and night periods, calculated using LightGBM's gain metric is presented in Figure . 3. The most crucial feature is 'Night-Autocorrelation.' This feature has the highest impact on the model's prediction. Night proportion of zeros and standard deviation also has high significance. 
This is understandable since it signifies the presence of sleep disturbances in patients, distinguishing them from individuals in the control group. Morning autocorrelation and standard deviation also contribute significantly to the prediction for the model, primarily due to the reduced motivation and diminished movement observed in schizophrenia patients in the morning periods." }, { "figure_ref": [], "heading": "D. Machine Learning Modeling and Evaluation Metrics", "publication_ref": [ "b20", "b21", "b22", "b23", "b24", "b25" ], "table_ref": [], "text": "Selecting machine learning models and metrics is of utmost importance for ensuring accurate classification and clinical relevance. The models discern patterns, relationships, and distinctions essential for classification by processing and analyzing the statistical features. Initially, we had a set of ten statistical features to encapsulate the characteristics of a 24hour day. Upon temporal segmentation into 2, 3, 4, 6, 8 and 12 parts, the feature count correspondingly expanded to 32, 48, 64, 96, 128 and 192, respectively.\nIn our study, we employed seven machine learning models. LightGBM [21] and XGBoost [22] are known for achieving high predictive accuracy, Random Forest for its overfitting prevention [23], Logistic Regression for its interpretability [24], Support Vector Machine, K-nearest neighbours, Decision Trees for their capacity to provide clear insights. We evaluated these models by applying 10-fold cross-validation.\nAs our primary goal is to investigate temporal segmentation, we have strongly emphasized two key metrics: F1-score along with the AUC-ROC (Area Under the Receiver Operating Characteristic Curve). AUC-ROC is critical for measuring the accuracy of our models, enabling us to identify their ability to distinguish between schizophrenia patients and control subjects [25]. On the other hand, the F1 score plays a crucial role in striking a balance between precision and recall [26]. These metrics help us achieve both high accuracy and a wellrounded evaluation of our models' performance as we study the complexities of temporal segmentation." }, { "figure_ref": [], "heading": "E. Hardware and Softwares utilized", "publication_ref": [], "table_ref": [], "text": "In this study, the computations were done using MacBook Air(2017) with macOS Monterey version 12.4. The hardware configuration of the system included 1.8 GHz Dual-Core Intel Core i5 processor, 8 GB RAM and 128 GB SSD storage. The programming language used was Python3 and the code was executed through VScode. Jupyter notebook was utilized to generate the figures." }, { "figure_ref": [], "heading": "V. RESULTS", "publication_ref": [], "table_ref": [], "text": "After the feature engineering and model selection, we generated CSV files for various temporal segmentation scenarios. These scenarios included a full 24-hour day, divisions into 2, 3, 4, 6, 8 and 12 segments within a 24-hour period, and a final scenario with all days of observation (an average of 12.7 days for each patient/control) combined into a single sheet." }, { "figure_ref": [], "heading": "A. Performance of ML Models", "publication_ref": [ "b26", "b27", "b28", "b29" ], "table_ref": [], "text": "Table II displays the AUC ROC and F1 score values resulting from 10-fold cross-validation across various segmentation patterns for the five machine learning models. LightGBM model outperforms the other six ML models in most of the segmentation patterns, consistently yielding higher AUC-ROC and F1 scores. 
Following LightGBM, XGBoost and Random Forest exhibit closely competitive performance across most scenarios. The highest AUC-ROC values were achieved by the LightGBM model when the 24-hour segmentation was divided into 6 and 12 parts.\nWhen all the days of observation were considered together, the AUC-ROC decreased to 0.93. These results highlight the importance of accounting for circadian rhythms and daily patterns in achieving accurate schizophrenia classification.\nSchizophrenia symptom expression exhibits temporal patterns. Some individuals with Schizophrenia may experience symptoms late at night or early in the morning, causing disruptions in sleep and wakefulness [27], [28]. They may exhibit increased motor activity during periods of agitation or psychosis, which occurs more prominently at certain times [29]. On the other hand, reduced motor activity during specific temporal segments might be associated with depressive symptoms or social withdrawal [30]. Understanding these temporal patterns provides valuable insights into the condition." }, { "figure_ref": [], "heading": "C. Diminishing returns for Fine Granular Segmentation", "publication_ref": [], "table_ref": [], "text": "Fine granular segmentation involves breaking down a given time frame into precise intervals. This approach captures every possible variation in motor activity patterns throughout the different parts of the day, assuming that finer temporal divisions would lead to better classification results. However, our observations reveal that fine granular segmentation may not significantly improve classification results, particularly for Schizophrenia. This is evident in the fact that when we divided the data into morning, afternoon, evening, and night patterns, we achieved similar classification results to when we employed a simpler 2-part day and night segmentation. Schizophrenia symptom expression exhibits temporal patterns, but these may not be as fine-grained as dividing the day into 4/6/8/12 parts. Instead, these patterns operate in broader time segments, such as morning or night. Furthermore, fine granular segmentation can introduce noise and complexity into the data without necessarily yielding more meaningful insights. We can achieve accurate schizophrenia classification by focusing on practical, interpretable broader temporal patterns like day and night. This simplification enhances the practicality of our classification approach, making it more accessible for real-world applications." }, { "figure_ref": [], "heading": "VI. CONCLUSION AND FUTURE SCOPE", "publication_ref": [], "table_ref": [], "text": "Our study investigates the extent of temporal segmentation required for distinguishing schizophrenic patients from the control group. It becomes evident that finer-grained segmentation does not enhance the AUC-ROC of the classification, as broader day and night divisions yield comparable results to the 3, 4, 6, 8 or 12-part divisions of the day. The statistical features employed are robust enough to capture critical variations during the broader periods, and the LightGBM model outperforms other machine learning models. This simplification streamlines the segmentation process into two parts, speeding up early diagnosis and close monitoring of schizophrenia patients by focusing only on day/night segmentation and not wasting time and effort on further temporal segmentation. This new study regarding temporal segmentation aims to assist practitioners in creating tools or applications that aid in diagnosis and treatment planning based on temporal motor activity data. 
Future research could explore the determination of optimal temporal segmentation patterns for individualized diagnosis or develop interpretative models that offer insights into the relationship between specific temporal patterns and symptom expression in Schizophrenia. Using findings from this research, personalized wearable devices can be created to optimize efficiency by focusing on capturing and analyzing motor activity data during specific day and night intervals, enabling effective identification of patterns in sleep-wake cycles, all while conserving energy and streamlining data processing. Incorporating multimodal data, such as heart rate along with actigraphy, into the devices could provide a more holistic view of mental health conditions." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENT", "publication_ref": [], "table_ref": [], "text": "We would like to sincerely thank the creators of the 'Psykose' dataset. All the details are available here: https://datasets.simula.no/psykose/. RA would like to thank DBT NNP (BT/PR40236/BTIS/137/51/2022) for funding." } ]
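For practitioners reusing this pipeline, a minimal evaluation sketch matching the protocol of Section IV-D (segment-wise features, LightGBM shown as a representative of the seven classifiers, 10-fold cross-validation, AUC-ROC and F1) might look as follows; X and y are hypothetical placeholders for the feature matrix and the patient/control labels.

```python
# Illustrative 10-fold cross-validated evaluation with AUC-ROC and F1.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

def evaluate(X, y, seed=42):
    # X: (n_samples, n_features) segment-wise statistics; y: 1 = patient, 0 = control.
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    scores = cross_validate(LGBMClassifier(random_state=seed), X, y,
                            cv=cv, scoring=["roc_auc", "f1"])
    return {"auc_roc": float(np.mean(scores["test_roc_auc"])),
            "f1": float(np.mean(scores["test_f1"]))}
```

Depending on how per-day samples are formed, grouping folds by participant (e.g., with scikit-learn's StratifiedGroupKFold) may be worth considering to avoid subject-level leakage; the paper reports plain 10-fold cross-validation.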
Schizophrenia is a complicated mental illness characterized by a broad spectrum of symptoms affecting cognition, behavior, and emotion. The task of identifying reliable biomarkers to classify Schizophrenia accurately continues to be a challenge in the field of psychiatry. We investigate the temporal patterns within motor activity data as a potential key to enhancing the categorization of individuals with Schizophrenia, using a dataset containing motor activity recordings of 22 Schizophrenia patients and 32 control subjects. The dataset contains per-minute motor activity measurements collected for an average of 12.7 days in a row for each participant. We dissect each day into segments (twelve, eight, six, four, three, and two parts) and evaluate their impact on classification. We employ sixteen statistical features within these temporal segments and use them to train seven machine learning models to gain deeper insights. The LightGBM model outperforms the other six models. Our results indicate that temporal segmentation significantly improves the classification, with AUC-ROC = 0.93, F1 score = 0.84 (LightGBM without any segmentation) and AUC-ROC = 0.98, F1 score = 0.93 (LightGBM with segmentation). Distinguishing between diurnal and nocturnal segments amplifies the differences between Schizophrenia patients and controls. However, further subdivisions into smaller time segments do not affect the AUC-ROC significantly. Morning, afternoon, evening, and night partitioning gives similar classification performance to day-night partitioning. These findings are valuable as they indicate that extensive temporal classification beyond distinguishing between day and night does not yield substantial results, offering an efficient approach for further classification, early diagnosis, and monitoring of Schizophrenia.
ChronoPscychosis: Temporal Segmentation and Its Impact on Schizophrenia Classification Using Motor Activity Data
[ { "figure_caption": "Figure 2 .2Figure 2. Segmentation of 24 hours in 3,4 and 6 parts. Segmenting data into distinct parts enhances classification efficiency by recognizing and utilizing the unique patterns within each segment, rather than treating the data as a unified whole.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Importance of features in prediction of the model (LightGBM-10 cross fold)", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure. 44Figure. 4 presents the AUC-ROC values for different temporal patterns. The division of the 24 hour period into segments significantly enhanced the AUC-ROC values. While considering the entire 24-hour cycle, the AUC-ROC was 0.95 in LightGBM model. However, after segmentation into 2, 3, 4, 8 parts, it increased to 0.97. When segmentation was done with 6 and 12 parts, this value was slightly increased to 0.98 but the F1 score is very similar in all 2, 3, 4, 6, 8 and 12 parts segmentation patterns.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. AUC-ROC values for LighGBM and XGboost models for all 8 temporal patterns", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "presents the temporal patterns used in our analysis. Initially, we considered the entire 24-hour day. Subsequently, we systematically divided this time span into 2, 3, 4, 6, 8 and 12 segments to explore the importance of temporal segmentation in the classification process. The segmentation", "figure_data": "Temporal PatternTime Intervals24 hours in 2 Parts08:00 -19:59 and 20:00 -07:5924 hours in 3 Parts00:00 -07:59, 08:00 -15:59, 16:00 -23:5924 hours in 4 Parts00:00 -05:59, 06:00 -11:59, 12:00 -17:59,18:00 -23:5924 hours in 6 Parts00:00 -03:59, 04:00 -07:59, 08:00 -11:59,12:00 -15:59, 16:00 -19:59, 20:00 -23:5924 hours in 8 Parts00:00 -02:59, 03:00 -05:59, 06:00 -08:59,09:00 -11:59, 12:00 -14:59, 15:00 -17:59,18:00-20:59, 21:00-23:5924 hours in 12 Parts00:00 -01:59, 02:00 -03:59, 02:00 -03:59,04:00 -05:59, 06:00 -19:59, 08:00 -09:59,10:00 -11:59, 12:00 -13:59, 14:00 -15:59,16:00 -17:59, 18:00 -19:59, 20:00 -21:59,22:00 -23:5924 hours (Full 1 day)00:00 -24:00All days together12.7 days on averagemethodology involved observation of activity data withinspecific time intervals.", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "ROC AND F1 SCORES FOR DIFFERENT TEMPORAL SEGMENTATION PATTERNS AND MACHINE LEARNING MODELS", "figure_data": "Forest exhibit closely competitive performance across mostscenarios.The highest AUC-ROC values were achieved by the Light-GBM model when the 24-hour segmentation was divided into6 and 12 parts.B. 
Significance of Temporal SegmentationTemporalModelAUCF1 scoreSegmentationROC12 PartsLightgbm0.980.90XGboost0.970.89Random Forest0.960.88Logistic Regression0.940.85SVM0.950.86K-Nearest Neighbors0.920.84Decision Trees0.810.778 PartsLightgbm0.970.90XGboost0.980.89Random Forest0.960.87Logistic Regression0.930.84SVM0.930.83K-Nearest Neighbors0.900.83Decision Trees0.830.816 PartsLightgbm0.980.90XGboost0.970.89Random Forest0.960.87Logistic Regression0.920.83SVM0.930.83K-Nearest Neighbors0.900.82Decision Trees0.820.794 PartsLightgbm0.970.89XGboost0.970.89Random Forest0.960.86Logistic Regression0.900.80SVM0.920.81K-Nearest Neighbors0.890.79Decision Trees0.840.813 PartsLightgbm0.970.90XGboost0.970.88Random Forest0.970.87Logistic Regression0.920.83SVM0.920.83K-Nearest Neighbors0.880.79Decision Trees0.860.832 PartsLightgbm0.970.90XGboost0.970.88Random Forest0.970.88Logistic Regression0.920.83SVM0.920.83K-Nearest Neighbors0.880.79Decision Trees0.860.84Full DayLightgbm0.950.91XGboost0.950.92Random Forest0.950.89Logistic Regression0.910.86SVM0.890.85K-Nearest Neighbors0.850.84Decision Trees0.820.85All DaysLightgbm0.930.84XGboost0.920.90Random Forest0.920.91Logistic Regression0.870.77SVM0.910.85K-Nearest Neighbors0.900.90Decision Trees0.830.85", "figure_id": "tab_1", "figure_label": "II", "figure_type": "table" } ]
Rajendra Pradnya; Jadhav; Raviprasad Aduri
[ { "authors": "A Malla; R Joober; A Garcia", "journal": "Journal of Psychiatry & Neuroscience: JPN", "ref_id": "b0", "title": "Mental illness is like any other medical illness\": a critical examination of the statement and its impact on patient care and society", "year": "2015" }, { "authors": "M M Picchioni; R M Murray", "journal": "BMJ: British Medical Journal", "ref_id": "b1", "title": "Schizophrenia", "year": "2007" }, { "authors": "S A Stilo; R M Murray", "journal": "Current Psychiatry Reports", "ref_id": "b2", "title": "Non-genetic factors in Schizophrenia", "year": "2019" }, { "authors": "A Ashton; A Jagannath", "journal": "Frontiers in Neuroscience", "ref_id": "b3", "title": "Disrupted sleep and circadian rhythms in Schizophrenia and their interaction with dopamine signaling", "year": "2020" }, { "authors": "R Kaskie; B Graziano; F Ferrarelli", "journal": "Nature and Science of Sleep", "ref_id": "b4", "title": "Schizophrenia and sleep disorders: links, risks, and management challenges", "year": "2017" }, { "authors": "", "journal": "Washington", "ref_id": "b5", "title": "Diagnostic and Statistical Manual of Mental Disorders, Dsm-5", "year": "2014" }, { "authors": "J Sun; R Cao; M Zhou; W Hussain; B Wang; J Xue; J Xiang", "journal": "Scientific Reports", "ref_id": "b6", "title": "A hybrid deep neural network for classification of Schizophrenia using EEG Data", "year": "2021" }, { "authors": "A Shoeibi; D Sadeghi; P Moridian; N Ghassemi; J Heras; R Alizadehsani; A Khadem; Y Kong; S Nahavandi; Y.-D Zhang; J M Gorriz", "journal": "Frontiers in Neuroinformatics", "ref_id": "b7", "title": "Automatic diagnosis of Schizophrenia in EEG signals using CNN-LSTM models", "year": "2021" }, { "authors": "J Heo; K Chung", "journal": "Korean Journal of Clinical Laboratory Science", "ref_id": "b8", "title": "EEG recording method for quantitative analysis", "year": "2019" }, { "authors": "B Bjorvatn; S Pallesen", "journal": "Elsevier", "ref_id": "b9", "title": "Irregular sleep-wake rhythm disorder", "year": "2017" }, { "authors": "W Pan; Y Song; S Kwak; S Yoshida; Y Yamamoto", "journal": "Behavioural Neurology", "ref_id": "b10", "title": "Quantitative evaluation of the use of actigraphy for neurological and psychiatric disorders", "year": "2014" }, { "authors": "K Anderson; Bradley", "journal": "Nature and Science of Sleep", "ref_id": "b11", "title": "Sleep disturbance in mental health problems and neurodegenerative disease", "year": "2013" }, { "authors": "W H Walker; Ii; J C Walton; A C Devries; R J Nelson", "journal": "Translational Psychiatry", "ref_id": "b12", "title": "Circadian rhythm disruption and mental health", "year": "2020" }, { "authors": "L D Asarnow; A M Soehner; A G Harvey", "journal": "Current Opinion in Psychiatry", "ref_id": "b13", "title": "Circadian rhythms and psychiatric illness", "year": "2013" }, { "authors": "P Jakobsen", "journal": "", "ref_id": "b14", "title": "PSYKOSE: A Motor Activity Database of Patients with Schizophrenia", "year": "2020" }, { "authors": "F P Ferreira; A Daly", "journal": "", "ref_id": "b15", "title": "ConvNet and machine learning models with feature engineering using motor activity data for schizophrenia classification", "year": "2022" }, { "authors": "M Boeker; M A Riegler; H L Hammer; P Halvorsen; O B Fasmer; P Jakobsen", "journal": "", "ref_id": "b16", "title": "Diagnosing Schizophrenia from Activity Records using Hidden Markov Model Parameters", "year": "2021" }, { "authors": "M M Misgar; M P S Bhatia", "journal": "", "ref_id": "b17", "title": 
"Detection of Depression from IoMT Time Series Data using UMAP features", "year": "2022" }, { "authors": "M A Sheha; M S Mabrouk; A A Sharawy", "journal": "IEEE Access", "ref_id": "b18", "title": "Feature Engineering: Toward Identification of Symptom Clusters of Mental Disorders", "year": "2022" }, { "authors": "K Vaishnavi; U Nikhitha Kamath; B Ashwath Rao; N V Subba Reddy", "journal": "Journal of Physics. Conference Series", "ref_id": "b19", "title": "Predicting mental health illness using machine learning algorithms", "year": "2022" }, { "authors": "Guolin Ke; Qi Meng; Thomas Finley; Taifeng Wang; Wei Chen; Weidong Ma; Qiwei Ye; Tie-Yan Liu", "journal": "Curran Associates Inc", "ref_id": "b20", "title": "LightGBM: a highly efficient gradient boosting decision tree", "year": "2017" }, { "authors": "T Chen; C Guestrin", "journal": "", "ref_id": "b21", "title": "XGBoost: A Scalable Tree Boosting System", "year": "2016" }, { "authors": "G Biau", "journal": "Journal of Machine Learning Research: JMLR", "ref_id": "b22", "title": "Analysis of a random forests model", "year": "2012" }, { "authors": "Cui Lv; Di-Rong Chen", "journal": "CSAE", "ref_id": "b23", "title": "Interpretable Functional Logistic Regression", "year": "2018" }, { "authors": "Charles & Ling; Jin & Huang; Harry Zhang", "journal": "", "ref_id": "b24", "title": "AUC: A Better Measure than Accuracy in Comparing Learning Algorithms", "year": "2003" }, { "authors": "S A Hicks; I Strümke; V Thambawita; M Hammou; M A Riegler; P Halvorsen; S Parasa", "journal": "Scientific Reports", "ref_id": "b25", "title": "On evaluation metrics for medical applications of artificial intelligence", "year": "2022" }, { "authors": "V Bromundt; M Köster; A Georgiev-Kill; K Opwis; A Wirz-Justice; G Stoppe; C Cajochen", "journal": "The British Journal of Psychiatry: The Journal of Mental Science", "ref_id": "b26", "title": "Sleep-wake cycles and cognitive functioning in schizophrenia", "year": "2011" }, { "authors": "P Afonso; M L Figueira; T Paiva", "journal": "World J Biol Psychiatry", "ref_id": "b27", "title": "Sleep-wake patterns in schizophrenia patients compared to healthy controls", "year": "2013-01-15" }, { "authors": "M Pompili; G Ducci; A Galluzzo; G Rosso; C Palumbo; D De Berardis", "journal": "International Journal of Environmental Research and Public Health", "ref_id": "b28", "title": "The management of psychomotor agitation associated with schizophrenia or bipolar disorder: A brief review", "year": "2021" }, { "authors": "K Cullen; A Guimaraes; J Wozniak; A Anjum; S Schulz; T White", "journal": "Clinical Schizophrenia & Related Psychoses", "ref_id": "b29", "title": "Trajectories of social withdrawal and cognitive decline in the schizophrenia prodrome", "year": "2011" } ]
[]
2023-11-22
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b10", "b11", "b12", "b13", "b15", "b16", "b20", "b16", "b17", "b21", "b18", "b19", "b20", "b16", "b20", "b16", "b18", "b16", "b20" ], "table_ref": [], "text": "The computer-assisted surgery can improve the quality of interventional healthcare, thereby facilitating patient safety [1]- [3]. In particular, surgical phase recognition [4] is significant for developing systems to monitor surgical procedures [5], schedule surgeons [6], promote surgical team coordination [7], and educate junior surgeons [8]. Compared with offline analysis of surgical videos, online recognition can support decision-making during surgery without using future frames, which is more practical in surgical applications.\nOnline phase recognition of surgical videos is challenging, and has received great research attention and progress [9]- [11]. Earlier works [12] formulated this task as the frameby-frame classification, and used auxiliary annotations of surgical tools for multi-task learning [13]. Meanwhile, some works [14]- [16] utilized 3D convolutions to capture temporal knowledge of surgical videos. To overcome the huge resource consumption of 3D convolution, mainstream methods [17]- [21] first used 2D convolutional neural networks (CNNs) to extract the feature vector of each surgical video frame, and then predicted the surgical phase with the inter-frame temporal relationship aggregated by the long short-term memory (LSTM) [17], temporal convolutions [18], [22], or transformers [19]. On this basis, recent works [20], [21] further improved this multi-stage paradigm of phase recognition by leveraging longrange temporal relation among frame-wise feature vectors.\nHowever, existing works [17]- [21] on surgical phase recognition suffer from two major limitations, including the insufficient visual information of frame-wise feature vectors, and the inadequate supervision knowledge provided by surgical phase labels. First, most surgical workflow studies [17]- [19] first extracted frame-wise feature vectors with 2D networks, and then aggregated these feature vectors for surgical phase prediction. Note that the spatial and temporal information of surgical videos is discarded when 2D networks process frames into feature vectors, thus hindering the subsequent inter-frame modeling. To overcome this bottleneck, we aim to efficiently formulate the surgical actions during feature extraction and provide visual features with spatial and temporal information for sequence modeling and phase prediction. Second, existing works [17]- [21] formulated the phase prediction as a classification task of the current frame, and the supervision information provided by the ordinary loss with one-hot phase labels is inadequate, which makes the training susceptible to over-fitting. To guarantee that networks fully learn surgical knowledge as possible, it is beneficial to conduct reasonable regularization in training. Inspired by this idea, we introduce an auxiliary classifier with a smaller capacity to regularize the phase prediction of the input video sequence.\nTo address these two problems in surgical phase recognition, we propose a Surgical Temporal Action-aware Network with sequence Regularization, named STAR-Net, from the " }, { "figure_ref": [], "heading": "II. METHODOLOGY A. Overview", "publication_ref": [ "b19" ], "table_ref": [], "text": "As illustrated in Fig. 
1 (a), our STAR-Net predicts the phase of each frame in surgical videos to achieve online phase recognition. Following previous studies [20], our STAR-Net classifies the current frame $x_n$ as one of $C$ surgical phases by taking the current frame and $T-1$ preceding frames as sequence input $\{x_{n-t}\}_{t=0}^{T-1}$. By progressively shifting the sequence input over time, the STAR-Net can predict the surgical phase of each frame in the entire video.\nSpecifically, the STAR-Net first utilizes a 2D CNN with the MS-STA module as the backbone to extract visual features with spatial and temporal information of surgical actions. Then, a transformer with spatial and temporal attention blocks efficiently aggregates visual features by exploiting global relationships in spatial and temporal dimensions sequentially. Finally, we introduce the DSR with an auxiliary classifier to mutually regularize sequence predictions produced by the task classifier, thereby facilitating the training of the STAR-Net." }, { "figure_ref": [], "heading": "B. Multi-Scale Surgical Temporal Action for Visual Features", "publication_ref": [ "b17", "b18", "b22" ], "table_ref": [], "text": "Existing studies [18], [19] extracted frame-wise visual information into feature vectors, which lost the spatial and temporal information of surgical videos. As a result, the surgical actions in surgical videos are not well represented, thereby leading to inaccurate modeling of the inter-frame relation. To address this problem, we propose the MS-STA module to efficiently model multi-scale surgical temporal actions during visual extraction of the 2D backbone, which provides visual features with spatial and temporal knowledge for STAR-Net.\nAs shown in Fig. 1 (b), the MS-STA integrates visual features $f \in \mathbb{R}^{T \times H \times W \times D}$ of video sequences with multi-scale temporal information of surgical actions to facilitate surgical phase recognition, where $T$ is the length of the input sequence, and $H$, $W$ and $D$ are the height, width and channel dimensions of the visual features. In particular, we devise the Temporal Difference (TDiff) operation to capture surgical actions between two adjacent frames, which can be applied progressively on top of previous operations to capture longer-range surgical actions. In Fig. 1 (c), the input visual features $f$ of the TDiff operation are first shifted along the temporal dimension by one frame to obtain the delayed features $D(f, 1)$, where the first frame is zero-padded and the last frame is truncated. Then, we subtract the delayed features $D(f, 1)$ from the input visual features $f$ element-wise to calculate the surgical action features of each frame relative to the previous adjacent frame, as follows:\n$a_1 = \mathcal{M}(f - D(f, 1))$, (1)\nwhere the action mask $\mathcal{M}(\cdot)$ sets the first-frame subtraction to 0. Note that the TDiff operation efficiently captures surgical action features for each frame with only one shift operation and one element-wise subtraction. In this way, we obtain the surgical action features $a_1$ and the delayed features $D(f, 1)$ as the output of the TDiff operation.\nWith the delayed features, the MS-STA can further perform the TDiff operation to progressively generate action features with a longer temporal range, e.g., $D(f, 2) = D(D(f, 1), 1)$.
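To make the TDiff operation concrete, a minimal PyTorch sketch is given below. It assumes the visual features are laid out as a (B, T, D, H, W) tensor and that the shift-and-mask behaviour is exactly as described above; the function name and the way longer-range differences are chained from the delayed features are illustrative assumptions rather than the released implementation.

```python
import torch

def temporal_difference(f: torch.Tensor):
    """TDiff sketch for features f of shape (B, T, D, H, W).

    Returns a_1 = M(f - D(f, 1)) and the delayed features D(f, 1):
    the sequence is shifted by one frame (zero-padding the first frame,
    truncating the last), subtracted element-wise from f, and the mask M
    zeroes the subtraction at the first frame, which has no predecessor.
    """
    delayed = torch.zeros_like(f)
    delayed[:, 1:] = f[:, :-1]              # D(f, 1)
    mask = torch.ones_like(f)
    mask[:, 0] = 0.0                        # action mask M(.)
    action = mask * (f - delayed)           # a_1
    return action, delayed

# Usage: two clips of T=20 frames with D=64 channels at 7x7 resolution.
f = torch.randn(2, 20, 64, 7, 7)
a1, d1 = temporal_difference(f)
a2, d2 = temporal_difference(d1)            # chains TDiff on D(f, 1), so d2 = D(f, 2)
```

Stacking τ such calls and fusing the resulting action features with a single 3D convolution yields the multi-scale aggregation described next.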
By conducting multiple TDiff operations sequentially in Fig. 1 (b), we concatenate these surgical action features $\{a_k\}_{k=1}^{\tau}$ with multiple temporal scales, where $[a_1, a_2, \cdots, a_\tau] \in \mathbb{R}^{T \times \tau \times H \times W \times D}$ and $\tau$ denotes the number of temporal scales, and then perform a 3D convolution to integrate the multi-scale temporal features of surgical actions, as follows:\n$a_{ms} = W \circledast [a_1, a_2, \cdots, a_\tau]$, (2)\nwhere $a_{ms} \in \mathbb{R}^{T \times H \times W \times D}$, $W$ denotes the parameters of a 3D convolutional layer and $\circledast$ is the convolution operation. In contrast to burdensome 3D convolutional networks, we only insert one 3D convolutional layer into the STAR-Net to integrate the multi-scale temporal features of surgical actions, which perceives surgical actions at the computational cost of 2D networks. Finally, we add the multi-scale surgical action features $a_{ms}$ to the input features $f$ as residual learning, which provides each frame with the knowledge of surgical actions for surgical phase recognition. Different from TSM [23], which shifted partial channels for temporal information at different layers, our MS-STA can efficiently capture multi-scale temporal information of surgical actions at once, while preserving the channel alignment of visual features, thereby providing surgical action features for phase recognition." }, { "figure_ref": [], "heading": "C. Dual-Classifier Sequence Regularization", "publication_ref": [ "b11", "b17", "b19", "b20" ], "table_ref": [], "text": "With the multi-scale surgical action features provided by MS-STA, the STAR-Net can predict the surgical phase with discriminative spatial and temporal features. However, existing works [12], [18], [20], [21] employed an ordinary classification loss, e.g., the cross-entropy loss and its variants, to train the network, which cannot provide sufficient supervision for the training. Since the phase label $y$ is a one-hot vector indicating the correct class, the cross-entropy loss $L_{CE} = -\sum_{c=1}^{C} y_c \log p_c$ merely produces a single non-zero constraint among these $C$ terms to supervise the network training. As a result, the lack of supervision makes the network prone to over-fitting, and thus restricts the performance of surgical phase recognition.\nTo address the lack of supervision, we devise the Dual-classifier Sequence Regularization (DSR) to regularize sequence predictions by introducing a frame-wise auxiliary classifier, as illustrated in Fig. 1 (d). With the tokens of each frame provided by the transformer in STAR-Net, the task classifier generates frame-wise phase predictions of the input video sequence, where the predicted probabilities are denoted as $p_{task}$. Meanwhile, the auxiliary classifier uses the sequence features extracted by the 2D visual backbone, and performs spatial global average pooling to predict the phase probabilities $p_{aux}$ of each frame.\nSince MS-STA provides multi-scale temporal information of surgical actions, the auxiliary classifier can achieve relatively satisfactory predictions for each video frame. Considering that the small number of previous frames in the early sequences $E$ cannot provide sufficient temporal knowledge for the task classifier after the transformer, we adopt the auxiliary classifier with a smaller capacity to regularize the predicted probabilities $p_{task}$ of the task classifier. This provides effective regularization for the training of STAR-Net, thereby avoiding over-fitting. On the other hand, due to the lack of long-range surgical video knowledge, the auxiliary classifier is inferior to the task classifier on the late sequences $L$, and thus we further improve the auxiliary classifier with the task classifier. In turn, this can promote the learning of the task classifier with an improved auxiliary classifier. Therefore, the objective of our DSR is summarized as follows:\n$L_{DSR} = \sum_{i \in E} KL\big(p_{task}^{(i)} \| \bar{p}_{aux}^{(i)}\big) + \sum_{j \in L} KL\big(p_{aux}^{(j)} \| \bar{p}_{task}^{(j)}\big)$, (3)\nwhere $KL$ is the Kullback-Leibler divergence measuring the distance between two probability distributions, and $\bar{p}$ represents stopping the gradients of $p$ by regarding it as a constant. Therefore, the first term in Eq. (3) optimizes $p_{task}$ on the early sequences $E$, while the second term optimizes $p_{aux}$ on the late sequences $L$. In this way, the DSR can facilitate the training of STAR-Net with the sequence regularization between the task classifier and the auxiliary classifier."
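A minimal PyTorch sketch of Eq. (3) is given below. It assumes the task and auxiliary predictions are per-frame probability tensors of shape (T, C) and that the early and late sequences E and L are given as frame-index lists; the helper names and the use of detach() for the stop-gradient are illustrative assumptions, not the released implementation.

```python
import torch

def kl_div(p, q, eps=1e-8):
    """Row-wise KL(p || q) for probability tensors of shape (N, C)."""
    return (p * ((p + eps).log() - (q + eps).log())).sum(dim=-1)

def dsr_loss(p_task, p_aux, early_idx, late_idx):
    """Sketch of Eq. (3): the auxiliary classifier guides the task classifier
    on the early frames E, and the task classifier guides the auxiliary
    classifier on the late frames L; detach() plays the role of the
    stop-gradient (the bar in Eq. (3))."""
    early = kl_div(p_task[early_idx], p_aux[early_idx].detach()).sum()
    late = kl_div(p_aux[late_idx], p_task[late_idx].detach()).sum()
    return early + late

# Usage: T=20 frames, C=8 phases; E covers 20%-60% and L covers 80%-100%
# of the input sequence, matching the implementation details reported later.
p_task = torch.softmax(torch.randn(20, 8), dim=-1)
p_aux = torch.softmax(torch.randn(20, 8), dim=-1)
loss = dsr_loss(p_task, p_aux, list(range(4, 12)), list(range(16, 20)))
```

During training, this regularizer is added to the cross-entropy loss with the trade-off weight λ of Eq. (4) below.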
}, { "figure_ref": [], "heading": "D. Training and Inference", "publication_ref": [ "b19", "b20" ], "table_ref": [], "text": "Following the efficient multi-stage training paradigm in previous works [20], [21], we first train the 2D visual backbone with MS-STA using the cross-entropy loss $L_{CE}$, and generate frame features with spatial and temporal knowledge. Then, we train the transformer with the task and auxiliary classifiers under DSR for surgical phase recognition, as follows:\n$L = L_{CE} + \lambda L_{DSR}$, (4)\nwhere the coefficient $\lambda$ controls the trade-off between $L_{DSR}$ and the cross-entropy loss $L_{CE}$ of phase predictions. In inference, the well-trained STAR-Net sequentially conducts the 2D visual backbone with MS-STA and the transformer with spatial and temporal attention blocks to extract visual features, and performs online frame-wise prediction using the task classifier for the surgical video stream in an end-to-end manner." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "III. EXPERIMENT", "publication_ref": [ "b11", "b11", "b11", "b19", "b24", "b25", "b18", "b11", "b19" ], "table_ref": [], "text": "A. Dataset and Implementation Details 1) Gastrectomy Phase Dataset: To evaluate the online phase recognition of surgical videos, we collect a large-scale laparoscopic gastrectomy dataset consisting of 100 surgical videos from different gastric cancer patients, and its data size is 22.1 times that of the Cholec80 dataset [12]. The surgical videos are recorded at 1,920 × 1,080 resolution and 25 frames per second (fps). The average length of the surgical videos is 2.53 hours. All surgical videos are annotated by two surgeons with expertise in gastric cancer surgery. Each frame of the surgical videos is assigned to one out of eight surgical phases, including the preparation, the greater curvature separation, the distal stomach separation, the lesser curvature separation, the pancreas dissection, the proximal stomach separation, the gastrointestinal (GI) tract reconstruction, and the operation ending. We randomly split the dataset at the patient level, with 70 videos for training and 30 videos for testing.\nTo illustrate the collected gastrectomy phase dataset for surgical phase recognition, we show typical examples of the eight phases of gastrectomy surgery in Fig. 2. It is evident that each of these surgical phases carries distinct and specific clinical significance, and together these phases constitute the entire procedure of gastrectomy. Moreover, the proportion of the eight phases is illustrated in Fig. 3. 
It is worth noting that the inherent imbalance of these eight phases makes it more difficult to accurately achieve the online phase recognition.\n2) Cholec80 Dataset: We further perform the comparison on public Cholec80 dataset [12] of laparoscopic cholecystectomy procedures, which contains 80 surgical videos with a resolution of 854×480 or 1, 920×1, 080 at 25 fps. The surgery procedures are divided into seven surgical phases, including the preparation, the calot triangle dissection, the clipping and cutting, the gallbladder dissection, the gallbladder packaging, the cleaning and coagulation, and the gallbladder retraction. We exactly follow the standard splits [12], [20], i.e., the first 40 videos for training and the rest 40 videos for test.\n3) Implementation Details: We compare STAR-Net with state-of-the-arts using PyTorch [25] on a single NVIDIA A100 GPU. In our STAR-Net, we adopt ResNet-18 [26] as the 2D visual backbone for feature extraction, and implement the temporal attention block with causal mask [19] to achieve online recognition without using future frames. For MS-STA, the temporal scale τ is set as 5, and the sequence length T is 20. The coefficient λ of L DSR is set as 1.0, and E and L are set as the 20% -60% and 80% -100% ranges of input video sequences, respectively. All models are optimized in SGD with the batch size of 32. The learning rate is initialized as 1×10 -3 and halved after every 5 epochs.\n4) Evaluation Metrics: We adopt four commonly-used metrics to comprehensively evaluate the performance of surgical phase recognition, including accuracy (AC), precision (PR), recall (RE) and Jaccard (JA). Following the evaluation protocol in previous works [12], [20], we calculate PR, RE and JA in the phase-wise manner, and report the average and standard deviation. The AC represents the percentage of frames correctly classified into ground truth. To perform fair comparisons, the selected state-of-the-art methods are evaluated with the same criteria as the STAR-Net. Note that all experiments are performed in the online mode, where future information is not accessible when estimating the current frame." }, { "figure_ref": [ "fig_4" ], "heading": "B. Comparison on Gastrectomy Dataset 1) Comparison with state-of-the-arts:", "publication_ref": [ "b11", "b16", "b17", "b19", "b20", "b20", "b11", "b16", "b17", "b19", "b20", "b20", "b11", "b11", "b20" ], "table_ref": [ "tab_0", "tab_0" ], "text": "To verify the effectiveness of our STAR-Net, we perform a comprehensive comparison with the state-of-the-art methods [12], [17], [18], [20], [21]. As illustrated in Table I, our STAR-Net achieves the best performance among these methods, with the AC of 89.2% and JA of 73.5%. Noticeably, our STAR-Net outperforms the transformer-based method, Trans-SVNet [21], by a large margin, e.g., 1.5% in AC and 1.6% in JA. In addition, we conduct the t-test of AC among paired test videos, which confirms a significant advantage of our STAR-Net over [12], [17], [18], [20], [21] with P-values < 1 × 10 -5 . These results demonstrate the performance advantage of our STAR-Net over state-of-the-arts on gastrectomy phase recognition.\n2) Ablation Study: As elaborated in Table I, we perform the detailed ablation study to validate the effectiveness, by implementing three ablative baselines of STAR-Net without MS-STA or DSR. Compared with the baseline without both MS-STA and DSR, the MS-STA can bring an improvement of 2.8% in AC and 3.4% in JA, which reveals the impact of surgical actions on the task. 
Meanwhile, the DSR can also increase the baseline with 1.7% in AC, which validates the sequence regularization of the auxiliary classifier benefits the training of STAR-Net. The ablation experiments indicate that the proposed MS-STA and DSR are crucial to improving the performance of STAR-Net on surgical phase recognition.\n3) Qualitative Results of Phase Recognition: We further qualitatively compare our STAR-Net with Trans-SVNet [21] and PhaseNet [12] by presenting the color-coded ribbon results on gastrectomy dataset. As shown in Fig. 4, our STAR-Net outperforms both PhaseNet [12] and Trans-SVNet [21], and is the closest to ground truth. In this way, these qualitative results confirm the superiority of our STAR-Net in surgical video analysis." }, { "figure_ref": [], "heading": "C. Comparison on Cholec80 Dataset", "publication_ref": [ "b12", "b23", "b12", "b16", "b17", "b19", "b20", "b23", "b11" ], "table_ref": [ "tab_1" ], "text": "To further evaluate the performance of phase recognition, we perform the comparison with more state-of-the-arts [13], [24] on the public Cholec80 benchmark in terms of performance and efficiency. In Table II, our STAR-Net achieves the overwhelming performance with the best AC of 91.2%, PR of 91.6% and JA of 79.5%. Furthermore, our STAR-Net demonstrates superior efficiency in comparison to existing algorithms [13], [17], [18], [20], [21], [24] with the minimal parameters and computation except for the frame-wise 2D CNN [12]. These competitive experimental results confirm the superiority of our STAR-Net on surgical phase recognition." }, { "figure_ref": [ "fig_5", "fig_5" ], "heading": "D. Qualitative Analysis of Surgical Temporal Action", "publication_ref": [], "table_ref": [], "text": "To analyze the surgical temporal action, we further visualize the multi-scale action features a ms of MS-STA, as shown in Fig. 5. Compared with the current frame, the MS-STA can accurately capture the surgical actions from several previous frames, where multi-scale action features a ms highlight the instrument motions on gastrectomy and Cholec80 datasets. For example, the motion of the ultrasound knife, grasper and hook is perceived by the multi-scale action features of MS-STA in Fig. 5. In this way, the MS-STA provides visual features with the spatial and temporal information of surgical actions for the STAR-Net, thereby facilitating the phase recognition tasks." }, { "figure_ref": [], "heading": "IV. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this work, we propose the STAR-Net to promote online surgical phase recognition efficiently. Specifically, we first devise the MS-STA module to integrate the visual features with the multi-scale temporal knowledge of surgical actions, which enables the STAR-Net to process the surgical video sequence with more abundant surgical information. Moreover, we introduce the DSR to regularize the training of STAR-Net over the frame prediction of video sequences using an auxiliary classifier. Extensive experiments on gastrectomy and cholecystectomy surgical datasets confirm the remarkable advantages of our STAR-Net over state-of-the-art works in terms of performance and efficiency, as well as the perception of surgical temporal actions." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "supported by National Key R&D Program of China (No. 2022ZD0160601), National Natural Science Foundation of China (No. 62276260, Technology Commission (No. D17100006517003), and InnoHK program. Z. Chen and Y. 
Zhai contribute equally to this work." } ]
To assist surgeons in the operating theatre, surgical phase recognition is critical for developing computer-assisted surgical systems, which requires a comprehensive understanding of surgical videos. Although existing studies have made great progress, there are still two significant limitations worth addressing. First, due to the compromise on resource consumption, frame-wise visual features are extracted by 2D networks and disregard the spatial and temporal knowledge of surgical actions, which hinders subsequent inter-frame modeling for phase prediction. Second, these works simply utilize an ordinary classification loss with one-hot phase labels to optimize the phase predictions, and cannot fully explore surgical videos under such inadequate supervision. To overcome these two limitations, we propose a Surgical Temporal Action-aware Network with sequence Regularization, named STAR-Net, to recognize surgical phases more accurately from input videos. Specifically, we propose an efficient multi-scale surgical temporal action (MS-STA) module, which integrates visual features with spatial and temporal knowledge of surgical actions at the computational cost of 2D networks. Moreover, we devise the dual-classifier sequence regularization (DSR) to facilitate the training of STAR-Net by the sequence guidance of an auxiliary classifier with a smaller capacity. Our STAR-Net with MS-STA and DSR can exploit visual features of surgical actions with effective regularization, thereby leading to superior performance in surgical phase recognition. Extensive experiments on a large-scale gastrectomy surgery dataset and the public Cholec80 benchmark prove that our STAR-Net significantly outperforms state-of-the-art methods for surgical phase recognition.
Surgical Temporal Action-aware Network with Sequence Regularization for Phase Recognition
[ { "figure_caption": "Fig. 1. (a) The overview of the STAR-Net, (b) multi-scale surgical temporal action (MS-STA), (c) temporal difference (TDiff) operation, and (d) dual-classifier sequence regularization (DSR). The MS-STA module is inserted into the 2D visual backbone, which progressively conducts TDiff operations to efficiently capture multi-scale surgical action features. The DSR introduces the mutual regularization between the auxiliary classifier and the task classifier at the early and late sequence respectively.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Typical examples of eight phases in gastrectomy phase dataset. Each surgical phase carries a distinct and specific clinical significance and serves as the necessary procedure of the gastrectomy.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. The proportion of eight phases in gastrectomy phase dataset. The inherent imbalance of surgical phases makes online recognition challenging.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Color-coded ribbon comparison of PhaseNet, Trans-SVNet, STAR-Net and ground truth.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Visualization of surgical action features ams of MS-STA in (a) gastrectomy and (b) Cholec80 datasets. The motion of the ultrasound knife, grasper and hook is captured in MS-STA, which provides spatial and temporal information for phase recognition.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "WITH STATE-OF-THE-ARTS ON GASTRECTOMY PHASE DATASET. BEST AND SECOND BEST RESULTS ARE highlighted AND UNDERLINED.", "figure_data": "MethodAC (%)PR (%)RE (%)JA (%)P-valuePhaseNet [12]72.9 ±7.266.5 ±17.670.4 ±5.152.2 ±13.62.8×10 -17SV-RCNet [17]84.3 ±7.679.8 ±9.478.9 ±7.966.1 ±10.61.4×10 -12TeCNO [18]85.4 ±7.180.9 ±9.580.3 ±7.768.0 ±10.91.3×10 -9TMRNet [20]86.8 ±6.285.1 ±7.181.9 ±8.371.8 ±9.32.7×10 -7Trans-SVNet [21]87.7 ±6.085.1 ±6.782.0 ±8.471.9 ±9.42.5×10 -6STAR-Net w/o MS-STA, DSR85.1 ±6.380.5 ±11.681.5 ±5.068.6 ±10.81.6×10 -8STAR-Net w/o MS-STA86.8 ±6.281.8 ±10.382.7 ±6.169.6 ±10.22.6×10 -8STAR-Net w/o DSR87.9 ±6.985.5 ±7.182.3 ±8.572.0 ±9.13.0×10 -6STAR-Net89.2 ±6.186.6 ±6.483.7 ±8.173.5 ±9.0", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "WITH STATE-OF-THE-ARTS ON CHOLEC80 DATASET. BEST AND SECOND BEST RESULTS ARE highlighted AND UNDERLINED.", "figure_data": "MethodAC (%)PR (%)RE (%)JA (%)Param (10 7 ) FLOPs (10 10 )PhaseNet [12] SV-RCNet [17]78.8 ±4.7 85.3 ±7.371.3 ±15.6 80.7 ±7.076.6 ±16.6 83.5 ±7.5--4.23 2.880.07 4.14Current FrameUATD [24]88.6 ±6.786.1 ±6.788.0 ±10.173.7 ±10.22.805.72TeCNO [18]88.6 ±7.886.5 ±7.087.6 ±6.775.1 ±6.92.368.29MTRCNet-CL [13]89.2 ±7.686.9 ±4.388.0 ±6.9-2.984.14TMRNet [20]89.2 ±9.489.7 ±3.589.5 ±4.878.9 ±5.86.3024.86Trans-SVNet [21]90.3 ±7.190.7 ±5.088.8 ±7.479.3 ±6.62.3712.47STAR-Net91.2 ±5.391.6 ±3.489.2 ±9.479.5 ±8.11.683.92Previous FramesCurrent Frame𝒂 !\"ame178", "figure_id": "tab_1", "figure_label": "II", "figure_type": "table" } ]
Zhen Chen; Yuhao Zhai; Jun Zhang; Jinqiao Wang
[ { "authors": "L Maier-Hein; M Eisenmann; D Sarikaya; K März; T Collins; A Malpani; J Fallert; H Feussner; S Giannarou; P Mascagni", "journal": "Med. Image Anal", "ref_id": "b0", "title": "Surgical data science-from concepts toward clinical translation", "year": "2022" }, { "authors": "Z Chen; Q Guo; L K Yeung; D T Chan; Z Lei; H Liu; J Wang", "journal": "Springer", "ref_id": "b1", "title": "Surgical video captioning with mutual-modal concept alignment", "year": "2023" }, { "authors": "Y Zhai; Z Chen; Z Zheng; X Wang; X Yan; X Liu; J Yin; J Wang; J Zhang", "journal": "Int. J. Comput. Assist. Radiol. Surg", "ref_id": "b2", "title": "Artificial intelligence for automatic surgical phase recognition of laparoscopic gastrectomy in gastric cancer", "year": "2023" }, { "authors": "C R Garrow; K.-F Kowalewski; L Li; M Wagner; M W Schmidt; S Engelhardt; D A Hashimoto; H G Kenngott; S Bodenstedt; S Speidel", "journal": "Annals of surgery", "ref_id": "b3", "title": "Machine learning for surgical phase recognition: a systematic review", "year": "2021" }, { "authors": "S S Panesar; M Kliot; R Parrish; J Fernandez-Miranda; Y Cagle; G W Britz", "journal": "Neurosurgery", "ref_id": "b4", "title": "Promises and perils of artificial intelligence in neurosurgery", "year": "2020" }, { "authors": "Z A Abdalkareem; A Amir; M A Al-Betar; P Ekhan; A I Hammouri", "journal": "Health and Technology", "ref_id": "b5", "title": "Healthcare scheduling in optimization context: a review", "year": "2021" }, { "authors": "L R Kennedy-Metz; P Mascagni; A Torralba; R D Dias; P Perona; J A Shah; N Padoy; M A Zenati", "journal": "IEEE Trans. Med. Robot. Bionics", "ref_id": "b6", "title": "Computer vision in the operating room: Opportunities and caveats", "year": "2020" }, { "authors": "A Kirubarajan; D Young; S Khan; N Crasto; M Sobel; D Sussman", "journal": "Journal of Surgical Education", "ref_id": "b7", "title": "Artificial intelligence and surgical education: A systematic scoping review of interventions", "year": "2022" }, { "authors": "F Yi; T Jiang", "journal": "Springer", "ref_id": "b8", "title": "Hard frame detection and online mapping for surgical phase recognition", "year": "2019" }, { "authors": "Y Zhang; S Bano; A.-S Page; J Deprest; D Stoyanov; F Vasconcelos", "journal": "Springer", "ref_id": "b9", "title": "Retrieval of surgical phase transitions using reinforcement learning", "year": "2022" }, { "authors": "X Ding; Z Liu; X Li", "journal": "Springer Nature", "ref_id": "b10", "title": "Free lunch for surgical video understanding by distilling self-supervisions", "year": "2022" }, { "authors": "A P Twinanda; S Shehata; D Mutter; J Marescaux; M De Mathelin; N Padoy", "journal": "IEEE Trans. Med. Imaging", "ref_id": "b11", "title": "Endonet: a deep architecture for recognition tasks on laparoscopic videos", "year": "2016" }, { "authors": "Y Jin; H Li; Q Dou; H Chen; J Qin; C.-W Fu; P.-A Heng", "journal": "Med. 
Image Anal", "ref_id": "b12", "title": "Multitask recurrent convolutional network with correlation loss for surgical video analysis", "year": "2020" }, { "authors": "I Funke; S Bodenstedt; F Oehme; F Bechtolsheim; J Weitz; S Speidel", "journal": "Springer", "ref_id": "b13", "title": "Using 3d convolutional neural networks to learn spatiotemporal features for automatic surgical gesture recognition in video", "year": "2019" }, { "authors": "B Zhang; A Ghanem; A Simes; H Choi; A Yoo; A Min", "journal": "PMLR", "ref_id": "b14", "title": "Swnet: Surgical workflow recognition with deep convolutional network", "year": "2021" }, { "authors": "B Zhang; A Ghanem; A Simes; H Choi; A Yoo", "journal": "Int. J. Comput. Assist. Radiol. Surg", "ref_id": "b15", "title": "Surgical workflow recognition with 3dcnn for sleeve gastrectomy", "year": "2021" }, { "authors": "Y Jin; Q Dou; H Chen; L Yu; J Qin; C.-W Fu; P.-A Heng", "journal": "IEEE Trans. Med. Imaging", "ref_id": "b16", "title": "Sv-rcnet: workflow recognition from surgical videos using recurrent convolutional network", "year": "2017" }, { "authors": "T Czempiel; M Paschali; M Keicher; W Simson; H Feussner; S T Kim; N Navab", "journal": "Springer", "ref_id": "b17", "title": "Tecno: Surgical phase recognition with multistage temporal convolutional networks", "year": "2020" }, { "authors": "T Czempiel; M Paschali; D Ostler; S T Kim; B Busam; N Navab", "journal": "Springer", "ref_id": "b18", "title": "Opera: Attention-regularized transformers for surgical phase recognition", "year": "2021" }, { "authors": "Y Jin; Y Long; C Chen; Z Zhao; Q Dou; P.-A Heng", "journal": "IEEE Trans. Med. Imaging", "ref_id": "b19", "title": "Temporal memory relation network for workflow recognition from surgical video", "year": "2021" }, { "authors": "X Gao; Y Jin; Y Long; Q Dou; P.-A Heng", "journal": "Springer", "ref_id": "b20", "title": "Trans-svnet: accurate phase recognition from surgical videos via hybrid embedding aggregation transformer", "year": "2021" }, { "authors": "Y A Farha; J Gall", "journal": "", "ref_id": "b21", "title": "Ms-tcn: Multi-stage temporal convolutional network for action segmentation", "year": "2019" }, { "authors": "J Lin; C Gan; S Han", "journal": "", "ref_id": "b22", "title": "Tsm: Temporal shift module for efficient video understanding", "year": "2019" }, { "authors": "X Ding; X Yan; Z Wang; W Zhao; J Zhuang; X Xu; X Li", "journal": "IEEE Trans. Med. Imaging", "ref_id": "b23", "title": "Less is more: Surgical phase recognition from timestamp supervision", "year": "2022" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga", "journal": "", "ref_id": "b24", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b25", "title": "Deep residual learning for image recognition", "year": "2016" } ]
[ { "formula_coordinates": [ 3, 125.82, 237.98, 174.21, 9.68 ], "formula_id": "formula_0", "formula_text": "a 1 = M(f -D(f , 1)),(1)" }, { "formula_coordinates": [ 3, 112.59, 454.34, 187.43, 9.68 ], "formula_id": "formula_1", "formula_text": "a ms = W ⊛ [a 1 , a 2 , • • • , a τ ],(2)" }, { "formula_coordinates": [ 3, 312.97, 656.51, 250.06, 26.22 ], "formula_id": "formula_2", "formula_text": "L DSR = i∈E KL(p (i) task || p(i) aux ) + j∈L KL(p (j) aux || p(j) task ),(3)" }, { "formula_coordinates": [ 4, 95.16, 169.83, 355.49, 110.72 ], "formula_id": "formula_3", "formula_text": "- (a) (b) (c) (d) (e) (f) (g) (h)" }, { "formula_coordinates": [ 4, 133.19, 519.81, 166.83, 9.65 ], "formula_id": "formula_4", "formula_text": "L = L CE + λL DSR ,(4)" } ]
10.2307/3857326
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2" ], "table_ref": [], "text": "The notion of trust itself is a decision [1]. People often say, \"Trust yourself\", which is deciding what to decide. Similarly, the term trustworthy carries the same meaning. We often use these two terms interchangeably, but sometimes, they are quite confusing. Lexically, trust means belief in reliability. Trust is the result of something being perceived as trustworthy. These two terms form a complementary pair. Trustworthy Artificial Intelligence (AI) implies placing our belief in AI systems. The question is how to place our beliefs.\nIn his landmark book on the Peloponnesian War, Thucydides [2] argued that the vital difference between Sparta (winner) and Athens (loser) is the leadership quality that is 1.) Ability to process a massive amount of information, 2.) Quickly decide what to decide, and 3.) Carry action with a resolution. If we use AI/ML systems for strategic decision-making, two characteristics of leadership quality (1 & 2) are precisely the issue of Trustworthy AI. However, determining how much trust to place in a result generated by AI/ML can be daunting for many applications, such as financial investments, retail marketing, corporate planning, business strategies, public policy, and even health research. The challenge is how to frame TAI.\nPerhaps Kissinger et al. [3] provided some clues for the solutions. They argued, \"The AI, then, did not reach conclusions by reasoning as humans reason; it reached conclusions by applying the model it developed.\" In other words, the essence of AI/ML is to reverse the logic of human reasoning tradition. Instead of telling a machine what reasoning rules are, we tell the machine what we like. We can refer to it as a learning process. It consists of three essential components: 1.) representation space (models for values), 2.) loss function on data (data for evaluation), and 3.) optimizer (algorithms for selection). Determining how much trust for the AI/ML result is actually placing our trust in these components (See Fig. 1)." }, { "figure_ref": [], "heading": "Fig. 1. Trustworthy AI Framework from a Strategic Decision-Making Perspective", "publication_ref": [ "b3", "b4", "b6", "b5" ], "table_ref": [], "text": "If we move to the following components' level, twelve TAI properties underpin these components. Wing [4] proposed that models (M) and a system's environment (E) or data should be satisfied (⊨) with a list of properties (P). Some researchers [5] have proposed actionable properties for TAI, such as moral operation, representation model, responsibility, and awareness of their morals. Others [7] classified TAI properties into three categories: technical, ethical and other requirements. We propose twelve TAI properties: justice, explainable/interpretability, transparency, fairness, availability, usability, security/privacy, accountability, robustness, reproducibility, reliability, and accuracy. They are organized into three groups (value, evaluation, and selection) loosely coupled with three learning components: representation space, loss function on data, and optimizer (See Fig. 1).\nThis framework demonstrates the relationship among TAI properties (P), learning components (M, D, and A), and TAI context or environments (E). For example, the representation space mainly aligns with the class of ethical value properties: justice, explainable/interpretability, transparency, and fairness. 
It is a bounded space within which machine learning programs search for rules. We want the learning result to satisfy our value systems. Therefore, the representation space has to meet the value properties, often ethical values or beliefs, even faith. When we design a loss function on data, a dataset must satisfy the TAI properties of accuracy, usability, security/privacy, accountability and data governance [6]. Likewise, the optimizer component should satisfy the properties of robustness, reproducibility, reliability and accountability. Overall, the decision context (E), data (D), selection algorithms (A), and representation space (M) should satisfy the TAI properties, as represented in Equation 1." }, { "figure_ref": [], "heading": "𝐸, 𝐷, 𝐴, 𝑀 ⊨ 𝑃", "publication_ref": [], "table_ref": [], "text": "Generally, we may include some TAI properties and exclude others in a particular decision-making context. The decision context (E) and data (D) decide which TAI property (P) should be included and which one should be excluded in the detailed AI/ML process." }, { "figure_ref": [], "heading": "Research Question", "publication_ref": [ "b11" ], "table_ref": [], "text": "Suppose we want to make a strategic investment decision regarding credit default swaps (CDS) for the technology sector in the financial derivative market (the TAI context or environment E). The research question is, \"What kind of model (M), dataset (D), and algorithms (A) will satisfy the listed TAI properties (P)?\" Simply put, \"How can we rely on the AI/ML result for an investment decision?\"\nWhy does this matter? If we remember the 2008 financial crisis, we know that CDS were one of the primary sources of the 2008 crisis [12]. To a certain extent, the consequences of the 2008 crisis still impact global economics today. Moreover, we intend to generalize the TAI framework for a broad context of strategic decision-making applications." }, { "figure_ref": [], "heading": "Research Method", "publication_ref": [], "table_ref": [], "text": "In order to solve this problem, we adopted quantitative and qualitative research methods for this study. We use techniques such as variable importance (VI), partial dependence plots (PDP), individual conditional expectation (ICE), local interpretable model-agnostic explanations (LIME), and Shapley additive explanations (SHAP) estimation for some important features to interpret the details of the model for its transparency. By leveraging these research methods, we made the following contributions." }, { "figure_ref": [], "heading": "Main Contributions", "publication_ref": [], "table_ref": [], "text": "We articulate a novel framework to handle many TAI issues, especially through explanation, interpretation, transparency, robustness, reproducibility, and accuracy. It is built upon the machine-learning components: 1.) representation space, 2.) loss function on data, and 3.) optimizer. We identified twelve TAI properties and grouped them into three categories. Each category is loosely coupled with one ML component.\nThe framework allows us to address various TAI issues systematically.\nWe use GBM, Xgbm, and transformer models in the context of CDS prediction to demonstrate the new way of approaching the TAI issue. This study mainly focuses on the TAI's explanation, interpretation, and transparency properties via five techniques: VI, PDP, ICE plots, LIME, and SHAP.\nWe adopted Xgboost and transformer models to build a predictive model for this decision context. Our experimental results indicate that Xgbm is much more compelling in satisfying some essential properties of transparency, interpretability, explainability, and reproducibility."
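As a concrete illustration of the grouping described above, the following is a minimal Python sketch that records which TAI properties are attached to each learning component and looks up which of a decision context's required properties each component is expected to cover. The dictionary layout and the helper name are illustrative assumptions; they are not part of the framework's formal definition.

```python
# Sketch of the loose coupling in Fig. 1 / Equation 1 (E, D, A, M |= P):
# each learning component carries a group of TAI properties.
TAI_PROPERTIES = {
    "representation_space_M": ["justice", "explainability/interpretability",
                               "transparency", "fairness"],
    "loss_function_on_data_D": ["accuracy", "usability", "security/privacy",
                                "accountability", "data governance"],
    "optimizer_A": ["robustness", "reproducibility", "reliability",
                    "accountability"],
}

def coverage(required):
    """Map each component to the required properties it is expected to satisfy."""
    required = set(required)
    return {component: sorted(required.intersection(props))
            for component, props in TAI_PROPERTIES.items()}

# Usage: a CDS investment context (E) that emphasizes transparency,
# interpretability, robustness, reproducibility, and accuracy.
print(coverage(["transparency", "explainability/interpretability",
                "robustness", "reproducibility", "accuracy"]))
```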
}, { "figure_ref": [], "heading": "Scope of the Research", "publication_ref": [], "table_ref": [], "text": "The rest of the paper is organized as follows: Section 2 is a literature survey that starts from types of representation space and loss function on data to optimizers regarding TAI properties. Section 3 introduces a bird's eye view of the dataset and experimental models. Section 4 is the experimental setup and results. Section 5 is the result analysis and discussion. Section 6 is the conclusion and future research direction." }, { "figure_ref": [], "heading": "Literature Review", "publication_ref": [], "table_ref": [], "text": "The following literature review laid out a brief survey of five key related sub-topics regarding trustworthy AI: types of representation space, loss function on data, optimizer, CDS and strategic decision-making, and TAI techniques, especially on explainable AI (XAI). Based on our previous research experiences [36], we primarily focus on the gradient-boosting machine (GBM) or extreme gradient-boost machine (Xgbm) and transform techniques." }, { "figure_ref": [], "heading": "Types of Representation Space", "publication_ref": [ "b7", "b8" ], "table_ref": [], "text": "The decision objectives determine how we build representation space (model) that allows a machine to search for rules effectively. Page [8] articulated seven types of models: reason, explain, design, communicate, act, predict and explore (or REDCAPE).\nKuhn and Silge [9] suggested only three types of models: descriptive, inferential, and predictive models. If we dive into details, these two taxonomies of models are similar. We can synchronize Page's classification and Kuhn's taxonomy, which descriptive models include \"explain\" and \"communicate\"; the inference model is associated with \"reason\" and \"explore\" while the predictive model means \"design\", \"act\", and \"predict.\" (Refer to Fig. 2)" }, { "figure_ref": [], "heading": "Fig. 2. Types of Representation Space", "publication_ref": [ "b9" ], "table_ref": [], "text": "Descriptive models aim to illustrate the characteristics of some data. They usually offer a trend or some clusters in the data. A typical example is customer segmentation [10].\nIf the dataset contains customer information about age, income, purchase history, gender, and ethnic group, the created model should reflect ethical values. Inference models often produce research questions or null hypotheses for further investigation. Exploratory data analysis (EDA) and feature selection are typical examples of an inference model for machine learning. The requirements of inference models are usable and available data that people can trust. The inference model aims to create better predictors. Predictor models often ask \"what\" rather than \"how.\" They also provide a degree of uncertainty. Prediction is often closely related to explanation. The model choice depends on the problem context, the given data, and the performance requirements. The question of selecting a model leads to defining a loss function on data." }, { "figure_ref": [], "heading": "Loss Function on Data", "publication_ref": [], "table_ref": [], "text": "The essence of the loss function on data is to quantify the discrepancy between the predicted output of the AI/ML model and the actual target. The goal of a loss function is to score and evaluate potential rules that a machine can learn from representation space. 
It defines an objective that can be minimized with respect to the mistakes made on a collection of data.\nWe can also score a loss function indirectly to evaluate a model's performance. This indirect approach is often beneficial for many reasons: optimization focus, non-intuitive scale, imbalanced data, and complex metrics. In order to address these issues, we can evaluate the model with task-appropriate metrics rather than the raw loss value. The bottom line is that writing a score is much simpler than writing rules explicitly. However, if the mathematical model of a loss function on data becomes too complex, it can contribute to transparency issues. Many transparency issues typically arise from data-related challenges, such as data bias, data imbalance, labelling errors, noisy data, missing data, feature selection, data privacy and security, data distribution shift, and dataset size. Furthermore, when we intend to work out the loss function on data to satisfy TAI properties, many challenges lie in selecting the right algorithm to optimise loss functions." }, { "figure_ref": [], "heading": "Optimizer", "publication_ref": [ "b10" ], "table_ref": [], "text": "Domingos [11] proposed five schools of thought on machine learning, and each school of thought mainly corresponds to one type of central problem. These five schools of ML provide a unifying approach for a broader understanding of the practical implications of algorithms, especially the selection of TAI properties in terms of robustness, reliability, reproducibility, and accuracy. Domingos also touched on the strengths and weaknesses of each school of thought." }, { "figure_ref": [ "fig_1" ], "heading": "Gradient Boost Machine", "publication_ref": [ "b12", "b13", "b14", "b16", "b17" ], "table_ref": [], "text": "Historically, the decision tree method can be traced back to the Classification and Regression Tree (CART) [13] in the 1980s. The fundamental idea of a decision tree involves asking questions of the given dataset and arriving at a precise prediction. In contrast to other nonparametric algorithms, the decision tree method offers notable transparency and explanatory power for the prediction model [14]. Since then, it has evolved into bagging or bootstrap aggregating, random forests, and boosting iterations [15][16], including at least ten different boosting iteration techniques.\nWe can roughly divide the evolution history into four development phases: 1.) CART.\n2.) Bagging bootstrap aggregation. 3.) Random Forests. 4.) Boosting iterations (See Fig. 3), although no clear demarcation line exists. We can consider the latter three phases as ensemble learning. The essence of ensemble learning is the \"wisdom of crowds\" [17] or meta-learning. Researchers have developed many boosting techniques. We can classify them into three classes: 1.) Adaptive boosting, which is the earliest algorithm and is very slow in comparison to the next generation of models. 2.) Gradient Boosting Machine (GBM), based on Friedman's idea of greedy function approximation [18]. 3.) Boosting models for particular types of datasets. The extreme Gradient Boost Machine (Xgbm) is an extension of GBM. The most compelling advantage of Xgbm is that we can run the algorithm in parallel on a high-performance computing (HPC) cluster or a cloud. 
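To ground this, a minimal Python sketch of training an Xgbm regressor in this setting is shown below. The 70:30 split, 5-fold cross-validation and RMSE metric mirror the experimental setup reported later in the paper, while the synthetic data, the column count and the specific hyperparameter values are illustrative assumptions rather than the tuned configuration.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Illustrative stand-in: X would hold the 117 trainable features of the
# technology-sector CDS subset and y the 5-year spread ("spread5").
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 117))
y = rng.normal(size=5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
dtrain, dtest = xgb.DMatrix(X_tr, label=y_tr), xgb.DMatrix(X_te, label=y_te)

params = {
    "objective": "reg:squarederror",  # squared-error loss, evaluated with RMSE
    "max_depth": 6,                   # interaction depth (illustrative)
    "eta": 0.05,                      # shrinkage / learning rate (illustrative)
    "subsample": 0.8,                 # bag fraction (illustrative)
    "tree_method": "hist",            # histogram algorithm, parallel across cores
}

# 5-fold cross-validation to choose the number of boosting rounds.
cv = xgb.cv(params, dtrain, num_boost_round=500, nfold=5,
            metrics="rmse", early_stopping_rounds=20, seed=42)
booster = xgb.train(params, dtrain, num_boost_round=len(cv))

pred = booster.predict(dtest)
rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
print(f"test RMSE: {rmse:.3f} after {len(cv)} boosting rounds")
```

The hyperparameter search over 243 grid points described in Section 4 essentially wraps a training call like this one, with combinations evaluated in parallel on the HPC cluster.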
Mathematically, we can use equations 2 and 3 to represent the gradient tree boosting algorithm for the predicted model:\n$f^{*} = \arg\min_{f} L(f)$, where $f = \{f(x_i)\}_{i=1}^{N}$ and $L(f) = \sum_{i=1}^{N} L(y_i, f(x_i))$ (2)\n$f^{B} = \sum_{b=0}^{B} f_b$, $f_b \in \mathbb{R}^{N}$, where $f_b = f_{b-1} - \gamma g_b$ and $g_b = \Big\{\Big[\frac{\partial L(f)}{\partial f}\Big]_{f = f_{b-1}(x_i)}\Big\}_{i=1}^{N}$ (3)\nwhere $f^{*}$ is an optimal prediction function based on a generic function $f$, $L(f)$ is a loss function, $x_i$ $(i = 1, 2, \dots, N)$ denotes the $i$-th observation and $y_i$ stands for the corresponding observed response. $f^{B}$ denotes the sum of the $B$ boosting functions over the $N$ observations, $f_b$ represents a weak learner of boosting, and $g_b$ is the steepest-descent direction." }, { "figure_ref": [], "heading": "Transformer Models", "publication_ref": [ "b39", "b40", "b41", "b43", "b44", "b45", "b46" ], "table_ref": [], "text": "Transformer [41], an attention-based structure, has generated significant interest due to its remarkable performance in computer vision (CV) [42] and natural language processing (NLP), exemplified by models like the Generative Pre-trained Transformer (GPT) [43][44] [45]. Its ability to model long-range dependencies and interactions in sequential data makes it an attractive option for time series modelling. Transformer models have been successfully applied in various time series forecasting tasks. State-of-the-art models include TimesNet [46], which extends 1D time series into 2D space and extracts complex temporal variations from the transformed 2D tensors. Crossformer [47] embeds input data into a 2D vector array, utilizing cross-dimension dependency for multivariate time-series forecasting. PatchTST [48] introduces patching and channel-independent structures, allowing the model to capture local semantic information and benefit from longer look-back windows. However, most of these models primarily focus on developing novel techniques to reduce the complexity of the original attention mechanism and achieve better performance. As a result, they are usually applied to energy, transport, and weather prediction applications. This research aims to evaluate these Transformer models from a Trustworthy AI perspective on the CDS dataset for strategic investment decisions." }, { "figure_ref": [], "heading": "CDS and Strategic Decision-Making", "publication_ref": [ "b36", "b37", "b38" ], "table_ref": [], "text": "Merton [38] developed a distance-to-default (DTD) measure based on market information, assuming that the fundamental value of a firm follows a certain stochastic process and computing the default probability from the level and volatility of its market value. Das et al. [39] and Duan et al. [40] treat the default of a firm as an intensity process, $\lambda_t$; thus, the probability of surviving from starting time $t=0$ to default time $t=\tau$ is $s_\tau = \exp(-\int_{0}^{\tau} \lambda_t \, dt)$. The forward intensity $\lambda_t$ depends on the firm and economic features and is of exponential affine form,\n$\lambda_t = \exp[B_{t-i}' X_{t-i}], \quad i \ge 0$ (4)\nwhere $B_{t-i} = [\beta_{0(t-i)}, \dots, \beta_{k(t-i)}]'$ is a vector of coefficients and $X_{t-i} = [1, X_{1(t-i)}, \dots, X_{k(t-i)}]$ is a vector of features, including accounting-based, market-based, and macroeconomic variables (such as equity value, price sale, inventory turnover, etc.). Assuming that, conditional on the given feature vector $X_{t-i}$, the forward default intensity is a constant, it can be expressed as $E(\lambda_t | X_{t-i}) = \lambda$.\nCDS enable market participants to shift the firm's default risk from an insurance buyer to an insurance seller. The buyer pays a premium to guarantee future potential protection. 
Hence, the decision of whether to buy or sell is often strategic because all market participants share the default risk. In order to predict selling or buying opportunities, the market participants require some trustworthy threshold level as an indicator. There are many accounting and economic features in a dataset; the challenge is to decide which feature is more important than the other and how to draw a threshold level. AI/ML can provide support for market participants' decisions." }, { "figure_ref": [], "heading": "Trustworthy AI (TAI) and Explainable AI (XAI)", "publication_ref": [ "b18", "b19", "b20", "b22", "b23", "b24", "b25", "b27", "b28", "b29", "b30", "b31", "b32", "b33" ], "table_ref": [], "text": "The decision on TAI is very challenging, especially for an application of high-stakes decision, because it involves many aspects of subjective views, such as human beliefs, faith, experiences, ethical values, emotions, justice, fairness, equality, duty, right and wrong, and good and evil. [19] Many metrics are hard to quantify.\nDuring the last decade, numerous ways of XAI have been developed because we often interpret AI/ML from different perspectives, such as users [20], logic [21] [22], biases [23], algorithms [24], responsibilities [25], methods/processes, models [26] [27], systems [28], stage [29], costs, and reasons [30]. Some researchers suggested that we should explain from a social science perspective [31]. Others [32] argue that it is not necessary to explain but interpret it. Burns et al. [33] proposed interpreting AI through hypothesis testing. However, Gilpin et al. [34] argued that the interpretation is insufficient. Whether we should explain or interpret it, many techniques are closely related. The issue of how we can apply a particular technique for a particular problem depends on a particular decision context and a dataset." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "A Brid's Eye View of the Dataset and Model Environment", "publication_ref": [], "table_ref": [], "text": "The dataset of the credit default swaps (CDS) has ten industrial sectors (See Fig. 4). The y-axis is \"spread5\" in the log scale. The spread5 represents the five-year contract of CDS. However, we increase spread5 by 10,000 times for analysis. It is a common practice for the CDS data. 4, the overall fluctuation of spread5 price has been reduced from 1.5 -8.5 before and during 2008 to 2.7 -6.5 after 2015. However, we focus on the technology sector for this research." }, { "figure_ref": [], "heading": "Sub-dataset for Technology Sector", "publication_ref": [], "table_ref": [], "text": "Compared with other industry sectors, the technology sector's fluctuation is relatively wider (See Fig. 5) for 19 companies. However, the overall trend of spread5 contract price has been narrowed down after 2015. It is important to notice that the scatter plot (Fig. 6) shows there are many missing values for some companies along the time domain. There could be many reasons why a company stopped trading for a while and resumed later.\nThe technology sector has 37,526 observations and 139 features. However, some features were generated during the pre-cleaning phase, which are dummy variables.\nOther variables have either no added values or are empty. Therefore, we removed these features and left with only 117 trainable features. 
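The construction of this sub-dataset can be sketched in a few lines of Python; the column names (sector, spread5) and the exact filtering rules below are illustrative assumptions based on the description above, not the actual preprocessing script.

```python
import numpy as np
import pandas as pd

# Assumed columns: "sector", "spread5" (five-year CDS contract), plus features.
cds = pd.read_csv("cds_panel.csv")

tech = cds[cds["sector"] == "Technology"].copy()
tech["spread5"] = tech["spread5"] * 10_000        # common scaling for CDS spreads
tech["log_spread5"] = np.log(tech["spread5"])     # log scale used in the plots

# Drop empty or no-added-value columns to keep only trainable features.
useless = [c for c in tech.columns
           if tech[c].isna().all() or tech[c].nunique(dropna=True) <= 1]
trainable = tech.drop(columns=useless)

print(tech.shape[0], "observations")              # roughly 37,526 in the paper
print(trainable.shape[1], "columns retained")     # about 117 trainable features
```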
" }, { "figure_ref": [], "heading": "Model Environment and Context", "publication_ref": [], "table_ref": [], "text": "The experiment aims to develop a prediction model for strategic investment decisions of CDS contracts. We want AI/ML to generate the overall predictive model to support our investment decisions (buy or sell) by drawing a threshold level of some important metrics (features)." }, { "figure_ref": [ "fig_1" ], "heading": "Selection of Algorithm for optimization", "publication_ref": [], "table_ref": [], "text": "Based on the above scatter plots and the decision context, we select two types of predictive models, namely the decision tree-based and transformer models, for our experiments. The characteristics of tree-based models satisfy many TAI properties: better transparency, explanation, reproducibility, and reasoning. Fig. 3 illustrates at least ten different tree-based models. During the initial phase of this study, we tried different tree-based models. The results indicate that Xgbm is the preferred model for prediction because Xgbm can run in parallel.\nCompared with GBM, Xgbm shows its advantage in running hyperparameters searching for a large dataset if we run the algorithm on a high-performance computing (HPC) platform or a cloud. Xgbm is generally 8-10 times faster than the GBM. In addition, we do not have to worry about missing values and can aggregate results for all 19 companies." }, { "figure_ref": [], "heading": "Experimental Setup, Assumptions and Results", "publication_ref": [ "b51" ], "table_ref": [], "text": "We first split the technology sub-dataset into a 70:30 ratio of 70% for training and 30% for testing. We also adopt a 5-fold cross-validation. The metric of the loss function is to measure root mean square error (RMSE). It is a common practice to use RMSE. [53] We first ran GBM experiments and set up 36 grid points for the initial hyperparameter search to get a basic intuition about the terrain of the hyperparameter search field. Once this initial search has been done, we run a full-scale Xgbm hyperparameter search for 243 grid points on our HPC environment configured with a 128-core and 256 GB RAM cluster. And then, we will select the optimal parameters for the final prediction model.\nAfter the final prediction model, we adopt five tools to explain the predictive model from global and local perspectives. These techniques include variable importance (VI) and partial dependent plots (PDP), individual conditional expectation (ICE), local interpretable model-agnostic explanations (LIME), and Shapley additive explanations (SHAP) values estimation. During our initial trial, we found that some categorical variables have a strong influence in the VI plot but very little explanatory power, such as \"redcode\" and \"cusip\". Therefore, we exclude these variables from our tests." }, { "figure_ref": [], "heading": "GBM Experimental Results", "publication_ref": [], "table_ref": [], "text": "The first experiment is the GBM, which aims to have a rough estimation of some parameters, including the number of trees, shrinkage, interaction node depth, k value of cross-validation folds, bag fraction rate, and the number of minimum nodes." }, { "figure_ref": [], "heading": "Fig. 7. GBM experimental results", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "The initial test showed that reducing the shrinkage rate (gradient step) does not help, but increasing interaction depth and the number of minimum nodes increases the prediction performance (See Fig. 7). 
The right bag fraction value also increases performance (See Table 2). However, these parameters are not optimal. We have to run a hyperparameter search to find the optimal values of all parameters. " }, { "figure_ref": [], "heading": "Transformer Models", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "We segmented the sub-dataset into 19 smaller datasets for the transformer models based on the company code \"redcode\". We then categorize these subsets into two groups for comparison according to the nature of their threshold observations. In the experiments, we split each subset into a 70:10:20 ratio for training, validation, and testing. Afterwards, we employ three transformer models (TimesNet, PatchTST, and Crossformer) for experiments on the HPC platform with one GPU and seven cores. The entire training process shows that PatchTST is a more efficient model. All transformer models in this experiment are in a \"long-term forecasting\" setting. The results are shown in Fig. 8 and Table 4." }, { "figure_ref": [], "heading": "Fig. 8. 19 Companies of RMSE Results for CDS Prediction Models", "publication_ref": [], "table_ref": [], "text": "The average RMSE for all models is 52.43. Based on the default parameter configuration, PatchTST's result is the best among these models. Notice that we did not implement a hyperparameter search for the transformer models because of limited time and resources. All results are based on a random selection of the models' parameters. Therefore, the results are not optimal. Now, let us explain or interpret the prediction results. " }, { "figure_ref": [], "heading": "Variable Importance or Influence (VI) Results", "publication_ref": [], "table_ref": [], "text": "The essence of the variable importance (VI) technique is its ability to identify and quantify the influence of individual features on the prediction performance. This technique is critical to understanding which features are most influential in making accurate predictions. We plot the top 20 influential features or variables in Fig. 9." }, { "figure_ref": [], "heading": "Fig. 9. VI results", "publication_ref": [], "table_ref": [], "text": "Notice that the order of the top five influential features is relatively stable for all predictive models, but the rest of the features may change from one model to another. If the relative influence value is less than 10%, the ranking order of influential features will change." }, { "figure_ref": [], "heading": "Partial Dependence Plot (PDP) Results", "publication_ref": [], "table_ref": [], "text": "According to Fig. 9, which gives the variable importance (influence) results, we select the six most relatively influential variables for PDP analysis (See Fig. 10): \"equity value\" (total assets minus total liabilities), \"price sale\" (market capitalization/total revenue), \"recovery\" (a kind of protection rate for a CDS buyer), \"inventory turnover\", \"interest coverage ratio\", and \"default spread\". The PDP provides a transparent and interpretable visualization of the relationship between a particular feature and the predicted outcome while keeping all other features constant. This technique assumes features are independent and identically distributed (i.i.d.) random variables." }, { "figure_ref": [], "heading": "Fig. 10. PDP Results", "publication_ref": [], "table_ref": [], "text": "For example, if the recovery value is less than the 0.2 threshold, the spread5 will drop nearly five times (refer to Fig. 10, diagram 2). On the other hand, if the default spread value is larger than 2, the spread5 value increases by about 0.5. This explanation technique exhibits the average view of the prediction results. We can use the ICE technique to reveal the prediction results in more detail for each instance."
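The PDP and ICE curves discussed in this and the next subsection can be reproduced in spirit with scikit-learn; the sketch below is illustrative only (it assumes the prepared numeric frame `X` and target `y` from the earlier preprocessing sketch, placeholder hyperparameters, and assumed feature column names):

```python
import matplotlib.pyplot as plt
from sklearn.inspection import PartialDependenceDisplay
from xgboost import XGBRegressor

# Fit a gradient-boosted model on the prepared data (placeholder hyperparameters).
model = XGBRegressor(n_estimators=500, learning_rate=0.1, max_depth=5).fit(X, y)

# Partial dependence (average effect) together with ICE curves (one per observation)
# for two of the influential features.
fig, ax = plt.subplots(figsize=(9, 4))
PartialDependenceDisplay.from_estimator(
    model,
    X,
    features=["recovery", "default_spread"],  # assumed column names
    kind="both",     # ICE curves plus the averaged PDP line
    subsample=200,   # draw only a subsample of ICE curves for readability
    ax=ax,
)
plt.tight_layout()
plt.show()
```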
 }, { "figure_ref": [ "fig_7", "fig_1" ], "heading": "Individual Conditional Expectation (ICE) Results", "publication_ref": [], "table_ref": [], "text": "This study selects the top two variables (equity value and price sale) for the ICE experiments. There are two plots for each variable in Fig. 11: one is a simple stacked plot, and the other is a centred plot. ICE delivers a fine-grained understanding of how a specific feature affects the prediction for a single observation. To a certain extent, it provides a distributed view of a particular instance's influence on a particular feature's prediction. This technique is invaluable for gaining insights into complex model behaviour and building trust in black-box model predictions. As indicated in Fig. 3, the GBM is one type of ensemble model because individual weak models are successively added to the ensemble. While ensemble models can be very powerful in predictive performance, they tend to be more complex than individual models. Balancing this complexity with the benefits of predictive accuracy is an important consideration when using ensembles in practice. In Fig. 11, each black line is an observation, and the red line is the PDP." }, { "figure_ref": [], "heading": "Local interpretable model-agnostic explanations (LIME) Results", "publication_ref": [], "table_ref": [], "text": "LIME is another technique that we tested in this paper. It aims to offer interpretable explanations for the predictions made by complex models. The primary goal of LIME is to make the predictive model more interpretable, and it focuses on the local level rather than the global one. Therefore, we selected eight individual cases for LIME analysis (See Fig. 12). The first four cases (upper level) are before the 2008 financial crisis, and the other four cases (lower level) are after the 2008 financial crisis.\nFig. 12. LIME\nFig. 12 shows how the features contribute to the accuracy of a prediction. Only one case shows that \"equity value\" has a positive impact on the prediction value, but its predictive RMSE is very large compared with the other results. The remaining cases show a negative impact when RMSE values are less than 100." }, { "figure_ref": [ "fig_8", "fig_8" ], "heading": "Shapley Values (SHAP) Results", "publication_ref": [ "b34", "b47", "b48", "b49" ], "table_ref": [ "tab_4" ], "text": "Shapley value estimation attempts to explain complex machine learning models by interpreting individual predictions. The essence of Shapley values comes from cooperative game theory and its application in allocating the value or contribution of each feature in a coalition game. Shapley value estimation captures fairness and marginality. It considers the permutations of feature orderings, calculates each permutation's marginal contribution, and then averages these contributions to estimate the Shapley value for each feature (See Fig. 13). However, data availability is one of the critical factors for Shapley value estimation: we may or may not have explicit measurements of each feature's contribution, depending on our decision context. Shapley value estimation is also quite sensitive to the assumed distribution model. We adopt an empirical distribution model for the estimation (See Fig. 13). The result will be slightly different if we use a \"copula\" distribution.
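As a rough illustration of this kind of per-prediction attribution, the sketch below uses the Python `shap` package on the model fitted in the PDP sketch (an assumption for illustration, not the authors' estimator; the empirical and copula options above refer to how the feature distribution is modelled during estimation, which `shap.TreeExplainer` handles with its own defaults):

```python
import shap

# Tree-based Shapley value estimation for the fitted gradient-boosted model.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Local explanation of a single CDS observation, plus a global summary over the data.
shap.force_plot(explainer.expected_value, shap_values[0, :], X.iloc[0, :], matplotlib=True)
shap.summary_plot(shap_values, X)
```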
Shapley value estimation is both an art and a science. The explanation method depends on both the data and the underlying characteristics of feature interaction. Overall, Shapley value estimation provides an equitable way to allocate each feature's contribution to the predicted case. With the given dataset, we decided to use tree-based models and transformers for the experiments. Our experimental results illustrate that the Xgbm technique is more compelling than the other models because it is very flexible for different datasets. We can also run the algorithm in parallel, even on a single machine with multiple cores. When we run a hyperparameter search (optimizer), Xgbm can save a lot of time. Table 3 illustrates that Xgbm can save as much as a week for a 243-grid-point hyperparameter search. The Xgbm technique implies that we can quickly find an optimal solution that we can trust for a strategic investment decision. Compared with many previous research works [35], this study focuses on a systematic method to approach the TAI issue in the context of strategic investment decisions. We intend to provide a general framework for the TAI solution.\nThe limitation of this study is that we have not covered all TAI properties, such as data governance, privacy, and security issues. As we indicated before, the data has been pre-cleaned. During that cleaning, many observations with missing values were deleted, which may affect the accuracy of a prediction model. Furthermore, we did not run a hyperparameter search for the transformer models. These issues will be part of our future study when we receive the raw dataset and have enough computational resources. Another fundamental issue is that many AI/ML techniques focus only on correlation rather than logical reasoning.\nLenat and Marcus [49] argued that Large Language Models (LLMs) are incomplete because they lack reasoning capabilities. Therefore, these models cannot be completely trustworthy. They proposed a rule-based system known as \"Cyc\" to be a complementary system for modern AI/ML models, and they have been working on the \"Cyc\" project since 1984. Lenat and Marcus suggested that modern AI/ML models are more like Kahneman's system-1 thinking [50], while \"Cyc\" is similar to Kahneman's system-2 thinking, which is underpinned by many logical reasoning approaches, such as inductive, deductive, and abductive methods. The Cyc project intends to build a common-knowledge AI that we can trust for strategic decision-making.\nSteve Jobs once stated, \"You cannot connect the dots looking forward; you can only connect them looking backwards.\" [51] Similarly, modern AI/ML models can only extract the patterns of connected dots by looking backwards at a dataset, but strategic decision-making requires us to place dots by looking forward. This seems to be a dilemma, or even a paradox. How can we trust dots connected by looking backwards to guide the placing of dots looking forward? The answer could lie in common-knowledge AI." }, { "figure_ref": [], "heading": "Conclusions and Future Direction", "publication_ref": [], "table_ref": [], "text": "This research aims to create a novel framework for trustworthy AI from a strategic decision-making perspective. We use GBM, Xgbm, and transformer models to test our hypothesis on the given dataset. The experimental results show that Xgbm is the most compelling model for strategic investment decisions.
This new framework of trustworthy AI provides a practical solution that can be applied to many contexts. It draws a baseline for deciding what to decide. Our main contribution is to build a bridge between trustworthy AI properties and practical ML solutions for strategic decision-making. However, we only cover a limited part of TAI in this research. We will cover all the TAI properties and common-knowledge AI for other decision contexts in future research." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This research was funded in whole or part by the Luxembourg National Research Fund (FNR), grant ID C21/IS/16221483/CBD and grant ID 15748747. For open access, the author has applied a Creative Commons Attribution 4.0 International (CC BY 4.0) license to any Author Accepted Manuscript version arising from this submission." } ]
When engaging in strategic decision-making, we are frequently confronted with overwhelming information and data. The situation can be further complicated when certain pieces of evidence contradict each other or become paradoxical. The primary challenge is how to determine which information can be trusted when we adopt Artificial Intelligence (AI) systems for decision-making. This issue is known as "deciding what to decide" or Trustworthy AI. However, the AI system itself is often considered an opaque "black box". We propose a new approach to address this issue by introducing a novel framework of Trustworthy AI (TAI) encompassing three crucial components of AI: representation space, loss function, and optimizer. Each component is loosely coupled with four TAI properties. Altogether, the framework consists of twelve TAI properties. We aim to use this framework to conduct TAI experiments with quantitative and qualitative research methods to satisfy the TAI properties for the decision-making context. The framework allows us to formulate an optimal prediction model trained on the given dataset and apply it to the strategic investment decision of credit default swaps (CDS) in the technology sector. Finally, we provide our view of the future direction of TAI research.
Trustworthy AI: Deciding What to Decide - A Strategic Decision on Credit Default Swaps Investment
[ { "figure_caption": "Descriptive • Explain : To provide explanations for empirical phenomena (Value) • Communicate (interpret): To relate knowledge and understand (Value) • Inference • Reason: To identify conditions and deduce logical implications (Evaluation) • Explore: To investigate possibilities and hypotheticals (Evaluation) • Predictive • Design: To choose features of institutions, polices and rules (Selection) • Act: To guide policy choices and strategic actions (Selection) • Predict: To make numerical and categorical predictions of future (Selection) adopt validation metrics, hyperparameter tuning, model selection (comparing different models), and interpretability. These techniques can satisfy TAI properties in practice.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Tree-Based Algorithms' Evolution", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "[ 52 ]52The derivatives market usually has ten different CDS contracts regarding time or year[37]. The dataset that we have only has a five-year contract. The x-axis is the time domain between 3/Jan/2006 and 29/Dec/2017. It contains a total of 749,783 observations and 139 features. The data has been precleaned manually. Consequently, many missing values have been deleted rather than estimated through multiple imputations.", "figure_data": "", "figure_id": "fig_3", "figure_label": "52", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Scatt Plot of 1% Sample of the Dataset As shown in Fig.4, the overall fluctuation of spread5 price has been reduced from 1.5 -8.5 before and during 2008 to 2.7 -6.5 after 2015. However, we focus on the technology sector for this research.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .Fig. 6 .56Fig. 5. Scatt Plot Sub-dataset for the Technology Sector", "figure_data": "", "figure_id": "fig_5", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "Fig. 11 .11Fig. 11. ICE Results", "figure_data": "", "figure_id": "fig_7", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Fig. 13 .13Fig. 13. SHAP Values", "figure_data": "", "figure_id": "fig_8", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Table 1 illustrates the details of Domingos' five Schools of ML. Domingos Five Schools of Machine Learning", "figure_data": "Central ProblemKey AlgorithmsReasoning with symbolsDecision tree (if-then)Analyzing perceptual information Neural network/Deep neural network (perception)Managing uncertaintyBayesian networks (statistical data)Discovering new structureGenetic program (natural selection)Exploiting similaritiesNearest Neighbours (previous cases)", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "GBM Experiment ResultsWe set up 243 grid points for the Xgbm hyperparameter search based on the intuition gained from initial tests. With a 128-node HPC cluster, it only takes 1.3 hours. We could achieve an even better RMSE of 25.97 by running a large hyperparameter (768 grid points) and more trees(3,500). However, the model improves very little for test RMSE after around 500 trees. It only improves the training RMSE. 
Therefore, we select 500 trees as a cutoff point.", "figure_data": "ParametersFig.7 Left DiagramFig.7 Right DiagramFinal ResultsDistributionGaussianGaussianGaussian# trees1000500800", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Xgbm Experiment Results", "figure_data": "ParametersCPU usage timeSystem timeElapsed timeHPC platform593,717.9869.554,713.02Shrinkage or learning rate Max tree depth Min. rows /each end nodek fold CV0.10515Subsample for each treeColumn sampleNumber of treesMin RMSE0.80150026.30", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Transformer Models' RMSE Results", "figure_data": "Transformer ModelsTimesNetPatchTSTCrossformerAverageTraining Time1203.64368.191615.801062.54RMSE54.7138.2964.3052.43", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Fig 8 demonstrates each company's RMSE result for different transformers: TimesNet, PatchTST, and Crossformer. PatchTST performs the best with a set of education guess parameters. However, compared with the XGBM model, the Xgbm model has more explanatory power. To better explain the CDS predictive model, we implemented five experiments to satisfy the listed trustworthy properties: transparency, explainable/ interpretability, usability, accuracy, robustness, reliability, and reproducibility. VI experiment demonstrates which features have a high influence on the predictive model. Based on the VI ranking order, we plot out the number of PDP that provides crucial insight for the strategic investment decision, which is when to sell or buy the CDS (spread5) contracts. The ICE plot shows how individual observation contributes to the overall PDP.LIME provides a local explanation for the prediction model. It generates explanations by training an interpretable surrogate model (usually a simpler linear model) on a neighbourhood of the data point of interest. It tries to mimic the behaviour of the complex model locally. Generally, the LIME aims to make GBM or Xgbm more transparent and interpretable by generating local explanations of how a model arrived at a particular prediction for a specific instance. It is essential in applications where model interpretability is critical for trust and decision-making. Similarly, Shapley value estimation aims to quantify the contribution of each feature across all possible combinations of features. Shapley's method is often more stable and theoretically grounded, providing consistent explanations across different settings.Compared with many previous research works[20][22][23][24]", "figure_data": "", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" } ]
Caesar Wu; Yuan-Fang Li; Jian Li; Jingjing Xu; Bouvry Pascal
[ { "authors": "F Flores; C Solomon", "journal": "Business Ethics Quarterly", "ref_id": "b0", "title": "Creating Trust1", "year": "1998" }, { "authors": "G Cawkwell", "journal": "Routledge", "ref_id": "b1", "title": "Thucydides and the Peloponnesian War", "year": "2006" }, { "authors": "H A Kissinger; E Schmidt; D Huttenlocher", "journal": "Hachette UK", "ref_id": "b2", "title": "The age of AI: and our human future", "year": "2021" }, { "authors": "M Wing", "journal": "Communications of the ACM", "ref_id": "b3", "title": "Trustworthy ai", "year": "2021" }, { "authors": "L Siebert", "journal": "AI and Ethics", "ref_id": "b4", "title": "Meaningful human control: Actionable properties for AI system development", "year": "2022" }, { "authors": "E Eryurek", "journal": "Reilly Media, Inc", "ref_id": "b5", "title": "Data Governance: The Definitive Guide. People, Processes, and Tools to Operationalize Data Trustworthiness O", "year": "2021" }, { "authors": "B Li", "journal": "ACM Computing Surveys", "ref_id": "b6", "title": "Trustworthy AI: From principles to practices", "year": "2023" }, { "authors": "E Page", "journal": "Basic Books", "ref_id": "b7", "title": "The model thinker: What you need to know to make data work for you", "year": "2018" }, { "authors": "M Kuhn; Julia ; S ", "journal": "", "ref_id": "b8", "title": "Tidy Modeling With R: A Framework for Modeling in the Tidyverse", "year": "2021" }, { "authors": "C Wu", "journal": "InICSOFT", "ref_id": "b9", "title": "Cloud Computing Market Segmentation", "year": "2018" }, { "authors": "P Domingos", "journal": "Basic Books", "ref_id": "b10", "title": "The master algorithm: How the quest for the ultimate learning machine will remake our world", "year": "2015" }, { "authors": "M Stulz", "journal": "Journal of Economic Perspectives", "ref_id": "b11", "title": "Credit default swaps and the credit crisis", "year": "2010" }, { "authors": "L Breiman", "journal": "Routledge", "ref_id": "b12", "title": "Classification and regression trees", "year": "2017" }, { "authors": "M Lundberg", "journal": "Nature machine intelligence", "ref_id": "b13", "title": "From local explanations to global understanding with explainable AI for trees", "year": "2020" }, { "authors": "A Mayr", "journal": "Methods of information in medicine", "ref_id": "b14", "title": "The evolution of boosting algorithms", "year": "2014" }, { "authors": "Z He", "journal": "", "ref_id": "b15", "title": "Gradient boosting machine: a survey", "year": "2019" }, { "authors": "J Surowiecki", "journal": "Anchor", "ref_id": "b16", "title": "The wisdom of crowds", "year": "2005" }, { "authors": "H Friedman", "journal": "Annals of statistics", "ref_id": "b17", "title": "Greedy function approximation: a Gradient boosting machine", "year": "2001" }, { "authors": "C Wu; P Bouvry", "journal": "ACM Computing Surveys", "ref_id": "b18", "title": "Strategic Decisions: Survey, Taxonomy, and Future Directions from Artificial Intelligence Perspective", "year": "2023" }, { "authors": "D Shin", "journal": "Journal of Broadcasting & Electronic Media", "ref_id": "b19", "title": "User perceptions of algorithmic decisions in the personalized AI system: perceptual evaluation of Fairness, accountability, transparency, and explainability", "year": "2020" }, { "authors": "S Verma", "journal": "", "ref_id": "b20", "title": "Counterfactual explanations for machine learning: A review", "year": "2020" }, { "authors": "R Mothilal", "journal": "", "ref_id": "b21", "title": "Explaining machine learning classifiers through 
diverse counterfactual explanations", "year": "2020" }, { "authors": "N Mehrabi", "journal": "ACM Computing Surveys(CSUR)", "ref_id": "b22", "title": "A survey on bias and Fairness in machine learning", "year": "2021" }, { "authors": "A Das; Paul R ", "journal": "", "ref_id": "b23", "title": "Opportunities and challenges in explainable artificial intelligence (xai): A survey", "year": "2020" }, { "authors": "B Arrieta", "journal": "AI Information fusion", "ref_id": "b24", "title": "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible", "year": "2020" }, { "authors": "F Bodria", "journal": "", "ref_id": "b25", "title": "Benchmarking and survey of explanation methods for black box models", "year": "2021" }, { "authors": "P Angelov", "journal": "Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery", "ref_id": "b26", "title": "Explainable artificial intelligence: an analytical review", "year": "2021" }, { "authors": "D Pedreschi", "journal": "", "ref_id": "b27", "title": "Meaningful explanations of black box AI decision systems", "year": "2019" }, { "authors": "S Jesus", "journal": "", "ref_id": "b28", "title": "How can I choose an explainer? An application-grounded evaluation of posthoc explanations", "year": "2021" }, { "authors": "A Adadi; Mohammed B ", "journal": "IEEE Access", "ref_id": "b29", "title": "Peeking inside the black box: a survey on explainable artificial intelligence (XAI)", "year": "2018" }, { "authors": "T Miller", "journal": "Artificial intelligence", "ref_id": "b30", "title": "Explanation in artificial intelligence: Insights from the social sciences", "year": "2019" }, { "authors": "Cynthia Rudin", "journal": "Nature Machine Intelligence", "ref_id": "b31", "title": "Stop explaining black-box machine learning models for high-stakes decisions and use interpretable models instead", "year": "2019" }, { "authors": "C Burns", "journal": "", "ref_id": "b32", "title": "Interpreting black box models via hypothesis testing", "year": "2020" }, { "authors": "H Gilpin", "journal": "IEEE", "ref_id": "b33", "title": "Explaining explanations: An overview of interpretability of machine learning", "year": "2018" }, { "authors": "S Sharma", "journal": "", "ref_id": "b34", "title": "Certifai: Counterfactual explanations for robustness, transparency, interpretability, and Fairness of artificial intelligence models", "year": "2019" }, { "authors": "C Wu", "journal": "", "ref_id": "b35", "title": "Strategic Predictions and Explanations By Machine Learning", "year": "2023" }, { "authors": "Robert C Merton", "journal": "The Journal of Finance", "ref_id": "b36", "title": "On the pricing of corporate debt: The risk structure of interest rates", "year": "1974" }, { "authors": "R Das", "journal": "Journal of Banking & Finance", "ref_id": "b37", "title": "Accounting-based versus market-based cross-sectional models of CDS spreads", "year": "2019" }, { "authors": "J Duan", "journal": "Journal of Econometrics", "ref_id": "b38", "title": "Multiperiod corporate default prediction: a forward intensity approach", "year": "2012" }, { "authors": "A Vaswani", "journal": "", "ref_id": "b39", "title": "Attention is all you need", "year": "2017" }, { "authors": "Y Liu", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b40", "title": "A survey of visual transformers", "year": "2023" }, { "authors": "A Radford", "journal": "OpenAI blog", "ref_id": "b41", "title": "Improving language understanding by 
generative pre-training", "year": "2018" }, { "authors": "A Radford", "journal": "OpenAI blog", "ref_id": "b42", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "T Brown", "journal": "", "ref_id": "b43", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "H Wu", "journal": "", "ref_id": "b44", "title": "Timesnet: Temporal 2d-variation modelling for general time series analysis", "year": "2022" }, { "authors": "Y Zhang; Y Yan", "journal": "", "ref_id": "b45", "title": "Crossformer: Transformer utilizing cross-dimension dependency for multivariate time series forecasting", "year": "2022" }, { "authors": "Y Nie", "journal": "", "ref_id": "b46", "title": "A time series is worth 64 words: Long-term forecasting with transformers", "year": "2022" }, { "authors": "D Lenat; G Marcus", "journal": "", "ref_id": "b47", "title": "Getting from Generative AI to Trustworthy AI: What LLMs might learn from Cyc", "year": "2023" }, { "authors": "D Kahneman", "journal": "", "ref_id": "b48", "title": "Thinking fast and slow", "year": "2017" }, { "authors": "S Jobs", "journal": "", "ref_id": "b49", "title": "Commencement address", "year": "2005" }, { "authors": "J Hull; Alan W ", "journal": "Journal of Derivatives", "ref_id": "b50", "title": "The valuation of credit default swap options", "year": "2003" }, { "authors": "T Chai; R Draxler", "journal": "Geoscientific model development", "ref_id": "b51", "title": "Root mean square error (RMSE) or mean absolute error (MAE)?-Arguments against avoiding RMSE in the literature", "year": "2014-06-30" } ]
[ { "formula_coordinates": [ 5, 133.19, 224.3, 3.32, 8.87 ], "formula_id": "formula_1", "formula_text": "•" }, { "formula_coordinates": [ 7, 141.62, 259.48, 325.28, 19.2 ], "formula_id": "formula_2", "formula_text": "𝑓 * = argmin 𝑓 𝐿(𝑓) ; 𝑤ℎ𝑒𝑟𝑒 𝑓 = {𝑓(𝑥 𝑖 )} 𝑖=1 𝑁 ; 𝐿(𝑓) = ∑ 𝐿 𝑁 𝑖=1 |𝑦 𝑖 , 𝑓(𝑥 𝑖 )| (2" }, { "formula_coordinates": [ 7, 466.9, 262.05, 3.92, 8.96 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 7, 131.9, 286.96, 338.92, 36.6 ], "formula_id": "formula_4", "formula_text": "𝑓 𝐵 = ∑ 𝑓 𝑏 , 𝑓 𝑏 ∈ ℝ 𝑁 ; 𝑤ℎ𝑒𝑟𝑒 𝑓 𝑏 = 𝑓 𝑏-1 -𝛾𝑔 𝑏 ; 𝑔 𝑏 = {[ 𝜕𝐿(𝑓) 𝜕𝑓 ] 𝑓=𝑓 𝑏-1(𝑥 𝑖 ) } 𝑖=1 𝑁 𝐵 𝑏=0(3)" }, { "formula_coordinates": [ 8, 234.17, 308.68, 236.65, 13.2 ], "formula_id": "formula_5", "formula_text": "𝜆 𝑡 = exp[𝐵 𝑡-𝑖 ′ 𝑋 𝑡-𝑖 ], 𝑖 ≥ 0(4)" }, { "formula_coordinates": [ 8, 159.02, 332.5, 107.5, 11.33 ], "formula_id": "formula_6", "formula_text": "𝐵 𝑡-𝑖 = [𝛽 0(𝑡-i) , … , 𝛽 𝑘(𝑡-i) ]" } ]
2023-11-21
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b29", "b13", "b29", "b29", "b29", "b29", "b8" ], "table_ref": [], "text": "Recently, object detection has achieved great success with the help of sufficient labeled data. However, annotating abundant fully labeled datasets is a costly and time-consuming process. Therefore, to effectively leverage abundant unlabeled data, semi-supervised object detection (SSOD) has received extensive attention. The existing works in SSOD [13,23,27,30] mainly focus on general object detection, in which the objects are annotated with horizontal boxes. However, in some scenes, such as aerial images, horizontal boxes have difficulty efficiently representing objects [3,25]. In contrast to those in general scenes, objects in aerial images are typically captured from the bird's-eye view (BEV) and consequently present additional challenges including arbitrary orientations, small scales and dense distribution [3]. Therefore, for the processing of such images, semi-supervised oriented object detection should be given serious consideration.\nExisting SSOD methods strongly rely on precise pseudo labels, which can be divided into sparse pseudo-labels [10, 14,23,27] and dense pseudo-labels [30], according to the sparsity of the pseudo-label. For sparse pseudo-labels, bounding boxes and their labels are provided as the supervision information, similar to the ground truth, and some strict conditions are applied to select reliable pseudo labels [27]. For dense pseudo-labels, pseudo labels are directly selected from the original output of the teacher model without any complicated post-processing steps. By removing post-processing steps, dense pseudo-labels retain richer information [30] and thus have received extensive attention. However, for aerial scenes, existing dense pseudo-label selection methods are inefficient. Dense Teacher [30] proposes a region selection technique to highlight the key information and suppress noise, but it requires a fixed selection ratio to control the number of pseudo labels, which limits the ability to select sufficient pseudo labels in dense scenes, as shown in Fig. 2b, and may cause the selected pseudo labels to contain abundant noise in other scenes. SOOD [7] combines dense pseudo-labels with sparse pseudo-labels to reduce noise. In SOOD [7], dense pseudo labels are randomly sampled from the teacher's predictions, but this involves a sequence of post-processing steps with fine-tuned hyper-parameters, which has been shown to be sensitive in dense scenes [30].\nIn this study, we find that an important factor contributing to the above problems is that the density of potential objects is not taken into account in the existing dense pseudo-label selection methods. In general scenes, objects tend to be evenly distributed and the importance of potential objects' density is ignored. However, in aerial scenes, objects tend to be densely distributed, which means that most of the objects are concentrated in a small area while the rest mainly consists of background. In this case, considering the density of potential objects during the selection of dense pseudo-labels can greatly facilitate the selection process.\nTherefore, we propose Adaptive Dense Pseudo Label Selection (ADPLS) for semi-supervised oriented object detection.
The key component of ADPLS is an adaptive mechanism designed to estimate the density of potential objects in an image and use it to guide the selection of dense pseudo-labels. Specifically, we consider that the post-sigmoid logit predictions of the teacher model can act as indicators of where the features are rich [29]. Thus, we further propose the mean Feature-Richness Score (mFRS) to estimate the density of potential objects contained in an image and then use this score to adjust the number of dense pseudo labels selected. With the help of ADPLS, we formulate a direct way to integrate potential object information into the dense pseudo-label selection process.\nThe proposed ADPLS method greatly surpasses the current semi-supervised oriented object detection method, especially when labeled data are scarce. Specifically, our approach reaches 55.68 mAP with 10% labeled data on the DOTA-v1.5 benchmark, surpassing the previous best method SOOD [7] by 7.05 points. Concretely, this paper offers the following contributions to the field.\n• We find that ignoring the potential objects' density in the existing dense pseudo-label selection methods impedes the performance of semi-supervised oriented object detection.\n• We propose a simple but effective method, called Adaptive Dense Pseudo Label Selection (ADPLS), to formulate a direct way to integrate potential object information into the dense pseudo-label selection process and select suitable pseudo labels.\n• Our ADPLS method achieves state-of-the-art performance under various settings on the DOTA-v1.5 dataset." }, { "figure_ref": [], "heading": "Related works", "publication_ref": [ "b29", "b10", "b13", "b14", "b29", "b8" ], "table_ref": [], "text": "Semi-Supervised Object Detection. In Dense Teacher [30], sparse pseudo-boxes are replaced with dense predictions as a straightforward form of pseudo-label to eliminate the influence of post-processing and handcrafted thresholds. Consistent Teacher [23] combines adaptive anchor assignment, a 3D feature alignment module, and a Gaussian Mixture Model to reduce inconsistency during training. However, the above works all focus on general object detection. This paper aims to improve the performance of semi-supervised oriented object detection.\nOriented Object Detection. Different from general object detection, oriented object detection locates objects with Oriented Bounding Boxes (OBBs). Typically, oriented objects are captured from the BEV perspective, as in the images acquired by satellites and unmanned aerial vehicles (UAVs).\nIn recent years, many methods have been proposed to improve the performance in this area. RoI Transformer [3] combines an RRoI learner to convert horizontal regions of interest (HRoIs) into rotated regions of interest (RRoIs) and an RPS RoI Align module to extract spatially rotation-invariant feature maps. In ReDet [5], rotation-equivariant networks and RiRoI Align are adopted to extract rotation-invariant features in both spatial and orientation dimensions. Oriented RCNN [26] designs a lightweight module to generate oriented proposals and proposes a midpoint offset to represent objects. LSKNet [11] explores large and selective kernel mechanisms in oriented object detection. However, these works are all implemented in a supervised setting, whereas the present paper focuses on semi-supervised oriented object detection.\nPseudo Label Selection in Semi-supervised Object Detection. In SSOD, as discussed in Sec. 
1, pseudo-labels can be divided into sparse pseudo-labels and dense pseudo-labels according to their sparsity. For the selection of the former, a threshold-based method is usually used to select reliable pseudo labels [14,15,23,27]. Nevertheless, some researchers have noted the drawbacks of using a fixed threshold. For example, in Consistent Teacher [23], a Gaussian Mixture Model is used in place of a fixed threshold to adopt class-wise adaptive thresholds in accordance with the training status. However, the selection of dense pseudo-labels has not yet been sufficiently studied. Dense Teacher [30] relies on sorting based on the Feature-Richness Score (FRS) [29] to select reliable pseudo labels but needs a pre-defined fixed hyper-parameter to control the selection number. SOOD [7] selects reliable dense pseudo labels by randomly sampling from the teacher's predictions, and thus it requires a sequence of post-processing steps with fine-tuned hyper-parameters. In contrast, our method designs an adaptive mechanism to select appropriate dense pseudo labels based on the density of potential objects contained in an image, thus removing the above problems.\nSemi-supervised Oriented Object Detection. Recently, SOOD [7] explored semi-supervised oriented object detection by introducing global consistency and adaptive weights based on the orientation gap between the teacher and the student models, achieving excellent performance. Compared with SOOD [7], our method introduces extra potential object information into the selection of dense pseudo labels, and thus improves the quality of dense pseudo labels." }, { "figure_ref": [ "fig_2" ], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce our method in detail. In Sec. 3.1, we first introduce the pseudo-labeling framework. In Sec. 3.2, we further describe the proposed Adaptive Dense Pseudo Label Selection (ADPLS) method. Figure 3 illustrates an overview of our proposed method." }, { "figure_ref": [], "heading": "Pseudo-labeling Framework", "publication_ref": [ "b29", "b19" ], "table_ref": [], "text": "We first introduce the pseudo-labeling framework [27,30] that is widely used in SSOD. Our method follows the teacher-student paradigm [20]. In each training iteration, a batch composed of both labeled images and unlabeled images is sampled, where the sampling is controlled by the sample ratio $s_r$. Then the teacher model generates pseudo labels on the unlabeled images, and the student model takes both the pseudo labels on unlabeled images and the ground truth on labeled images as its training targets. The training objective consists of a supervised loss and an unsupervised loss:\n$L_s = \frac{1}{N_l} \sum_{i}^{N_l} \left[ L_{cls}(T(I_l^i)) + L_{reg}(T(I_l^i)) \right]$ (1)\n$L_u = \frac{1}{N_u} \sum_{i}^{N_u} \left[ L_{cls}(T'(I_u^i)) + L_{reg}(T'(I_u^i)) \right]$ (2)\nwhere $L_{cls}$ and $L_{reg}$ denote the classification and regression losses, and $N_l$ and $N_u$ denote the numbers of labeled and unlabeled images, respectively. $I_l^i$ and $I_u^i$ represent the i-th labeled image and the i-th unlabeled image, respectively. $T$ and $T'$ indicate weak and strong augmentation, respectively. The overall loss is defined as a weighted sum of the supervised and unsupervised losses:\n$L = L_s + \alpha L_u$ (3)\nAs training progresses, the parameters of the teacher model are updated in an EMA manner:\n$\theta_t = (1 - \lambda)\theta_t + \lambda\theta_s$ (4)\nwhere $\lambda$ is the momentum."
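A compact sketch of this training loop is given below (PyTorch-style pseudocode for illustration only, not the authors' MMRotate implementation; the detector constructor, `detector_loss`, the data loaders, the augmentation functions, the optimizer, and the EMA rate value are all assumptions):

```python
import copy
import torch

student = build_detector()            # assumed constructor for the rotated detector
teacher = copy.deepcopy(student)      # the teacher starts as a copy of the student
for p in teacher.parameters():
    p.requires_grad_(False)

alpha = 1.0      # weight of the unsupervised loss in Eq. (3)
lam = 0.0004     # small EMA rate in Eq. (4), so the teacher changes slowly

for (imgs_l, targets_l), imgs_u in zip(labeled_loader, unlabeled_loader):
    # The teacher predicts on weakly augmented unlabeled images to form pseudo-labels.
    with torch.no_grad():
        pseudo_targets = teacher(weak_aug(imgs_u))

    # The student trains on labeled data and on strongly augmented unlabeled data.
    loss_s = detector_loss(student(weak_aug(imgs_l)), targets_l)
    loss_u = detector_loss(student(strong_aug(imgs_u)), pseudo_targets)
    loss = loss_s + alpha * loss_u

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # EMA update of the teacher, following Eq. (4).
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(1.0 - lam).add_(p_s, alpha=lam)
```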
}, { "figure_ref": [], "heading": "Adaptive Dense Pseudo Label Selection", "publication_ref": [ "b8", "b8", "b29", "b23", "b0", "b23" ], "table_ref": [], "text": "The performance of the detector depends on the quality of the pseudo labels. To effectively leverage potential objects in the unlabeled data with huge density differences, we propose Adaptive Dense Pseudo Label Selection (ADPLS) to integrate potential object information into the dense pseudo-label selection process.\nIdeally, we want to select dense pseudo labels according to the density of potential objects contained in an image. However, in practice, an accurate estimate of the potential objects is impossible. Therefore, we seek an approximate method to estimate it. We consider that the post-sigmoid logit predictions can well serve as indicators of where the features are rich [29] and are easily obtained in the pseudo-labeling framework. We empirically find that the feature richness density throughout the image can serve as a proxy to estimate the density of potential objects.\nSpecifically, we propose the mean Feature-Richness Score (mFRS) to estimate the density of potential objects. In the previous work [29], the Feature-Richness Score (FRS) is defined as:\n$S_{lij} = \max_{c} y_{lij,c}$ (5)\nwhere $y_{lij,c}$ is the probability of category c in the l-th feature pyramid network (FPN) layer at location (i, j) in the corresponding feature map. We further define the mFRS as:\n$S_{mean} = \frac{1}{N} \sum_{l=1}^{M} \sum_{i=1}^{W_l} \sum_{j=1}^{H_l} S_{lij}$ (6)\nwhere $M$ is the number of FPN layers, (i, j) denotes a location in the corresponding feature map, and $N = \sum_{l=1}^{M} \sum_{i=1}^{W_l} \sum_{j=1}^{H_l} 1_{lij}$.\nWith the help of the mFRS, we can dynamically select the pixels whose FRS values are in the top $\alpha S_{mean}\%$ as reliable dense pseudo labels, while the rest are suppressed to 0, following the approach used in Dense Teacher [30]. As a result, the dense pseudo labels are selected according to:\n$\vec{y}_{lij} = \begin{cases} 1, & \text{if } S_{lij} \text{ is in the top } \alpha S_{mean}\%, \\ 0, & \text{otherwise} \end{cases}$ (7)\nwhere $\vec{y}_{lij}$ is the indicator deciding the selection of a dense pseudo label on the l-th FPN layer at location (i, j) in the corresponding feature map, and α is a hyper-parameter for adjusting the intensity of selection, which we empirically set to 100.\nThe empirical study in Fig. 4 provides a demonstration of our hypothesis. The mFRS has a positive correlation with the relative number of pseudo labels selected. Meanwhile, as shown in Fig. 5, the mFRS gradually drops as training progresses. We regard this phenomenon as indicative of a good dynamic semi-supervised training scheme [24], because it is usually desirable to mine more potential information at the beginning of training to speed up convergence and then select increasingly accurate pseudo labels as training progresses to alleviate confirmation bias [1,24]."
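The selection rule in Eqs. (5)-(7) can be written compactly as follows (a minimal sketch rather than the released code; it assumes `cls_logits` is the teacher's list of per-FPN-level classification logit maps, each of shape [C, H_l, W_l]):

```python
import torch

def adaptive_dense_selection(cls_logits, alpha=100.0):
    """Keep locations whose FRS lies in the top (alpha * S_mean)%; suppress the rest."""
    # Eq. (5): FRS per location is the maximum class probability after the sigmoid.
    scores = torch.cat([lvl.sigmoid().max(dim=0).values.flatten() for lvl in cls_logits])

    # Eq. (6): mFRS is the mean FRS over all FPN locations.
    s_mean = scores.mean()

    # Eq. (7): the kept fraction grows with the estimated density of potential objects.
    frac = torch.clamp(alpha * s_mean / 100.0, max=1.0)
    k = int(scores.numel() * frac)
    keep = torch.zeros_like(scores)
    if k > 0:
        keep[scores.topk(k).indices] = 1.0
    return keep  # 1 marks locations kept as dense pseudo-labels, 0 marks suppressed ones
```

Written this way, it is easy to see that the number of selected locations rises and falls with the mFRS, which is the adaptive behaviour illustrated in Fig. 4 and Fig. 5.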
 }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset and Evaluation Protocol", "publication_ref": [], "table_ref": [], "text": "We evaluate our method on the DOTA-v1.5 benchmark following the previous work [7]. In DOTA-v1.5, two datasets are provided for model training: the DOTA-v1.5-train set contains 1411 labeled images, and the DOTA-v1.5-test set contains 937 unlabeled images. In addition, the DOTA-v1.5-val set with 458 labeled images is also provided for validation. In the previous study [7], two settings were used for performance validation: Partially Labeled Data. SOOD [7] first introduced this setting. In this setting, 10%, 20% and 30% of the images in the DOTA-v1.5-train set are sampled as the labeled training data, and the remaining unsampled images are treated as unlabeled data. For each protocol, one fold following the data distribution of the DOTA-v1.5-train set is provided. To evaluate our method in more severe situations, we extend this setting to 1% and 5%. Note that in the 1% setting, only 14 images are provided as labeled data. Fully Labeled Data. In this setting, the entire DOTA-v1.5-train set is used as the labeled data and the DOTA-v1.5-test set is used as the additional unlabeled data.\nWe evaluate our method under both settings and report the performance on DOTA-v1.5-val with the standard mean average precision (mAP) as the evaluation metric." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b20", "b30", "b29", "b29", "b13" ], "table_ref": [], "text": "We use FCOS [21] equipped with an FPN [12] as our default detection framework to evaluate the effectiveness of our method, and ResNet-50 [6] pre-trained on ImageNet is adopted as the backbone. Our implementation and hyperparameters are based on MMRotate [31]. Following the previous work [7], we crop the original images into 1024 × 1024 patches with an overlap of 200. Partially Labeled Data. The model is trained for 120k iterations on 2 GPUs with 3 images per GPU. SGD is used, with the learning rate initialized to 0.0025. The weight decay and momentum are set to 0.0001 and 0.9, respectively. For a fair comparison, we set the data sample ratio between the labeled and unlabeled data to 2:1 following the setting in SOOD [7]. Fully Labeled Data. The model is trained for 180k iterations on 2 GPUs with 3 images per GPU. SGD is used, with the learning rate initialized to 0.0025 and divided by 10 after 120k and 160k iterations. The weight decay and momentum are set to 0.0001 and 0.9, respectively. The sample ratio between the labeled and unlabeled data is 2:1.\nTo maintain hard negative samples [30], we set the value of α in ADPLS to 100 for both partially labeled data and fully labeled data. We adopt the data augmentation used in [30]. Specifically, we use strong augmentation for the student model and weak augmentation for both the teacher model and the supervised pipeline. Scale jittering and random flipping are adopted as weak augmentation, while the strong augmentation includes scale jittering, random flipping, color jittering, random grayscale, and random Gaussian blurring. Note that we do not apply random erasing to avoid injecting excessively strong noise during training. Following previous works in SSOD [7,14,23], we adopt a \"burn-in\" strategy to initialize the teacher model."
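The weak/strong augmentation split described above can be sketched as follows (an image-level illustration with torchvision transforms rather than the MMRotate pipeline configuration actually used; a real detection pipeline must also transform the boxes, and the parameter values are placeholders):

```python
import random
import torchvision.transforms as T
import torchvision.transforms.functional as TF

def scale_jitter(img, scales=(0.5, 0.75, 1.0, 1.25, 1.5), base=1024):
    # Resize to a randomly chosen scale of the 1024 x 1024 crop size.
    s = int(base * random.choice(scales))
    return TF.resize(img, [s, s])

# Weak augmentation: teacher input and the supervised pipeline.
weak_aug = T.Compose([
    T.Lambda(scale_jitter),
    T.RandomHorizontalFlip(p=0.5),
])

# Strong augmentation: student input on unlabeled images.
strong_aug = T.Compose([
    T.Lambda(scale_jitter),
    T.RandomHorizontalFlip(p=0.5),
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
    T.RandomGrayscale(p=0.2),
    T.RandomApply([T.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0))], p=0.5),
    # Random erasing is deliberately omitted, as noted above.
])
```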
 }, { "figure_ref": [ "fig_5", "fig_1" ], "heading": "Main Results", "publication_ref": [], "table_ref": [], "text": "In this section, we compare our method with previous state-of-the-art methods on DOTA-v1.5. Partially Labeled Data. We evaluate our method under different proportions of labeled data, and the results are shown in Tab. 1. Our ADPLS method achieves state-of-the-art performance in all cases. Specifically, it obtains 55.68, 60.98, and 62.59 mAP with 10%, 20%, and 30% labeled data, surpassing the previous state-of-the-art method by +7.05, +5.40, and +3.36 points, respectively. Additionally, when we further evaluate our method with even fewer labeled data, it obtains 29.95 and 49.78 mAP on 1% and 5%, surpassing the baseline by +14.28 and +13.60 points. Note that in the 5% setting, our method achieves better performance (with only half of the labeled images) than SOOD [7] in the 10% setting.\nThe results of our method are qualitatively compared with the supervised baseline and SOOD [7] in Fig. 6. With the help of ADPLS, potential object information can be exploited to guide the selection of dense pseudo labels, helping obtain more information and improve the detection quality.\nFigure 2 gives a comparison of intermediate results between different selection methods. Our method selects the most reliable dense pseudo labels. Fully Labeled Data. We also compare our method with the previous state-of-the-art methods in the fully labeled data setting, and the results are shown in Tab. 2. Since the reported performance of the supervised baseline differs between works, we also report the performance of our supervised baseline. Our ADPLS method surpasses the previous state-of-the-art method by 0.53 points. Compared with the baseline, we achieve a +2.79 mAP improvement." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b29", "b29", "b29" ], "table_ref": [], "text": "The effect of hyper-parameter α in ADPLS. Here, we study the influence of the hyper-parameter α in ADPLS.\nAs shown in Tab. 3, when we set α to 10, we achieve a performance of 47.09 mAP. As α increases from 10 to 100, the performance of our method improves. However, further increasing α to 300 slightly hurts the performance, and the same observation holds for 500 and 700. Based on this observation, we conjecture that increasing α results in the selection of more dense pseudo labels but also introduces more hard negatives into the training. As analysed in Dense Teacher [30], it is helpful to include some valuable hard negatives during training, but when the proportion of hard negatives increases past a certain level, they begin to hurt the performance of the model. The effect of augmentation. In this part, we discuss the effect of different augmentations. We note that a different augmentation is adopted in SOOD. Specifically, we additionally apply scale jittering to both labeled and unlabeled data, as done in Soft Teacher [27] and Dense Teacher [30]. Therefore, we compare our method under the same augmentation as SOOD and also re-implement SOOD with our augmentation. Moreover, Dense Teacher [30] is also evaluated for comparison. As shown in Tab. 4, our method achieves 51.00 mAP and 57.12 mAP under the same augmentations as SOOD [7], surpassing SOOD by +2.37 and +1.54 mAP. However, in the 30% setting, our method achieves a performance of 58.60 mAP, which is 0.63 mAP behind SOOD [7]. We conjecture that overfitting of the labeled data impedes model training.\nWhen SOOD is re-implemented using the same augmentation used in ADPLS, our method still surpasses SOOD in all settings. We note that when using stronger augmentation, the performance of Dense Teacher also surpasses SOOD in the 10% and 20% settings. We conjecture that stronger augmentations alleviate the overfitting of labeled data and provide more opportunities to benefit from valuable dense pseudo labels." }, { "figure_ref": [], "heading": "Limitations and Discussion", "publication_ref": [], "table_ref": [], "text": "Although our method achieves satisfactory results in semi-supervised oriented object detection, its use of the distinctive characteristics of aerial objects is still limited. Specifically, we consider only the dense distribution in our method.
However, many other characteristics, such as the large scale ratio and the complex background, could also be considered. With this work, we have explored the possibility of integrating more information into the training process of SSOD. We hope that our work will inspire similar investigations for other semi-supervised tasks." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we find that ignoring the density of potential objects in the existing dense pseudo-label selection methods impedes performance in semi-supervised oriented object detection. To alleviate this problem, we design a simple but effective method called Adaptive Dense Pseudo Label Selection (ADPLS) to estimate the potential object information and use it to guide the selection of dense pseudo labels. To validate the effectiveness of our method, we have conducted extensive experiments on the DOTA-v1.5 benchmark.\nCompared with state-of-the-art methods, ADPLS achieves improvements on both partially and fully labeled data." } ]
Recently, dense pseudo-labels, which are directly selected from the original output of the teacher model without any complicated post-processing steps, have received considerable attention in semi-supervised object detection (SSOD). However, for the multi-oriented and dense objects that are common in aerial scenes, existing dense pseudo-label selection methods are inefficient and impede performance in semi-supervised oriented object detection. Therefore, we propose Adaptive Dense Pseudo Label Selection (ADPLS) for semi-supervised oriented object detection. In ADPLS, we design a simple but effective adaptive mechanism to guide the selection of dense pseudo labels. Specifically, we propose the mean Feature-Richness Score (mFRS) to estimate the density of potential objects and use this score to adjust the number of dense pseudo labels. On the DOTA-v1.5 benchmark, the proposed method outperforms previous methods, especially when labeled data are scarce. For example, it achieves 49.78 mAP given only 5% of annotated data, which surpasses the previous state-of-the-art method given 10% of annotated data by 1.15 mAP. Our code will be available soon.
Adaptive Dense Pseudo Label Selection for Semi-supervised Oriented Object Detection
[ { "figure_caption": "Figure 1 .1Figure 1. The proposal semi-supervised oriented object detection method outperforms the SOOD by a large margin on DOTAv1.5 benchmark.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Visualization of different dense pseudo labels selection methods. Different color represents different category. Note that in SOOD, dense pseudo labels are selected by randomly sampling from the predictions of teacher model filtered by fixed threshold.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. The overview of proposed ADPLS. Each training batch consists of both labeled data and unlabeled data. Note that we hide the supervised part for simplicity. For the unsupervised part, we sample dense pseudo labels according to the adaptive dense pseudo label selection.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .Figure 5 .45Figure 4. The correlation between the mFRS and the relative number of pseudo labels selected under the 10% setting. Relative number indicates the sum of confidence of pseudo labels selected.", "figure_data": "", "figure_id": "fig_4", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Some visualization examples from DOTA-v1.5 dataset. The green rectangles indicate predictions. The red dashed circle, solid red circle, and red arrow represent false negative, false positive, and inaccurate orientation prediction, respectively.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Experimental results on DOTA-v1.5 under Parially Labeled Data setting. Experiments are conducted under 10%, 20% and 30% labeled data settings. * indicates reported performance in SOOD. Note that all methods are implemented with rotated-FCOS.", "figure_data": "SettingMethod1%5%10%20%30%SupervisedFCOS [21]15.67 36.18 42.78 50.11 54.79Dense Teacher [30] *--46.90 53.93 57.86Semi-supervisedSOOD [7]--48.63 55.58 59.23Ours29.95 49.78 55.68 60.98 62.59", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Experimental results on DOTA-v1.5 under the Fully Labeled Data setting. Numbers in font of the arrow indicate the supervised baseline. * indicates reported performance in SOOD.Note that all methods are implemented with rotated-FCOS.", "figure_data": "MethodmAPDense Teacher [30] * 65.46+0.92 -→ 66.38SOOD [7]65.46+2.24 -→ 67.70Ours65.44+2.79 -→ 68.23", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study on the effect of hyper-parameter α in AD-PLS. Experiments are conducted at 10% setting.", "figure_data": "SettingαmAPI10 47.09II50 52.47III100 55.68IV300 55.23V500 55.54VI700 55.04achieve a +2.79 mAP improvement.", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation study on different augmentations. Experiments are conducted at 10%, 20% and 30% settings. * and † indicate different augmentations used in SOOD and ours. 
Dense Teacher [30] 52.25 58.22 60.19 SOOD [7] 51.39 58.05 60.81 Ours 55.68 60.98 62.59", "figure_data": "SettingMethod10%20%30%SupervisedFCOS [21]42.78 50.11 54.79Dense Teacher [30] 46.90 53.93 57.86Semi-supervised *SOOD [7]48.63 55.58 59.23Ours51.00 57.12 58.60Semi-supervised †", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" } ]
Tong Zhao; Qiang Fang; Shuohao Shi; Xin Xu
[ { "authors": "Eric Arazo; Diego Ortego; Paul Albert; E O' Noel; Kevin Connor; Mcguinness", "journal": "IEEE", "ref_id": "b0", "title": "Pseudo-labeling and confirmation bias in deep semi-supervised learning", "year": "2020" }, { "authors": "David Berthelot; Nicholas Carlini; Ian Goodfellow; Nicolas Papernot; Avital Oliver; Colin A Raffel", "journal": "Advances in neural information processing systems", "ref_id": "b1", "title": "Mixmatch: A holistic approach to semi-supervised learning", "year": "2019" }, { "authors": "Jian Ding; Nan Xue; Yang Long; Gui-Song Xia; Qikai Lu", "journal": "", "ref_id": "b2", "title": "Learning roi transformer for oriented object detection in aerial images", "year": "2019" }, { "authors": "Yves Grandvalet; Yoshua Bengio", "journal": "", "ref_id": "b3", "title": "Semi-supervised learning by entropy minimization", "year": "2004" }, { "authors": "Jiaming Han; Jian Ding; Nan Xue; Gui-Song Xia", "journal": "", "ref_id": "b4", "title": "Redet: A rotation-equivariant detector for aerial object detection", "year": "2021" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b5", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Wei Hua; Dingkang Liang; Jingyu Li; Xiaolong Liu; Zhikang Zou; Xiaoqing Ye; Xiang Bai", "journal": "", "ref_id": "b6", "title": "Sood: Towards semisupervised oriented object detection", "year": "2023" }, { "authors": "Jisoo Jeong; Seungeui Lee; Jeesoo Kim; Nojun Kwak", "journal": "Advances in neural information processing systems", "ref_id": "b7", "title": "Consistency-based semi-supervised learning for object detection", "year": "2019" }, { "authors": "Dong-Hyun Lee", "journal": "", "ref_id": "b8", "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "year": "2013" }, { "authors": "Gang Li; Xiang Li; Yujie Wang; Yichao Wu; Ding Liang; Shanshan Zhang", "journal": "Springer", "ref_id": "b9", "title": "Pseco: Pseudo labeling and consistency training for semi-supervised object detection", "year": "2022" }, { "authors": "Yuxuan Li; Qibin Hou; Zhaohui Zheng; Cheng Ming-Ming; Yang Jian; Li Xiang", "journal": "", "ref_id": "b10", "title": "Large selective kernel network for remote sensing object detection", "year": "2023" }, { "authors": "Tsung-Yi Lin; Piotr Dollár; Ross Girshick; Kaiming He; Bharath Hariharan; Serge Belongie", "journal": "", "ref_id": "b11", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "Liang Liu; Boshen Zhang; Jiangning Zhang; Wuhao Zhang; Zhenye Gan; Guanzhong Tian; Wenbing Zhu; Yabiao Wang; Chengjie Wang", "journal": "", "ref_id": "b12", "title": "Mixteacher: Mining promising labels with mixed scale teacher for semi-supervised object detection", "year": "2023" }, { "authors": "Yen-Cheng Liu; Chih-Yao Ma; Zijian He; Chia-Wen Kuo; Kan Chen; Peizhao Zhang; Bichen Wu; Zsolt Kira; Peter Vajda", "journal": "", "ref_id": "b13", "title": "Unbiased teacher for semi-supervised object detection", "year": "2021" }, { "authors": "Yen-Cheng Liu; Chih-Yao Ma; Zsolt Kira", "journal": "", "ref_id": "b14", "title": "Unbiased teacher v2: Semi-supervised object detection for anchor-free and anchor-based detectors", "year": "2022" }, { "authors": "Hieu Pham; Zihang Dai; Qizhe Xie; Quoc V Le", "journal": "", "ref_id": "b15", "title": "Meta pseudo labels", "year": "2021" }, { "authors": "Kihyuk Sohn; David Berthelot; Nicholas Carlini; Zizhao Zhang; Han Zhang; Colin A Raffel; 
Ekin Dogus Cubuk; Alexey Kurakin; Chun-Liang Li", "journal": "Advances in neural information processing systems", "ref_id": "b16", "title": "Fixmatch: Simplifying semi-supervised learning with consistency and confidence", "year": "2020" }, { "authors": "Kihyuk Sohn; Zizhao Zhang; Chun-Liang Li; Han Zhang; Chen-Yu Lee; Tomas Pfister", "journal": "", "ref_id": "b17", "title": "A simple semi-supervised learning framework for object detection", "year": "2020" }, { "authors": "Yihe Tang; Weifeng Chen; Yijun Luo; Yuting Zhang", "journal": "", "ref_id": "b18", "title": "Humble teachers teach better students for semi-supervised object detection", "year": "2021" }, { "authors": "Antti Tarvainen; Harri Valpola", "journal": "Advances in neural information processing systems", "ref_id": "b19", "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "year": "2017" }, { "authors": "Chunhua Zhi Tian; Hao Shen; Tong Chen; He", "journal": "", "ref_id": "b20", "title": "Fcos: Fully convolutional one-stage object detection", "year": "2019" }, { "authors": "Vikas Verma; Kenji Kawaguchi; Alex Lamb; Juho Kannala; Arno Solin; Yoshua Bengio; David Lopez-Paz", "journal": "Neural Networks", "ref_id": "b21", "title": "Interpolation consistency training for semi-supervised learning", "year": "2022" }, { "authors": "Xingyi Wang; Shilong Yang; Yijiang Zhang; Litong Li; Shijie Feng; Chengqi Fang; Kai Lyu; Wayne Chen; Zhang", "journal": "", "ref_id": "b22", "title": "Consistent-teacher: Towards reducing inconsistent pseudo-targets in semi-supervised object detection", "year": "2023" }, { "authors": "Yidong Wang; Hao Chen; Qiang Heng; Wenxin Hou; Yue Fan; Zhen Wu; Jindong Wang; Marios Savvides; Takahiro Shinozaki; Bhiksha Raj", "journal": "", "ref_id": "b23", "title": "Freematch: Self-adaptive thresholding for semi-supervised learning", "year": "2023" }, { "authors": "Gui-Song Xia; Xiang Bai; Jian Ding; Zhen Zhu; Serge Belongie; Jiebo Luo; Mihai Datcu; Marcello Pelillo; Liangpei Zhang", "journal": "", "ref_id": "b24", "title": "Dota: A large-scale dataset for object detection in aerial images", "year": "2018" }, { "authors": "Xingxing Xie; Gong Cheng; Jiabao Wang; Xiwen Yao; Junwei Han", "journal": "", "ref_id": "b25", "title": "Oriented r-cnn for object detection", "year": "2021" }, { "authors": "Mengde Xu; Zheng Zhang; Han Hu; Jianfeng Wang; Lijuan Wang; Fangyun Wei; Xiang Bai; Zicheng Liu", "journal": "", "ref_id": "b26", "title": "End-toend semi-supervised object detection with soft teacher", "year": "2021" }, { "authors": "Bowen Zhang; Yidong Wang; Wenxin Hou; Hao Wu; Jindong Wang; Manabu Okumura; Takahiro Shinozaki", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b27", "title": "Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling", "year": "2021" }, { "authors": "Du Zhixing; Rui Zhang; Ming Chang; Shaoli Liu; Tianshi Chen; Yunji Chen", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b28", "title": "Distilling object detectors with feature richness", "year": "2021" }, { "authors": "Hongyu Zhou; Zheng Ge; Songtao Liu; Weixin Mao; Zeming Li; Haiyan Yu; Jian Sun", "journal": "Springer", "ref_id": "b29", "title": "Dense teacher: Dense pseudolabels for semi-supervised object detection", "year": "2022" }, { "authors": "Yue Zhou; Xue Yang; Gefan Zhang; Jiabao Wang; Yanyi Liu; Liping Hou; Xue Jiang; Xingzhao Liu; Junchi Yan; Chengqi Lyu; Wenwei Zhang; Kai Chen", 
"journal": "", "ref_id": "b30", "title": "Mmrotate: A rotated object detection benchmark using pytorch", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 84.95, 365.29, 201.41, 30.5 ], "formula_id": "formula_0", "formula_text": "L s = 1 N l N l i [L cls (T (I i l )) + L reg (T (I i l )](1)" }, { "formula_coordinates": [ 4, 80.61, 405.84, 205.75, 30.43 ], "formula_id": "formula_1", "formula_text": "L u = 1 N u Nu i [L cls (T ′ (I i l )) + L reg (T ′ (I i l )](2)" }, { "formula_coordinates": [ 4, 137.26, 534.09, 149.1, 9.65 ], "formula_id": "formula_2", "formula_text": "L = L s + αL u(3)" }, { "formula_coordinates": [ 4, 124.59, 586.8, 157.9, 9.65 ], "formula_id": "formula_3", "formula_text": "θ t = (1 -λ)θ t + λθ s (4" }, { "formula_coordinates": [ 4, 282.49, 587.12, 3.87, 8.64 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 4, 392.42, 256.02, 152.69, 14.13 ], "formula_id": "formula_5", "formula_text": "S lij = max c y lij,c(5)" }, { "formula_coordinates": [ 4, 368.5, 330.4, 176.61, 30.72 ], "formula_id": "formula_6", "formula_text": "S mean = 1 N M l=1 W l i=1 H l j=1 S lij(6)" }, { "formula_coordinates": [ 4, 319.38, 387.16, 225.73, 23.1 ], "formula_id": "formula_7", "formula_text": "N = M l=1 W l i=1 H l j=1 1 lij ." }, { "formula_coordinates": [ 4, 334.32, 495.97, 210.8, 23.08 ], "formula_id": "formula_8", "formula_text": "⃗ y lij = 1, if S lij in top αS mean %, 0, otherwise(7)" } ]
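The formulas extracted above outline a standard teacher-student recipe for semi-supervised detection: a supervised loss L_s, an unsupervised loss L_u on pseudo-labeled data, a combined objective L = L_s + αL_u, an exponential-moving-average (EMA) update of the teacher weights, and selection of high-scoring dense predictions as pseudo-labels. The sketch below is a hypothetical PyTorch illustration of those pieces; the function names, the EMA rate, and the ratio-based selection (a simplified stand-in for the top-αS_mean% rule) are my own assumptions, not code from the paper.

```python
# Hypothetical sketch of the teacher-student pieces implied by the formulas above.
import torch


@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, lam: float = 0.01) -> None:
    """EMA of the teacher: theta_t <- (1 - lam) * theta_t + lam * theta_s."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(1.0 - lam).add_(p_s, alpha=lam)


def select_dense_pseudo_labels(scores: torch.Tensor, ratio: float = 0.1) -> torch.Tensor:
    """Keep the highest-scoring fraction of dense predictions as pseudo-labels.

    `scores` holds the per-location max class probability (S_lij); locations whose
    score falls in the kept top fraction get mask value 1, the rest 0.
    """
    flat = scores.flatten()
    k = max(1, int(ratio * flat.numel()))
    threshold = torch.topk(flat, k).values.min()
    return (scores >= threshold).float()


def total_loss(l_sup: torch.Tensor, l_unsup: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Overall objective L = L_s + alpha * L_u."""
    return l_sup + alpha * l_unsup
```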
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b6", "b12", "b13", "b14", "b15" ], "table_ref": [], "text": "Image segmentation, the process of partitioning an image into distinct regions, is of great importance in various medical applications. Consequently, it serves as a fundamental tool for automating clinical workflows, reducing data processing time, and providing quantitative measures of organs or pathologies [1,2]. These capabilities greatly assist clinicians in making more accurate diagnoses, assessing the response to therapy, and ultimately improving patient care and outcomes. In this context, semantic segmentation is essential for improving the quality of medical image segmentation, identifying relevant regions of interest (ROIs), such as tumors or organs [3,4], and removing unwanted objects. However, conventional supervised approaches depend on obtaining large-scale annotated data, which can be immensely costly in real-world medical scenarios. To address this challenge, a promising solution is semi-supervised semantic segmentation, where models are trained using a limited number of labeled samples and an abundance of unlabeled data. Effectively leveraging unlabeled data is crucial in this context, driving researchers to explore innovative approaches [5,6,7,8].
One commonly used approach to address this challenge is the application of pseudo-labeling [9]. Pseudo-labels are assigned to unlabeled pixels based on predictions from a model trained on labeled data. These \"pseudo-labels\" then guide the training of a supervised model, improving its performance. In this regard, Basak et al. [10] proposed a method for semantic segmentation that combines contrastive learning (CL) and semi-supervised learning (SemiSL) without requiring a specific initial task. Their method employs pseudo-labels from SemiSL to enhance CL guidance, leading to more accurate multi-class segmentation by learning discriminative class information. Chaitanya et al. [11] proposed a local contrastive loss approach to improve pixel-level feature learning for segmentation tasks. Their method leverages semantic label information obtained from pseudo-labels of unlabeled images, in conjunction with a limited set of annotated images with ground truth labels. In another study, Bai et al. [12] addressed empirical mismatch issues in semi-supervised medical image segmentation by bidirectionally copying and pasting labeled and unlabeled data in a mean teacher architecture. This promoted consistent learning between labeled and unlabeled data, effectively reducing the empirical distribution gap for improved segmentation performance. Wang et al. [7] introduced a two-stream network to address errors within each subnet. This approach considers the discrepancies between pseudo-labels and predictions, effectively rectifying mistakes made by individual networks. Similarly, Luo et al. [13] and Wu et al. [14] adopted dual-task consistency and mutual consistency training strategies, respectively, to penalize incorrect predictions made by the networks.
Despite the effectiveness of the pseudo-labeling paradigm, concerns remain about the reliability of pseudo-labels, which can lead to inaccurate mask predictions. 
Previous research has attempted to mitigate this issue by filtering out predictions with classification scores below a certain threshold [15], but this approach may not be entirely effective in eliminating incorrect predictions. Specifically, some wrong predictions can still exhibit high classification scores, resulting in overconfidence or miscalibration phenomena [16]. Additionally, setting a high threshold to remove unreliable predictions can significantly reduce the number of generated pseudo-labels, limiting the effectiveness of the semi-supervised learning algorithm. This reduction in pseudo-labels can lead to categorically imbalanced training data, which can cause problems like inaccurate assignment of pseudo-labels to pixels related to a particular organ or tissue type. As a result, this imbalance can negatively impact the overall segmentation performance.
As discussed before, directly using unreliable predictions as pseudo-labels can adversely affect the model performance. To address this challenge, we introduce an alternative approach that effectively leverages unreliable pseudo-labels while overcoming the limitations of their direct application. To this end, we propose a dual-stream network architecture in which each subnetwork employs a 3D auto-encoder-decoder module to generate segmentation maps for input images. A supervised loss function guides the network in learning representations for each class, enabling precise and dense predictions. We also propose a consistency regularization term to penalize inaccurate predictions made by each network, using a set of confident predictions obtained from both pathways. This strategy allows the model to adaptively update its feature representations, reducing the occurrence of incorrect predictions from both paths. To effectively utilize unlabeled data, we extend the concept of pseudo-labeling by distinguishing between reliable and unreliable predictions. We then optimize the clustering space using a contrastive method to align feature descriptions of unreliable pixels with positive prototypes derived from trustworthy predictions.
Our contributions can be summarized as follows: (1) we introduce a consistency regularization term to reduce false predictions; (2) we conceptualize a self-supervised contrastive learning paradigm to decrease the number of unreliable predictions; (3) we obtain state-of-the-art (SOTA) results on 3D CT/MRI segmentation datasets." }, { "figure_ref": [], "heading": "PROPOSED METHOD", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Overview", "publication_ref": [], "table_ref": [], "text": "Our goal is to train a semantic segmentation model using a combination of labeled data (D l = {(x l i , y l i )} N l i=1 ) and a larger set of unlabeled data (D u = {x u i } Nu i=1 ). To achieve this, we employ two subnetworks, Subnet A (f A (x; θ 1 )) and Subnet B (f B (x; θ 2 )), each employing a 3D encoder-decoder architecture. These subnetworks generate prediction maps (denoted as Y A and Y B ) and corresponding feature representations for each voxel (v) in a D-dimensional space.
During each training step, we randomly sample b labeled images (b l ) and b unlabeled images (b u ). 
For the labeled images, our primary objective is to minimize the standard cross-entropy loss and Dice loss as defined in Equation (1):
$L_s = \frac{1}{|B_l|} \sum_{(x_i^l, y_i^l) \in B_l} \ell_{ce}(f(x_i^l; \theta), y_i^l) + \mathrm{Dice}(\hat{y}_i^l, y_i^l), \quad (1)$
where y l i represents the hand-annotated mask label for the i-th labeled image. Additionally, to minimize false predictions, we introduce a consistency regularization term that considers the confident predictions of one network against the other. We further employ a contrastive loss function to effectively leverage unreliable predictions throughout the training process. Figure 1 illustrates the overall network process.
For the unlabeled images, we feed them through both networks to obtain prediction maps, Y A and Y B . We then apply pixel-level entropy-based filtering to exclude unreliable pixel-level pseudo-labels when calculating the unsupervised loss defined in Equation (2):
$L_u = \frac{1}{|B_u|} \sum_{x_i^u \in B_u} \ell_{ce}(f(x_i^u; \theta), \hat{y}_i^u) + \mathrm{Dice}(\hat{y}_i, \hat{y}_i^u) + L_{reg}, \quad (2)$
where ŷu i is the pseudo-label for the i-th unlabeled image. To further perform error correction, we add a regularization loss, L reg , to L u , as detailed in Section 2.2. Finally, we employ a contrastive loss to exploit unreliable pixels excluded from the unsupervised loss, as explained in Section 2.3. Our optimization objective is to minimize the overall loss as follows:
$L = L_s + \lambda_u L_u + \lambda_c L_c, \quad (3)$
where L s and L u represent the supervised and unsupervised losses applied to labeled and unlabeled images, respectively, while L c is the contrastive loss for unreliable pseudo-labels. The weights λ u and λ c control the contributions of the unsupervised loss and contrastive loss, respectively. Both L s and L u are computed using the combination of cross-entropy (CE) and Dice losses, as shown in Figure 1." }, { "figure_ref": [], "heading": "Consistency Regularization", "publication_ref": [], "table_ref": [], "text": "We introduce a consistency regularization mechanism to refine the predictions of our dual-subnet model. This module identifies areas where the subnetworks (Subnet A and Subnet B) make discrepant predictions despite being highly confident in their respective predictions, indicating potential mispredictions. The goal is to rectify these incongruities. Mathematically, we define the area of incorrect predictions between the softmax outputs Ŷ A and Ŷ B as follows:
$M_{diff} = \arg\max(\max(\hat{Y}_A^u) > T) \neq \arg\max(\max(\hat{Y}_B^u) > T), \quad (4)$
where M diff represents the set of voxels in which Subnet A and Subnet B generate different predictions with high confidence and T represents the confidence threshold. We dynamically adjust the value of T during the training process. We then define the L1 distance loss function as a regularization term to correct potential incorrect predictions by each of the networks:
$L_{reg} = \sum_{i=1}^{n} |(M_{diff} \odot \hat{Y}_A^u) - (M_{diff} \odot \hat{Y}_B^u)|, \quad (5)$
where ⊙ denotes the Hadamard multiplication." }, { "figure_ref": [], "heading": "GT or Pseudo Label", "publication_ref": [], "table_ref": [], "text": "Fig. 1: An illustration of our suggested pipeline. In each iteration, we utilize L s for the labeled data and L u for the unlabeled data. When dealing with the unlabeled data, we adopt the prediction of the network with the lower L s as a pseudo-label." }, { "figure_ref": [ "fig_1" ], "heading": "Contrastive loss", "publication_ref": [], "table_ref": [], "text": "To mitigate uncertain predictions in our model, we incorporate a contrastive loss function. 
Figure 2b illustrates a scenario where the network's predictions exhibit low confidence in categorizing certain voxels. Our contrastive loss design aims to guide these uncertain voxels towards aligning with their corresponding class prototypes, ultimately reducing misclassifications and uncertainty rates. To achieve this, our approach first computes the confidence of each voxel's prediction. It then categorizes the predictions into two distinct sets: reliable and unreliable predictions. Next, it defines prototypes for each category using the reliable set as a base. Each prototype is computed as the mean vector of the reliable voxel representations:
$c_k = \frac{1}{|S_k|} \sum_{(v_i^r, y_i) \in S_k} f(v_i^r), \quad (6)$
where f (v r i ) indicates the feature representation of a voxel with a reliable prediction. Our approach uses a distance function, denoted as d : R M × R M → [0, +∞), to compute a distribution over classes for uncertain voxels v u . This distribution is computed by applying a softmax operation to the distances between the voxel's representation in the embedding space and the class prototypes:
$p_{\phi}(y = k \mid v^u) = \frac{\exp(-d(f(v^u), c_k))}{\sum_{k'} \exp(-d(f(v^u), c_{k'}))} \quad (7)$
Our contrastive loss function aims to move uncertain voxels of the same class towards their respective class prototype, while also pushing the prototypes of different classes away from each other." }, { "figure_ref": [], "heading": "EXPERIMENTAL RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b13", "b6", "b6" ], "table_ref": [], "text": "In our study, we developed and implemented our model using the PyTorch framework on a single RTX 3090 GPU. Following [14,7], we choose ResNet and V-Net for the two-stream network for a fair comparison. To optimize the parameters of our network, we employed the SGD optimizer with a weight decay factor of 0.0001 and a momentum coefficient of 0.9. We set our initial learning rate to 0.01 and implemented a dynamic learning rate schedule that reduced the learning rate by a factor of 10 after every 2500 iterations, for a total of 6000 iterations. During our training process, we included both labeled and unlabeled samples in each iteration, maintaining a consistent batch size of two for both categories. In Equation (3) we use $\lambda_u = 1.0$ and $\lambda_c = 0.1 \cdot e^{4(1-t/t_{max})^2}$, where t and t_max denote the current and maximum iterations, respectively. Additionally, for a robust assessment of our model's performance, we adopted the K-fold cross-validation method recommended by [7]." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b16", "b6", "b17", "b12", "b6" ], "table_ref": [], "text": "Left Atrial Dataset (LA): The LA dataset [17] consists of 100 3D gadolinium-enhanced MR imaging volumes with manual left atrial annotations, featuring an isotropic resolution of 0.625 × 0.625 × 0.625 mm³. We preprocessed the data according to [7], initially applying volume normalization to standardize the data. During training, we employed random cropping to achieve model input dimensions of 112 × 112 × 80. For inference, we used a sliding window approach with the same dimensions and a stride of 18 × 18 × 4. NIH Pancreas Dataset: The NIH Pancreas Dataset [18] comprises 82 abdominal CT volumes with manual pancreas annotations. The CT volumes have dimensions of 512×512×D, where D represents the number of CT slices, which ranges from 181 to 466. 
Our preprocessing method, similar to [13,7], involves applying a soft tissue CT window with Hounsfield Units (HU) from -120 to 240. We then align the CT scans to the pancreas region and expand the margins by 25 voxels. During training, we perform random cropping, resulting in volumes with dimensions of 96×96×96. For inference, we use a stride of 16×16×16." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "Table 1: Comparison of results using the LA dataset (MRI)." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Method", "publication_ref": [ "b5", "b18", "b19" ], "table_ref": [ "tab_1" ], "text": "Table 1 (extracted fragment). Method: Dice(%)↑, Jaccard(%)↑, 95HD(voxel)↓, ASD(voxel)↓. MT [6]: 85.89 ± 0.024, 76.58 ± 0.027, 12.63 ± 5.741, 3.44 ± 1.382. UA-MT [19]: 85.98 ± 0.014, 76.65 ± 0.017, 9.86 ± 2.707, 2.68 ± 0.776. SASSNet [20]: ...
The comparison of our proposed method with SOTA techniques on the left atrial dataset is provided in Table 1. Our method shows significant improvements in all metrics, with a substantial enhancement in organ voxel detection, specifically DSC and Jaccard. Compared to MCF, our proposed method exhibits noteworthy enhancements, with an increase in DSC from 88.71 to 89.10 and Jaccard index from 80.41 to 81.62. Furthermore, our approach maintains low performance variance, contributing to its stability and reliability. Figure 3 presents the visual results of our proposed method compared to other methods for left atrial segmentation. These visual results showcase higher overlap with ground truth labels and fewer false segmentations, highlighting the finer details captured by our approach. Our method showcases strong performance on the Pancreas dataset as well, as presented in Table 2. Figure 3 provides additional insights into the segmentation results, underscoring the impact of the suggested modules in enhancing the overall segmentation quality. In greater detail, our approach generates sharper edges and more precise boundary separation than the MCF and MC-Net methods. This highlights its effectiveness in improving the reliability of object boundary predictions and distinguishing the organ of interest from the background.
We also performed an ablation study on the LA dataset to thoroughly assess the impact of the regularization and contrastive loss components of our method. Notably, removing the regularization module resulted in a substantial decrease of 0.5 in the DSC. Similarly, removing the contrastive loss resulted in a more significant drop of 0.73 in the DSC score. These findings underscore the critical roles played by both the regularization and contrastive loss components in enhancing the performance of our method." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "This paper presents a novel dual-stream network for semi-supervised semantic segmentation that leverages labeled and unlabeled imaging data. Our approach focuses on reducing the problem of unreliable predictions by integrating contrastive learning and error correction mechanisms. We outperformed SOTA techniques on CT and MRI images." } ]
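As a concrete illustration of the consistency-regularization idea in Eqs. (4)-(5) above, the following hypothetical PyTorch sketch builds the disagreement mask between the two subnetworks and the masked L1 penalty. The reading of Eq. (4) used here (both subnets confident above threshold T but predicting different classes) and all tensor shapes are assumptions made for the example, not the paper's released code.

```python
# Minimal sketch of the disagreement mask M_diff and the masked L1 regularizer L_reg.
import torch


def consistency_regularization(prob_a: torch.Tensor,
                               prob_b: torch.Tensor,
                               conf_threshold: float = 0.9) -> torch.Tensor:
    """prob_a, prob_b: softmax outputs of Subnet A/B with shape (B, C, D, H, W)."""
    conf_a, pred_a = prob_a.max(dim=1)          # per-voxel confidence and predicted class
    conf_b, pred_b = prob_b.max(dim=1)

    # M_diff: voxels where both paths are confident yet predict different classes
    m_diff = ((conf_a > conf_threshold) &
              (conf_b > conf_threshold) &
              (pred_a != pred_b)).float().unsqueeze(1)

    # L_reg: L1 distance between the two softmax maps, restricted to the disagreement region
    return (m_diff * prob_a - m_diff * prob_b).abs().sum()
```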
Current 3D semi-supervised segmentation methods face significant challenges such as limited consideration of contextual information and the inability to generate reliable pseudolabels for effective unsupervised data use. To address these challenges, we introduce two distinct subnetworks designed to explore and exploit the discrepancies between them, ultimately correcting the erroneous prediction results. More specifically, we identify regions of inconsistent predictions and initiate a targeted verification training process. This procedure strategically fine-tunes and harmonizes the predictions of the subnetworks, leading to enhanced utilization of contextual information. Furthermore, to adaptively fine-tune the network's representational capacity and reduce prediction uncertainty, we employ a self-supervised contrastive learning paradigm. For this, we use the network's confidence to distinguish between reliable and unreliable predictions. The model is then trained to effectively minimize unreliable predictions. Our experimental results for organ segmentation, obtained from clinical MRI and CT scans, demonstrate the effectiveness of our approach when compared to state-of-the-art methods. The codebase is accessible on GitHub.
LEVERAGING UNLABELED DATA FOR 3D MEDICAL IMAGE SEGMENTATION THROUGH SELF-SUPERVISED CONTRASTIVE LEARNING
[ { "figure_caption": "Fig. 2 :2Fig. 2: (a): Illustration of the regularization term and (b) contrastive loss effects on prediction refinement.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig. 3: Visual comparison of segmentation results: the first and the second rows show the left atrium (LA) and pancreas, respectively.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Comparison of results using the Pancreas dataset (CT). ± 0.024 60.53 ± 0.030 14.93 ± 2.000 4.61 ± 0.929 UA-MT [19] 74.01 ± 0.029 60.00 ± 3.031 17.00 ± 3.031 5.19 ± 1.267 SASSNet [20] 73.57 ± 0.017 59.71 ± 0.020 13.87 ± 1.079 3.53 ± 1.416 DTC [13] 73.23 ± 0.024 59.18 ± 0.027 13.20 ± 2.241 3.81 ± 0.953 MC-Net [14] 73.73 ± 0.019 59.19 ± 0.021 13.65 ± 3.902 3.92 ± 1.055 MCF [7] 75.00 ± 0.026 61.27 ± 0.030 11.59 ± 1.611 3.27 ± 0.919 Our Method 76.40 ± 0.018 62.96 ± 0.027 10.69 ± 1.603 2.79 ± 0.0954", "figure_data": "MethodDice(%)↑Jaccard(%)↑95HD(voxel)↓ ASD(voxel)↓MT [6]74.43", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Sanaz Karimijafarbigloo; Reza Azad; Yury Velichko; Ulas Bagci; Dorit Merhof
[ { "authors": "Reza Azad; Ehsan Khodapanah Aghdam; Amelie Rauland; Yiwei Jia; Atlas Haddadi Avval; Afshin Bozorgpour; Sanaz Karimijafarbigloo; Joseph Paul Cohen; Ehsan Adeli; Dorit Merhof", "journal": "", "ref_id": "b0", "title": "Medical image segmentation review: The success of u-net", "year": "2022" }, { "authors": "Bobby Azad; Reza Azad; Sania Eskandari; Afshin Bozorgpour; Amirhossein Kazerouni; Islem Rekik; Dorit Merhof", "journal": "", "ref_id": "b1", "title": "Foundational models in medical imaging: A comprehensive survey and future vision", "year": "2023" }, { "authors": "Michela Antonelli; Annika Reinke; Spyridon Bakas; Keyvan Farahani; Annette Kopp-Schneider; Bennett A Landman; Geert Litjens; Bjoern Menze; Olaf Ronneberger; Ronald M Summers", "journal": "Nature communications", "ref_id": "b2", "title": "The medical segmentation decathlon", "year": "2022" }, { "authors": "Abhishek Srivastava; Debesh Jha; Elif Keles; Bulent Aydogan; Mohamed Abazeed; Ulas Bagci", "journal": "", "ref_id": "b3", "title": "An efficient multi-scale fusion network for 3d organ at risk (oar) segmentation", "year": "2022" }, { "authors": "Xiaokang Chen; Yuhui Yuan; Gang Zeng; Jingdong Wang", "journal": "", "ref_id": "b4", "title": "Semi-supervised semantic segmentation with cross pseudo supervision", "year": "2021" }, { "authors": "Antti Tarvainen; Harri Valpola", "journal": "", "ref_id": "b5", "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "year": "2017" }, { "authors": "Yongchao Wang; Bin Xiao; Xiuli Bi; Weisheng Li; Xinbo Gao", "journal": "", "ref_id": "b6", "title": "Mcf: Mutual correction framework for semi-supervised medical image segmentation", "year": "2023" }, { "authors": "Yassine Ouali; Céline Hudelot; Myriam Tami", "journal": "", "ref_id": "b7", "title": "Semi-supervised semantic segmentation with crossconsistency training", "year": "2020" }, { "authors": "Dong-Hyun Lee", "journal": "", "ref_id": "b8", "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "year": "2013" }, { "authors": "Hritam Basak; Zhaozheng Yin", "journal": "", "ref_id": "b9", "title": "Pseudo-label guided contrastive learning for semi-supervised medical image segmentation", "year": "2023" }, { "authors": "Krishna Chaitanya; Ertunc Erdil; Neerav Karani; Ender Konukoglu", "journal": "Medical Image Analysis", "ref_id": "b10", "title": "Local contrastive loss with pseudolabel based self-training for semi-supervised medical image segmentation", "year": "2023" }, { "authors": "Yunhao Bai; Duowen Chen; Qingli Li; Wei Shen; Yan Wang", "journal": "", "ref_id": "b11", "title": "Bidirectional copy-paste for semisupervised medical image segmentation", "year": "2023" }, { "authors": "Xiangde Luo; Jieneng Chen; Tao Song; Guotai Wang", "journal": "", "ref_id": "b12", "title": "Semi-supervised medical image segmentation through dual-task consistency", "year": "2021" }, { "authors": "Yicheng Wu; Minfeng Xu; Zongyuan Ge; Jianfei Cai; Lei Zhang", "journal": "Springer", "ref_id": "b13", "title": "Semi-supervised left atrium segmentation with mutual consistency training", "year": "2021-10-01" }, { "authors": "Bowen Zhang; Yidong Wang; Wenxin Hou; Hao Wu; Jindong Wang; Manabu Okumura; Takahiro Shinozaki", "journal": "", "ref_id": "b14", "title": "Flexmatch: Boosting semi-supervised learning with curriculum pseudo labeling", "year": "2021" }, { "authors": "Chuan Guo; Geoff Pleiss; Yu Sun; Kilian Q 
Weinberger", "journal": "PMLR", "ref_id": "b15", "title": "On calibration of modern neural networks", "year": "2017" }, { "authors": "Zhaohan Xiong; Qing Xia; Zhiqiang Hu; Ning Huang; Cheng Bian; Yefeng Zheng; Sulaiman Vesal; Nishant Ravikumar; Andreas Maier; Xin Yang", "journal": "Medical image analysis", "ref_id": "b16", "title": "A global benchmark of algorithms for segmenting the left atrium from late gadolinium-enhanced cardiac magnetic resonance imaging", "year": "2021" }, { "authors": "Le Holger R Roth; Amal Lu; Hoo-Chang Farag; Jiamin Shin; Evrim B Liu; Ronald M Turkbey; Summers", "journal": "Springer", "ref_id": "b17", "title": "Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation", "year": "2015" }, { "authors": "Lequan Yu; Shujun Wang; Xiaomeng Li; Chi-Wing Fu; Pheng-Ann Heng", "journal": "Springer", "ref_id": "b18", "title": "Uncertainty-aware selfensembling model for semi-supervised 3d left atrium segmentation", "year": "2019" }, { "authors": "Shuailin Li; Chuyu Zhang; Xuming He", "journal": "Springer", "ref_id": "b19", "title": "Shapeaware semi-supervised 3d semantic segmentation for medical images", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 321.61, 180.14, 237.39, 29.78 ], "formula_id": "formula_0", "formula_text": "L s = 1 |B l | (x l i ,y l i )∈B l ℓ ce (f (x l i ; θ), y l i ) + Dice( ŷi l , y i l ),(1)" }, { "formula_coordinates": [ 2, 316.37, 390.25, 242.63, 38.66 ], "formula_id": "formula_1", "formula_text": "L u = 1 |B u | x u i ∈Bu ℓ ce (f (x u i ; θ), ŷu i ) + Dice( ŷi , ŷu i ) + L reg ,(2)" }, { "formula_coordinates": [ 2, 385.95, 532.44, 173.04, 9.65 ], "formula_id": "formula_2", "formula_text": "L = L s + λ u L u + λ c L c ,(3)" }, { "formula_coordinates": [ 3, 54.68, 339.11, 243.52, 23.43 ], "formula_id": "formula_3", "formula_text": "M diff = arg max (max( Ŷ u A ) > T ) ̸ = arg max (max( Ŷ u B ) > T ),(4)" }, { "formula_coordinates": [ 3, 88.62, 452.5, 209.59, 30.32 ], "formula_id": "formula_4", "formula_text": "L reg = n i=1 |(M diff ⊙ Ŷ u A ) -(M diff ⊙ Ŷ u B )|,(5)" }, { "formula_coordinates": [ 3, 117.42, 697.46, 180.79, 27.88 ], "formula_id": "formula_5", "formula_text": "c k = 1 |S k | (v r i,yi)∈Sk f (v r i ),(6)" }, { "formula_coordinates": [ 3, 343.19, 364.41, 215.8, 26.31 ], "formula_id": "formula_6", "formula_text": "p ϕ (y = k | v u ) = exp (-d (f (v u ), c k )) k ′ exp (-d (f (v u ), c k ′ ))(7)" } ]
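Formulas (6) and (7) above define class prototypes as the mean embeddings of reliably predicted voxels and assign each unreliable voxel a class distribution through a softmax over negative distances to those prototypes. A minimal PyTorch sketch, assuming Euclidean distance and flattened voxel features (names are illustrative), could look as follows:

```python
# Prototype computation (Eq. 6) and soft class assignment for uncertain voxels (Eq. 7).
import torch


def class_prototypes(features: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """features: (N, M) embeddings of reliable voxels; labels: (N,) their predicted classes."""
    protos = torch.zeros(num_classes, features.size(1), device=features.device)
    for k in range(num_classes):
        mask = labels == k
        if mask.any():
            protos[k] = features[mask].mean(dim=0)   # c_k: mean vector of reliable voxels of class k
    return protos


def soft_assignments(unreliable_feats: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """p(y = k | v^u) = softmax_k(-d(f(v^u), c_k)) over the class prototypes."""
    dists = torch.cdist(unreliable_feats, protos)    # (U, K) Euclidean distances
    return torch.softmax(-dists, dim=1)
```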
[ { "figure_ref": [], "heading": "I. Introduction", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Sample images used in Dataset:", "publication_ref": [], "table_ref": [], "text": "Input &Output:\nThe input is a video that is being monitored for potential criminal activities. The output indicates whether the video involves suspicious activities or not.\nIn the event of criminal behaviour, a notification is sent to the relevant authorities." }, { "figure_ref": [], "heading": "II. OBJECT DETECTION", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.EMPLOYEE MONITORING", "publication_ref": [], "table_ref": [], "text": "In the dynamic landscape of contemporary workplaces, the need for efficient and effective work monitoring has become paramount. Organizations strive to optimize productivity, ensure employee safety, and maintain a secure work environment. The advent of machine learning technologies has opened up new avenues for addressing these challenges. This project report delves into the implementation of YOLO (You Only Look Once), a stateof-the-art object detection algorithm, as a pioneering solution for work monitoring." }, { "figure_ref": [], "heading": "YOLO(YOU ONLY LOOK ONCE) MODULE", "publication_ref": [], "table_ref": [], "text": "Unlike traditional object detection methods that involve multiple stages, YOLO streamlines the process, allowing for real-time detection with impressive speed and accuracy. Here's a step-by-step explanation of how the YOLO module works:\nInput Processing:The input image undergoes grid-based division, forming the foundation for subsequent predictions.\nBounding Box Prediction: YOLO predicts multiple bounding boxes within each grid cell, each associated with parameters (x, y) for the box's center, width (w), height (h), confidence score, and class probabilities.\nClass Prediction: YOLO determines the probability of each class for all bounding boxes in a grid cell, enabling simultaneous detection of multiple object classes in a given image.\nConfidence Score: A confidence score indicates the model's certainty that a bounding box contains an object, with a range from 0 to 1." }, { "figure_ref": [], "heading": "Non-Maximum Suppression:", "publication_ref": [], "table_ref": [], "text": "Following predictions for all grid cells, a post-processing step, non-maximum suppression, removes redundant and low-confidence bounding boxes, retaining only the most confident and non-overlapping ones." }, { "figure_ref": [], "heading": "Output:", "publication_ref": [], "table_ref": [], "text": "The YOLO module produces a final output of bounding boxes, each linked to a class and a confidence score, representing the detected objects in the input image." }, { "figure_ref": [], "heading": "INTERSECTION OVER UNION", "publication_ref": [], "table_ref": [], "text": "Intersection over Union (IoU) is a metric used to evaluate the accuracy of an object detection algorithm, particularly in tasks such as image segmentation and bounding box prediction. IoU measures the overlap between the predicted bounding box and the ground truth bounding box for a given object in an image.The IoU is calculated as the ratio of the area of intersection between the predicted and ground truth bounding boxes to the area of their union. The formula for IoU is:" }, { "figure_ref": [], "heading": "IOU= Area of Intersection/Area of Union", "publication_ref": [], "table_ref": [], "text": "Here's a breakdown of the terms:\n1." 
}, { "figure_ref": [], "heading": "Area of Intersection:", "publication_ref": [], "table_ref": [], "text": "The region where the predicted bounding box and the ground truth bounding box overlap." }, { "figure_ref": [], "heading": "2.", "publication_ref": [], "table_ref": [], "text": "Area of Union:The combined region covered by both the predicted bounding box and the ground truth bounding box.\nThe IoU value ranges from 0 to 1, where: IoU=0 indicates no overlap between the predicted and ground truth bounding boxes.\nIoU=1 indicates a perfect overlap between the predicted and ground truth bounding boxes." }, { "figure_ref": [], "heading": "III. LITERATURE SURVEY", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "1) Uses OpenCV for object detection in computer Vision.LSTM (Long Short-Term Memory) is used to classify any event or behaviour as a crime or not. [Autonomous Anomaly Detection System for Crime Monitoring and Alert Generation]", "publication_ref": [], "table_ref": [], "text": "Jyoti Kukad, Swapnil Soner, Sagar Pandya" }, { "figure_ref": [], "heading": "2) Uses state-of-the-art face identification system", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Uses deepneural networks (DNN). [Face Detection and Recognition for Criminal Identification System]", "publication_ref": [], "table_ref": [], "text": "Sanika Tanmay, Aamani Tandasi, Shipra Saraswat Umadevi V Navalgund, Priyadharshini.K" }, { "figure_ref": [], "heading": "5) Focuses on identifying patterns and trends in crime occurrences.Uses ML and DL algorithms to predict crime related activities.[ Crime Prediction Using Machine Learning and Deep Learning: A Systematic Review and Future Directions]", "publication_ref": [], "table_ref": [], "text": "VarunMandalapu , Lavanya Elluri" }, { "figure_ref": [], "heading": "IV. Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "EMPLOYEE MONITORING", "publication_ref": [], "table_ref": [], "text": "In the dynamic landscape of contemporary workplaces, the need for efficient and effective work monitoring has become paramount. Organizations strive to optimize productivity, ensure employee safety, and maintain a secure work environment. The advent of machine learning technologies has opened up new avenues for addressing these challenges. This project report delves into the implementation of YOLO (You Only Look Once), a stateof-the-art object detection algorithm, as a pioneering solution for work monitoring." }, { "figure_ref": [], "heading": "YOLO(YOU ONLY LOOK ONCE) MODULE", "publication_ref": [], "table_ref": [], "text": "Unlike traditional object detection methods that involve multiple stages, YOLO streamlines the process, allowing for real-time detection with impressive speed and accuracy. 
Here's a step-by-step explanation of how the YOLO module works:\nInput Processing:The input image undergoes grid-based division, forming the foundation for subsequent predictions.\nBounding Box Prediction: YOLO predicts multiple bounding boxes within each grid cell, each associated with parameters (x, y) for the box's center, width (w), height (h), confidence score, and class probabilities.\nClass Prediction: YOLO determines the probability of each class for all bounding boxes in a grid cell, enabling simultaneous detection of multiple object classes in a given image.\nConfidence Score: A confidence score indicates the model's certainty that a bounding box contains an object, with a range from 0 to 1." }, { "figure_ref": [], "heading": "Non-Maximum Suppression:", "publication_ref": [], "table_ref": [], "text": "Following predictions for all grid cells, a post-processing step, non-maximum suppression, removes redundant and low-confidence bounding boxes, retaining only the most confident and non-overlapping ones." }, { "figure_ref": [], "heading": "Output:", "publication_ref": [], "table_ref": [], "text": "The YOLO module produces a final output of bounding boxes, each linked to a class and a confidence score, representing the detected objects in the input image." }, { "figure_ref": [], "heading": "V. RESULTS AND DISCUSSIONS", "publication_ref": [], "table_ref": [], "text": "In the realm of advanced surveillance systems, the integration of work monitoring and crime detection has reached new heights, offering a comprehensive solution to enhance security measures. This innovative project leverages cutting-edge technologies, merging work monitoring outputs with crime detection capabilities, ultimately contributing to a safer and more efficient environment.\nUpon capturing an input image indicative of theft or criminal activity, the system triggers an alert mechanism. This mechanism not only highlights the suspicious event but also sends an immediate alert message to designated authorities. The integration of heatmap visualization enhances the alert system by providing a visual representation of the anomaly, allowing authorities to swiftly assess the situation and respond effectively.\nOne of the project's standout features is the seamless integration of heatmap visualization. This graphical representation method offers a clear and intuitive display of numerical data, indicating the intensity of activities within the monitored space. In the context of work monitoring and crime detection, the heatmap becomes a powerful tool, showcasing the concentration and distribution of work hours and identifying anomalies that may indicate criminal behavior." }, { "figure_ref": [], "heading": "VI. Conclusion", "publication_ref": [], "table_ref": [], "text": "This project marks a significant advancement in the convergence of work monitoring and crime detection, offering a holistic solution that promotes both workplace efficiency and security. The synergy between advanced algorithms, specialized datasets, and heatmap visualization sets this system apart, exemplifying the potential of technology to revolutionize surveillance and safety measures in various domains. At the core of the system lies the utilization of sophisticated AI/ML algorithms, particularly the YOLO model, to simultaneously monitor work activities and detect criminal incidents. The YOLO model, renowned for its efficiency in object detection, ensures precise tracking of individuals and objects within the monitored space. 
The project's specialized dataset focuses on capturing both work-related scenarios and criminal activities, enabling the model to distinguish between routine work tasks and potential thefts. " }, { "figure_ref": [], "heading": "VII. Reference", "publication_ref": [], "table_ref": [], "text": "" } ]
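The IoU metric described in the object-detection sections above (Area of Intersection divided by Area of Union) can be computed for axis-aligned boxes in a few lines of Python; the (x1, y1, x2, y2) box format is an assumption made for this example.

```python
# Self-contained IoU for axis-aligned boxes given as (x1, y1, x2, y2) with x1 < x2, y1 < y2.
def iou(box_a, box_b):
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)    # area of intersection

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter                      # area of union

    return inter / union if union > 0 else 0.0


# Identical boxes give IoU = 1.0, disjoint boxes give 0.0
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))    # 1.0
print(iou((0, 0, 10, 10), (20, 20, 30, 30)))  # 0.0
```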
This research endeavors to harness the potential of existing Closed-Circuit Television (CCTV) networks for a comprehensive approach to crowd management, crime prevention, and workplace monitoring through the integration of Artificial Intelligence (AI) and Machine Learning (ML) technologies. The primary objective is to develop and implement advanced algorithms capable of real-time analysis of video feeds, enabling the identification and assessment of crowd dynamics, early detection of potential criminal activities, and continuous monitoring of workplace environments. By leveraging AI/ML, the project aims to optimize surveillance capabilities, thereby enhancing public safety measures and improving organizational productivity. This initiative underscores the transformative impact that intelligent video analytics can have on existing infrastructure, mitigating the need for extensive system overhauls while significantly advancing security and operational efficiency.
CROWD MANAGEMENT, CRIME DETECTION, WORK MONITORING USING AI/ML
[ { "figure_caption": "Now to know, how a convolution neural network works lets break it into parts. the 3 most important parts of this convolution neural networks are, image, like those in the MNIST dataset used for handwritten digit recognition. In a basic artificial neural network setup, each pixel's value is treated as an individual feature input, resulting in 784 input nodes. While this approach may yield satisfactory results, it falls short in recognizing crucial features within the image. The model essentially processes each pixel independently, potentially missing important patterns. Scaling this concept to a larger image, such as a 1920x1080 Ultra HD image, poses significant challenges. Applying the same methodology would result in an impractical 2 million input nodes. Even with a relatively modest hidden layer of 64 nodes, which is insufficient for such a large input, the network would involve a staggering 130 million weights. This massive scale of parameters necessitates an enormous computational load, overwhelming the capabilities of most machines. The sheer volume of calculations involved makes it unfeasible for effective image recognition, emphasizing the need for more sophisticated approaches in handling high-resolution images.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Flattening is the process of transforming a 3D or 2D matrix into a 1D format, serving as the final step in preparing the image for input into the model. This step involves converting the structured representation of the image into a linear, onedimensional input. The flattened data can then be seamlessly connected to a fully connected dense layer, facilitating subsequent stages of classification in the neural network . Libraries used in this project: ➢ OpenCV: Used to read the video input and splitting videointo frames for analysing. ➢ Keras: Used to implement neural networks. It is a high-level neural network library that runs on top of tensorflow ➢ Numpy: Used to process images as the image pixel is in the form of matrix ➢ Pushbullet: It is an API used for sending SMS to mobile phone after detecting crime.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "[ 1 ]1Autonomous Anomaly Detection System for Crime Monitoring and Alert Generation Jyoti Kukad, Swapnil Soner, Sagar Pandya [2] Face Detection and Recognition for Criminal Identification System Sanika Tanmay, Aamani Tandasi, Shipra Saraswat [3] Proposed System for Criminal Detection and Recognition on CCTV Data Using Cloud and Machine Learning Samit Shirsat, Aakash Naik, Darshan Tamse [4] Crime Intention Detection System Using Deep Learning Umadevi V Navalgund, Priyadharshini.K [5] Crime Prediction Using Machine Learning and Deep Learning: A Systematic Review and Future Directions Varun Mandalapu , Lavanya Elluri", "figure_data": "", "figure_id": "fig_2", "figure_label": "1", "figure_type": "figure" } ]
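The library list in the captions above mentions OpenCV for reading the input video and splitting it into frames before classification. A hedged sketch of that preprocessing step is shown below; the file path, frame stride, and function name are placeholders rather than values from the paper.

```python
# Read a surveillance video and yield every n-th frame for downstream analysis.
import cv2


def extract_frames(video_path: str, every_n: int = 30):
    """Yield every n-th frame of the video as a NumPy array (BGR)."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:            # end of stream or read error
            break
        if idx % every_n == 0:
            yield frame
        idx += 1
    cap.release()


# Usage: the yielded frames could then be passed to a Keras classifier for activity detection.
# for frame in extract_frames("surveillance.mp4"):
#     ...
```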
Manoj R Kumar; P R Adithya; Akash Ug Scholar
[]
[]
2023-11-21
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b14", "b27", "b15", "b2", "b0" ], "table_ref": [], "text": "High Content Imaging (HCI) plays a pivotal role in modern drug discovery and development, being used throughout preclinical drug discovery cascades. It can capture detailed phenotypic responses of cells treated with compounds or genetic perturbants, and reveal complex subcellular processes. Machine learning can help analyze HCI data to unveil biological correlations, reveal mode-of-action, predict compound bioactivity, and predict toxicities [15,21,28,32,33,38]. Recent advances in ML for HCI have helped accelerate screening of compound libraries, enhanced data interpretation, and enabled novel therapeutic insights [19]. However, several challenges hinder its full potential.
Figure 1: Illustration of the proposed approach. CODA divides the role of a classifier into two separate entities: one model designed to extract generic features and a subsequent task-specific model that operates on these features to accomplish the given task. We then adapt the feature extractor when testing on a new source using self-supervision, while leaving the task-specific model untouched. The result is a model that can be easily adapted to new out-of-domain data as it arrives, seeing a significant boost in performance, without the need for any labels.
In particular, the generalization gap presents significant challenges to HCI and drug discovery. Discrepancies caused by variations in experimental conditions, apparatus, biological noise, and the presence of random or systematic errors can impede model performance. The limited ability of standard ML models to adapt or transfer across HCI settings results in reduced predictive accuracy [16]. To fully harness the power of machine learning in HCI, there is a need for robust and adaptable models that can generalize effectively across different contexts and conditions without compromising performance.
Recently, ten pharmaceutical companies, six supporting technology companies, and two non-profit partners formed the JUMP-CP (Joint Undertaking in Morphological Profiling) initiative to generate phenotypic response data for over 116,750 unique compounds, over-expression of 12,602 genes, and knockout of 7,975 genes using CRISPR-Cas9, all in human osteosarcoma cells (U2OS) [3]. The dataset is estimated to be 115 TB in size and captures 1.6 billion cells and their single-cell profiles using the Cell Painting assay [1,6]. A subset of the compounds profiled by the consortium was profiled across all the participating labs, using the different equipment setups available in the individual labs. This dataset offers a unique opportunity to develop and test domain adaptation methods for HCI.
In this study, we leverage the newly-released JUMP-CP data to develop and validate a new approach for online self-supervised domain adaptation (SSDA), Cross-batch Online Domain Adaptation (CODA), which uses cross-batch self-supervision to adapt a feature extractor to incoming out-of-domain data. By using the setup illustrated in Figure 1, where the model is separated into an adaptable feature extractor and a frozen task-specific classifier, we are able to realize huge improvements, up to 300%, when the model is applied to data from different labs or different microscopes.
Crucially, this can be done without the need for any labels for the out-of-domain data.
We test our approach using data from different institutions in the JUMP-CP data repository - training CODA using data from a source institution and performing SSDA to successfully adapt to the other institutions without access to any labels. Our contributions can be summarized as follows:
• Propose CODA, a self-supervised domain adaptation method, enabling online adaptation of a model trained on a single HCI data source to other out-of-domain sources (e.g., a different institution or microscope), demonstrating its applicability to a variety of real-world experimental settings.
• Introduce ODA as an alternative approach when cross-batch consistency learning is not feasible, resulting in a slight performance drop from CODA but significant performance improvements over supervised methods.
• Conduct an extensive experimental validation on diverse subsets of data from the JUMP-CP repository, showcasing the robustness of the proposed approaches to variations in acquisition and apparatus, and verify the effectiveness of CODA in aligning the feature extractor to the target domain.
The code to reproduce our experiments can be found at https://github.com/cfredinh/coda." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b8", "b13", "b22", "b19", "b3", "b15", "b15", "b28", "b29", "b38", "b34", "b12", "b12", "b34" ], "table_ref": [], "text": "The problem of distributional shifts between the training and test sets is widely recognized to degrade performance in various domains [5,9,14,23]. To mitigate these effects, traditional strategies involve gathering more data or employing sophisticated augmentation techniques to incorporate test distribution-like data into the training domain [20,40]. However, these approaches may not always be feasible, as anticipating the expected domain shifts during testing is not always possible. HCI data faces similar challenges due to domain shifts [4,16]. Experimental batches in HCI data exhibit high homogeneity within themselves but have limited overlap with other batches due to inherent biological noise and variations in experimental setups. These variations are commonly referred to as batch effects in HCI, representing undesirable domain shifts resulting from biological noise and difficult-to-control experimental conditions.
Research in the field of addressing distribution shifts focuses on two main directions: Domain Generalization (DG) and Domain Adaptation (DA). DG aims to learn domain-invariant features from one or multiple source domains during training, using techniques focused on identifying domain-invariant features [5,12,16,29]. DA, on the other hand, leverages data from parts or the entire target domain during training, allowing for supervised or unsupervised alignment of features to handle distribution shifts [30,31,34,39]. However, anticipating all possible distribution shifts during training is impractical, resulting in limited generalization capabilities across test domains. Consequently, performance cannot be guaranteed for unknown test domains.
The concept of updating model weights online has recently gained attention, with [35] introducing a pre-text task for weight updates, followed by [13] using an image reconstruction task. The underlying principle in most such approaches is that either only a sample or the full test set can be used to align the test domain without relying on any data associated with the primary task. 
This approach has shown clear performance gains in the natural image domain, e.g. [13,35]. Although these methods individually update weights for each sample, they are suboptimal for feature extraction in HCI data, as confirmed by our own experiments and recent findings [22]. Considering the nature of HCI data, which is often grouped into subsets such as wells, plates, or batches, adapting feature extraction strategies to these groups becomes an appealing and efficient option.
While test time domain adaptation has shown success in the natural imaging domain, its application in the medical and biomedical imaging domain, specifically in tasks like medical image segmentation [24] and image reconstruction [10, 18], remains limited. Notably, there is a lack of research directly addressing domain shifts in medical and biomedical classification tasks, and particularly for HCI data, where these distributional shifts overwhelmingly dominate the learning signal." }, { "figure_ref": [ "fig_1", "fig_2", "fig_2", "fig_2", "fig_2" ], "heading": "Methods", "publication_ref": [ "b12", "b15", "b12", "b34", "b1", "b12", "b15", "b1", "b15", "b12", "b34", "b1", "b15", "b15" ], "table_ref": [ "tab_1" ], "text": "In this study, we address the challenge of generalization gaps caused by domain shifts in new sources of High Content Imaging (HCI) data. To tackle this problem, we propose a novel self-supervised domain adaptation (SSDA) strategy called CODA, which is able to deal with the unique challenges associated with HCI data.
Building upon the recent work of [13], we adopt a dual-model approach, which separates the model into a feature extractor and a classifier (Figure 1). Within our framework, the feature extractor is trained in a self-supervised manner and produces features which are then processed by the classifier to solve the classification task (Figure 2). Then, when unlabeled data from a new domain is encountered, one can update only the feature extractor using self-supervision to adapt to the new domain. This allows the model to adapt the feature extractor to the new domain while preserving the ability of the classifier to make correct predictions.
However, as demonstrated in this study, directly applying this design yields very poor performance in HCI data. This is because the biological signals of interest are overshadowed by acquisition and experimental artifacts (see Table 2). To overcome this obstacle, we made adaptations inspired by [16] to modify the SSDA so that it becomes agnostic to these distracting artifacts. This allows it to learn features that better distinguish the biological signal of interest and pass them on to the frozen classifier.
Baseline The primary baselines we utilize in this study involve the supervised learning of a standalone Vision Transformer (ViT) model on HCI data, as commonly used. Once trained, we directly apply this model to the target dataset.
The dual model In our approach, we utilize a vision transformer (ViT) model to learn meaningful representations from input patches. However, the standard ViT lacks the ability to differentiate generic low-level features from task-specific ones. To address this, we take inspiration from Test-Time Training (TTT) [13,35], which separates the model into a feature extractor and a classifier. As seen in Figure 2, the feature extractor learns generic representations through self-supervised training, while the classifier focuses on solving the specific task. In our study, we employ DINO [2], a consistency-based method, instead of reconstruction-based Masked Autoencoders (MAE) [13]. This is motivated by the subpar performance of MAE in HCI data [22]; DINO has shown better performance than other SSL approaches in HCI [16,22]. After the feature extractor is trained, it generates features that are used by the classification model.
Figure 2: In the first step, a labeled data source is used to pretrain a feature extractor using self-supervision. In step 2, a classifier is appended to the feature extractor and supervised training is performed on the same data source. In step 3, the model is adapted online to a new unlabeled out-of-domain data source by using cross-batch consistency learning [16] and self-supervision [2] to adapt the feature extractor while keeping the weights of the classifier frozen.
Self-supervised domain adaptation When a model is deployed in a new domain, the performance can be severely impacted by distribution shifts that alter the data representation, particularly in the case of HCI data, as discussed previously. These shifts primarily stem from intrinsic properties of the data rather than task-specific characteristics. In our problem, the task remains constant; it is the appearance of the data that changes. Therefore, the domain adaptation method should focus on producing unbiased features that can be consumed by the classifier for the task at hand. Adapting the features of a monolithic model to a new domain can be challenging due to the entanglement of low-level and high-level features, impacting both generic and task-specific representations. However, employing a dual-model system allows for updating the feature extractor independently, while preserving the task-specific portion of the network. Inspired by this insight, we adopt the on-the-fly feature extractor update approach introduced by [13,35].
Figure 2 illustrates the process. First, a labeled data source is utilized to pretrain the feature extractor through self-supervised learning (step 1). Subsequently, a classifier is appended to the feature extractor, and supervised training is conducted on the same data source (step 2). Finally, to adapt the model to a new unlabeled out-of-domain data source, self-supervision techniques, specifically DINO [2], are employed, allowing the feature extractor to be updated while keeping the classifier weights frozen (step 3). By employing this approach, the classification model remains unaffected, while the feature extractor is adapted to extract generic representations suitable for the task, agnostic to the peculiarities of the data source.
Cross-batch consistency learning In High Content Imaging (HCI), the data collection process is characterized by discrete experimental batches, leading to distribution shifts caused by variations in experimental conditions, capturing settings, and time points. These shifts are commonly referred to as batch effects. Ideally, each HCI image should capture only the biological effects of the treatment and no batch effects. However, in practice, batch effects often dominate the data, causing SSL methods to prioritize these confounding factors over the relevant biological signals. As a result, SSL methods tend to model batch effects rather than the desired biological signals, leading to suboptimal performance [16].
To address this challenge, Haslum et al. 
[16] propose a solution called Cross-Domain Consistency Learning (CDCL), which builds upon the principles of consistency-based SSL methods. CDCL leverages the concept of consistent representation between pairs of images that share the same treatment but come from different domains. The underlying assumption is that when the network is presented with two images of the same treatment but from different batches, the shared signal of interest should be the biological signal rather than the batch effects. We adopt this strategy in both the in-domain self-supervised step and the adaptation step (steps 1 and 3 in Figure 2) to mitigate the influence of batch effects and enhance the robustness of the self-supervised feature extractor." }, { "figure_ref": [], "heading": "Test Time Training", "publication_ref": [ "b12", "b16", "b12" ], "table_ref": [], "text": "Beyond the supervised baseline, we also include a recent online domain adaptation approach, Test-Time Training (TTT) [13]. TTT uses a dual-model setup with an MAE [17] as the feature extractor and a classification model stacked on top of it; these are trained in sequence. At test time, the feature extractor is updated for each test image individually; see [13] for more details." }, { "figure_ref": [ "fig_3", "fig_2" ], "heading": "Experimental Setup", "publication_ref": [ "b2", "b0", "b6", "b2", "b35", "b10", "b4", "b1" ], "table_ref": [ "tab_0", "tab_0" ], "text": "In this section, we describe the datasets, the task, and the implementation details of the methods described in Section 3. Starting with the task, we focus on Mechanism-of-Action (MoA) prediction. The MoA of a compound describes how a substance produces a pharmacological effect, often involving determining which target, such as proteins or enzymes, the substance interacts with. Understanding the MoA can help in predicting potential drug interactions, side effects, and can help guide the design of new, more effective drugs or therapies.\nDataset We conducted experiments using different subsets of the JUMP-CP Cell Painting High Content Imaging set [3]. The JUMP-CP dataset encompasses a wide range of compound and genetic perturbations that were imaged using an optimized version of the Cell Painting assay [1,6]. This dataset was generated through collaborations between multiple institutions, making it an ideal choice for studying domain shifts due to its diverse origins and the inclusion of various microscope types and settings. See Figure 3 for image examples.\nFor our analyses, we selected a subset of perturbations from all participating institutions of the JUMP-CP initiative. This subset comprised 302 unique compounds that were imaged across 15 different centers, coming from the JUMP-CP TARGET2 plates. It encompassed a total of 120 experimental batches and covered 141 distinct plates. As targets for our study, we aim to predict the Mechanism of Action (MoA) information associated with the compounds, which was obtained from the Drug Repurposing Hub [7]. Among the compounds, 135 had single MoA labels, representing 54 unique MoA types. With the goal of predicting the MoA of each of the compounds, we treat the problem as a multi-class classification task.\nOur main experiments focus on a subset of the data from four (anonymized) partners within the JUMP-CP consortium [3]. This subset consists of images captured using different microscopes and microscope types, with variations in objectives used.
These four sources were selected as they were the largest subsets of data from each of the microscope types, providing the most diverse set of data sources; see the top of Table 1 for details. Two additional sources were also used for auxiliary testing; see Table 1. These sources use similar microscope setups to the ones used in S5, allowing for comparison between similar image acquisition settings. Additional details for the sources used in this work can be found in Appendix A.2.\nNote that in this work we use a subset of images with known MoA labels. The raw image data were prepared using standard illumination correction and intensity outlier removal, followed by downscaling to half the original size and compression, using DeepProfiler [27]. Finally, we reduce the image channels from five to the three most informative channels, based on observations by [36].\nImplementation details Throughout this study, we employ DEIT-S models [37], initialized with pretrained weights from IMAGENET [11], similarly to [25]. For all supervised models, we utilize cross-entropy loss for the MoA task. Our training process includes a linear warmup phase of 3 epochs, during which the learning rate gradually increases until it reaches a value of 10^-4. Subsequently, we employ a step-wise learning rate reduction strategy, reducing the learning rate by a factor of 10 each time the validation loss and accuracy metric reach a saturation point.\nTo train the dual model, we follow a two-step approach depicted in Figure 2. First, we perform a self-supervised step using DINO, adhering to the default settings described in [2] with slight variations. This involves training for 300 epochs, using a learning rate of 10^-4, linear warm-up for 10 epochs, followed by cosine annealing. We use an exponential moving average of 0.996, following the augmentation strategy from [22]. When incorporating cross-batch examples, the same setup is used, with the exception of how the augmented samples are combined, using one global and three local views from each image in the sampled pair. The pairs are sampled based on metadata, with the requirement that the images are of the same treatment but come from distinct batches. In the second step, the supervised task-specific step, we stack a DEIT-S model on top of the frozen feature extractor trained in the first step. The tokens of the feature extractor are passed into the task-specific model by removing its embedding layer and replacing it with a linear layer. The training strategy for this step remains consistent with the supervised baseline approach described earlier.\nFinally, the third step is adaptation using DINO, with or without CB sampling, to adapt the feature extractor to new out-of-domain data. This is done following the exact same strategy as in the first step described above, unless otherwise stated, but applied to the test set images, crucially not relying on any labels related to the primary task." }, { "figure_ref": [], "heading": "Experiments and Results", "publication_ref": [ "b12", "b16" ], "table_ref": [ "tab_1", "tab_1", "tab_1", "tab_1", "tab_1", "tab_1", "tab_1", "tab_1", "tab_4" ], "text": "In previous sections, we discussed the significant impact of distribution shifts in HCI.
In this section, we demonstrate how dramatically performance degrades when applying a standalone model to a new HCI source. We measured performance using Accuracy, as the label proportions are maintained across sources. We further report the F1 scores in Table 4 in the Appendix. We first assess the generalization and adaptation capabilities of standalone DEIT models, which serve as our baselines. Next, we deploy the dual model and evaluate its performance with and without adapting the feature extractor on the test source. Finally, we incorporate online self-supervised domain adaptation to address batch effects and assess its effectiveness. The results of our main experiments can be found in Table 2 and Figure 7.\nIn our experiments, we consider the following models and baselines:\n• Supervised A standalone DEIT-S trained in a supervised fashion on the source data, applied to domain-shifted target data.\n• Dual-model-DINO A DEIT-S feature extractor with a DEIT-S classifier stacked on top. The feature extractor is self-supervised with DINO and the classifier is trained in a supervised manner, with the feature extractor being frozen, both on the source data.\n• Dual-model-CB The same as above, but in addition to DINO we use cross-batch image pair sampling, as described in Section 3.\n• ODA Online domain adaptation -this is the same as Dual-model-DINO but the feature extractor is adapted to the target data using self-supervision.\n• TTT Similar to ODA but using MAE instead of DINO and updating the feature extractor one image at a time.\n• CODA Cross-batch Online Domain Adaptation -this is the same as Dual-model-CB but the feature extractor is adapted to the target data.\nBaseline performance We begin our evaluation with the standalone DEIT-S classification models. When the models are trained and evaluated on the same data source, as shown in Table 2, the performance ranges from 30.1 to 40.5 with a mean of 36.5% when including all sources, in terms of MoA accuracy. This represents the situation when there is no domain shift and we have access to labels. When these models are applied to other splits (introducing a domain shift and no access to labels), a significant drop in performance is observed, with values ranging from 5.9 to 19.5 (first row of each non-diagonal element of Table 2 and the first bar in Figure 7 in the Appendix). The substantial domain shifts cause the models to be reduced in accuracy by up to 84%.\nPerformance of the dual model Replacing the baseline with the dual model yields a performance similar to that of the standalone model on the in-domain data. Somewhat surprisingly, these models exhibit lower performance compared to the standalone models when applied to out-of-domain sources. The accuracy ranges from 4.7 to 14.8, as indicated in the second row of each non-diagonal element in Table 2 (and the second bar in Figure 7 in the Appendix).\nPerformance when updating the feature extractor Both the standalone and dual models fail to generalize to new HCI data sources. However, the situation changes dramatically when we employ online domain adaptation (ODA) on the feature extractor of the dual model as we apply it to new sources.
As illustrated in Table 2 (fifth row of each non-diagonal element of the source split) and the fourth bar in Figure 7 in the Appendix, we observe a substantial performance improvement compared to both the standalone model and the dual model without an updated feature extractor. The out-of-domain MoA accuracy for ODA ranges from 10.6 to 27.2 (a mean increase of 174.8% ± 32.5 over the baseline), surpassing the performance of the standalone model -although still not reaching the level achieved when testing and evaluating within the domain.\nPerformance when employing cross-batch learning Introducing cross-domain consistency learning (CDCL) for self-supervision of the dual model's feature extractor brings additional improvements in both in-domain and out-of-domain scenarios. CODA combines CDCL with online domain adaptation (ODA), yielding an enormous performance boost over the baseline, as shown in Figure 7 in the Appendix (last bar) and the last row of Table 2. CODA yields MoA accuracy ranging from 11.9 to 36.7; in many cases, the out-of-domain performance is comparable to the performance achieved when training and testing are conducted within the same domain -in some cases even exceeding it (e.g. S3→S8 with CODA yields 33.9 while the S8→S8 baseline is 31.0). The average performance boost of CODA over the baseline is 232.6% ± 63.6. CDCL is also helpful when used without online domain adaptation, although it provides less of a performance boost than ODA, as seen in row three of the non-diagonals in Table 2 and the third bar in Figure 7 in the Appendix.\nMAE Performance The idea of using separate models for feature extraction and classification, followed by feature extractor alignment, was inspired by [13], where they use MAE [17] for self-supervision. However, the performance of MAE on HCI data has been shown to be inferior to DINO, and it also does not allow for training with cross-batch examples [22]. For completeness, we report in Table 2, Table 3 and in the Appendix the results when using MAE instead of DINO. In domain, MAE performs slightly worse than DINO. However, in the ODA and TTT setups, MAE fails to approach DINO performance -even without CB training." }, { "figure_ref": [ "fig_4", "fig_5", "fig_4", "fig_5", "fig_5", "fig_5", "fig_5" ], "heading": "Analysis and Ablation Studies", "publication_ref": [ "b5" ], "table_ref": [ "tab_1", "tab_1" ], "text": "In this section, we delve into further analysis and conduct ablation studies to gain a deeper understanding of the performance of CODA and the other models introduced in our experiments. Generalization across similar microscopes In order to gain further insights into the generalization capabilities of the different models, we conducted experiments using two additional data sources, S7 and S10, as test sets. These two sources share more similarities in terms of microscope setup with S5, compared with S3, S8, and S11. If the microscope setup were the primary factor influencing generalization, we would expect to observe improved performance as the similarity (domain distances) between the sources increases [26]. However, upon examining the results in Table 2, we found no significant performance improvements for models trained on S5 and tested on S7 and S10 (S5→S7 and S5→S10). Surprisingly, the performance in S5→S8 was actually higher, despite the use of distinct microscope types and illumination methods.
This suggests that the impact of source-to-source variability on the model's generalization performance cannot be solely attributed to differences in imaging settings -but rather to small differences in the protocol, reagents, or the environment.\nEffect of Data Granularity In our main experiments, we investigated the effectiveness of adapting the feature extractor to the full test source dataset, resulting in significant performance benefits, as shown in Table 2 and Figure 7. However, in real drug discovery scenarios, data is generated in separate experimental batches, introducing batch variability. In order to process the data as it arrives, an online approach is desirable. Moreover, we expect batch-level variability to be particularly pronounced, as it involves controlling confounding environmental variables such as incubation time, reagent concentration, and device usage. Therefore, applying ODA at the batch level may prove advantageous compared to applying it across multiple batches.\nTo assess the performance of ODA and the baselines at different granularities, we conducted transfers from S3→S11, S5→S11, and S8→S11, while varying the level of granularity at which the models were applied: plate, batch, and source. S11 consists of seven distinct plates belonging to four batches, and we applied ODA, along with the baseline models, separately for each plate and batch, resulting in a total of 11 settings (4 batches and 7 plates). We used the same setup as described earlier, with the mini-batch size reduced to 64 to accommodate the smaller dataset size when training on individual batches and plates. The results are illustrated in Figure 4.\nOverall, ODA proves beneficial at all three levels of granularity (source, batch, and plate). Interestingly, aligning per batch yields the best performance, while plate-level alignment is slightly better or on par with full source training. This observation supports the known variability between batches, affirming the advantage of aligning within a group of similar variability. Additionally, since the same number of iterations is performed regardless of whether ODA is applied at the source or batch level, there is no additional cost associated with applying it to smaller subsets.\nIn fact, it can be considered preferable as it facilitates easier parallelization.\nODA using only a single plate or batch While aligning the feature space at the batch level proved to be the most effective non-cross-batch strategy, aligning features for each new batch or plate can be time-consuming (although it is still relatively low in time and cost compared to the experimental and imaging pipeline). To evaluate the feasibility of aligning the feature extractor when working with subsets of the data, we repeated the experiment described in the previous paragraphs. However, this time we evaluated each of the models on the full source dataset, rather than only on the subset it was aligned on. That is, we performed ODA on a single plate or batch, and applied this model to the rest of the out-of-domain data. The results of this experiment are shown in Figure 4. We observed a slight but noticeable drop in performance when models were evaluated on the full source dataset compared to when they were evaluated only on the subset they were aligned on. 
Nevertheless, the overall benefits of ODA were still maintained, as even aligning with a random plate led to substantial performance improvements over the supervised baseline.\nHow ODA/CODA changes the feature space To gain a deeper understanding of how online alignment using ODA or CODA affects the feature space, we conducted a thorough analysis. We visualize the feature space of the dual model in Figure 5 and perform Centered Kernel Alignment (CKA) [8] analysis pre- and post-alignment in Figure 6.\nIn Figure 5, we provide a UMAP of the embeddings of the CLS token to visualize the feature space. When transitioning from the in-domain to the out-of-domain setting (left and middle column), we observe a significant shift in the feature space for both the Dual-Model-DINO and Dual-Model-CB. In the in-domain setting (S3→S3, left panels), clear structures are present. Note that Dual-Model-DINO (S3→S3) is over-clustered because it has picked up on batch effects, an undesirable property. When applied to out-of-domain data without adaptation (middle panels, S3→S5), the model struggles to distinguish any mechanisms-of-action. After applying ODA or CODA (right panels, S3→S5), the structure in the feature space is restored, allowing for better differentiation of MoA classes.\nWe further examined feature similarity within the feature extractor across different layers using CKA analysis. The results (Figure 6) revealed noticeable differences in feature representation pre- and post-alignment. This indicates that weights throughout the feature extractor are updated during alignment, particularly for high-level features. It is possible that biological variations between source setups are challenging to adapt to using low-level features. Given the strong predictive performance of ODA and CODA models on out-of-domain data, as well as the good fit of the adapted features to the pre-trained task model, we focused on comparing the feature similarity between CODA before and after adaptation (S3→S5). We compare how closely the pre- and post-alignment features resemble those learned in the in-domain setting. Figure 6 (top right) demonstrates a clear diagonal correlation between the post-alignment model and its S5 Dual-model-CB counterpart, suggesting that CODA training successfully learns features similar to those learned in the in-domain setting. When examining the CLS feature similarity (Figure 6 bottom right), the trend is clearer, and the strength of the similarity seems to be somewhat maintained, even at the high-level features. Interestingly, without aligning the feature extractor, the CLS representations remained similar across different depths of the feature extractor (Figure 6 bottom center), indicating that no high-level features are distinguishable in the new domain. This suggests that the domain shift is so severe that the learned features are no longer useful, potentially explaining why performance degradation is seen without alignment." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "Our empirical findings consistently support our initial expectations and highlight the limitations of standalone models in generalizing to new High Content Imaging (HCI) sources. In contrast, the dual model, with its adaptive capabilities, demonstrates a significant ability to reduce generalization gaps.
The dual model's bifurcated structure, consisting of separate feature extraction and task-specific components, facilitates a more straightforward adaptation to domain shifts. This design allows the task-specific features to remain intact while effectively adapting the feature extractor to new data characteristics.\nThe incorporation of CDCL into the self-supervised feature extraction process further strengthens the model's ability to mitigate batch effects, enhancing its overall robustness. CDCL ensures consistent representations across different batches, leading to substantial improvements in the model's generalization capabilities.\nOur work highlights the critical importance of the training methodology employed in instructing the feature extractor via self-supervision. With the increasing development of innovative self-supervised methods, we anticipate the emergence of more advanced domain adaptation strategies in the near future. These strategies are expected to effectively address the current generalization gap in HCI, enabling more efficient and robust applications in this field.\nAside from the performance benefits demonstrated in our work, it is worth noting that methods like CODA that can adapt online to domain shifts are of critical importance in HCI and drug discovery, where sources of variation are high and unpredictable. As such, online domain adaptation methods like CODA will be essential in mitigating these sources of variation that are ultimately beyond our control." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Our findings emphasize the limitations of standalone models when applied to novel High Content Imaging (HCI) data sources and highlight the effectiveness of the dual model approach in reducing generalization gaps. The dual model's bifurcated structure, comprising self-supervised feature extraction and task-specific components, enhances adaptability to new domain shifts. Furthermore, the integration of cross-domain consistency learning (CDCL) enhances the model's robustness and consistency across different batches, thereby improving its generalization capabilities. The advancement of sophisticated self-supervised methods is expected to drive progress in online domain adaptation strategies, ultimately addressing the prevailing generalization gap in high content imaging. Overall, our method offers a viable strategy to mitigate batch effects and distribution shifts caused by differences in experimental settings and apparatus, leading to improved generalization performance in the HCI domain." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements. This work was supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP). We acknowledge the use of Berzelius computational resources provided by the Knut and Alice Wallenberg Foundation at the National Supercomputer Centre. We would also like to thank the AWS Open Data Sponsorship Program for sponsoring the JUMP-CP data storage." }, { "figure_ref": [], "heading": "Supplementary Material for Bridging Generalization Gaps in High Content", "publication_ref": [], "table_ref": [], "text": "Imaging Through Online Self-Supervised Domain Adaptation\nWe compare the various learning setups trained on data from one source and applied to some target data, without access to labels from the target." }, { "figure_ref": [], "heading": "A.
Appendix overview", "publication_ref": [], "table_ref": [], "text": "Here, we provide further details and results from the experiments carried out in this work. In Section A.1, we present supplementary results and figures derived from the experiments discussed in the main text, and in Section A.2, we provide additional information regarding the datasets and sources used in this study." }, { "figure_ref": [], "heading": "A.1. Additional results", "publication_ref": [], "table_ref": [], "text": "Here we provide auxiliary information, results, and figures from the experiments run and data used in this work.\nIn Table 4, we report results from the same experiments discussed in the main text and reported in Table 2. We report F1 scores, corroborating the results reported in the main text when using Accuracy. The main results reported in Table 2 (excluding TTT) are also shown in bar-chart format in Figure 1, clearly visualizing the performance boosts of ODA and CODA compared to the baseline. In Table 5, additional experiments containing the performance of the Dual-model-MAE and TTT are reported between each of the sources used." }, { "figure_ref": [], "heading": "A.2. Detailed Data Description", "publication_ref": [ "b2" ], "table_ref": [], "text": "As described in Section 4 of the main text, the primary experiments focus on a subset of the data from four (anonymized) partners within the JUMP-CP consortium [3]. The data of those sources, along with two additional test sources, are described in Table 1. Here we include further information about these sources. We start with the four primary sources, which were selected because they contain the largest subsets of data from each of the different microscope types used, thus providing the most diverse set of data sources:\n• S3 contains 25 plates, totaling 9,600 unique wells and 85,409 images in total, belonging to 13 distinct experimental batches. These were captured using the Opera Phoenix microscope in widefield mode, using laser excitation and a 20X/1 NA objective.\n• S5 contains 24 plates, totaling 9,216 unique wells and 82,256 images in total, belonging to 23 distinct experimental batches. These were captured using the CV8000 confocal microscope, using laser excitation and a 20X/0.75 NA objective.\n• S8 contains 4 plates, totaling 1,536 unique wells and 13,824 images in total, belonging to 4 distinct experimental batches. These were captured using the ImageExpress Micro confocal microscope, using LED excitation and a 20X/0.75 NA objective.\n• S11 contains 7 plates, totaling 2,688 unique wells and 23,373 images in total, belonging to 4 distinct experimental batches. These were captured using the Operetta widefield microscope, using LED excitation and a 20X/1 NA objective.\nTwo additional sources were also used for auxiliary testing. Both use similar microscope setups to that used by S5, allowing comparison of generalization performance between models trained and tested in sources with similar imaging setups.\n• S7 contains 7 plates, totaling 2,688 unique wells and 24,192 images in total, belonging to 7 distinct experimental batches. These were captured using the CV7000 confocal microscope, using laser excitation and a 20X/0.75 NA objective.\n• S10 contains 6 plates, totaling 2,304 unique wells and 13,812 images in total, belonging to 6 distinct experimental batches. These were captured using the CV8000 confocal microscope, using laser excitation and a 20X/0.75 NA objective." } ]
High Content Imaging (HCI) plays a vital role in modern drug discovery and development pipelines, facilitating various stages from hit identification to candidate drug characterization. Applying machine learning models to these datasets can prove challenging, as they typically consist of multiple batches, affected by experimental variation, especially if different imaging equipment has been used. Moreover, as new data arrive, it is preferable that they are analyzed in an online fashion. To overcome this, we propose CODA, an online self-supervised domain adaptation approach. CODA divides the classifier's role into a generic feature extractor and a task-specific model. We adapt the feature extractor's weights to the new domain using cross-batch self-supervision while keeping the task-specific model unchanged. Our results demonstrate that this strategy significantly reduces the generalization gap, achieving up to a 300% improvement when applied to data from different labs utilizing different microscopes. CODA can be applied to new, unlabeled out-of-domain data sources of different sizes, from a single plate to multiple experimental batches.
Bridging Generalization Gaps in High Content Imaging Through Online Self-Supervised Domain Adaptation
[ { "figure_caption": "Originally published at the Winter Conference on Applications of Computer Vision (WACV 2024).", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 .1Figure1. Illustration of the proposed approach. CODA divides the role of a classifier into two separate entities: one model designed to extract generic features and a subsequent task-specific model that operates on these features to accomplish the given task. We then adapt the feature extractor when testing on a new source using self-supervision, while leaving the task-specific model untouched. The result is a model that can be easily adapted to new out-of-domain data as it arrives, seeing a significant boost in performance, without the need for any labels.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. The training strategy and deployment of the dual model in CODA.In the first step, a labeled data source is used to pretrain a feature extractor using self-supervision. In step 2, a classifier is appended to the feature extractor and supervised training is performed on the same data source. In step 3, the model is adapted online to a new unlabeled out-of-domain data source by using cross-batch consistency learning[16] and self-supervision[2] to adapt the feature extractor while keeping the weights of the classifier frozen.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Example images from the six sources, showing the similarity between images of the same source but different compounds, highlighting the inherent variability and batch effects.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Feature embeddings. UMAP visualization of the feature space, showing the impact of distribution shift and the effects of feature extractor alignment. The first column shows the features in S3, column two and three shows S5.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. CKA. Feature similarities in S5 for all tokens (top) and CLS tokens (bottom) of the feature extractor. (left) CODA before and after adaptation. (middle) CODA before adaptation vs. a Dual-Model-CB trained in S5. (right) CODA after adaptation vs. 
a Dual-Model-CB trained in S5.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Imaging settings and data volume for the studied sources.", "figure_data": "SourceDescriptionObjective Batches Plates ImagesS3Opera Phoenix, widefield, laser20X/11325 85,409S5CV8000, confocal, laser20X/0.752324 82,256S8 ImageExpress Micro, confocal, LED 20X/0.7544 13,824S11Operetta, widefield, LED20X/147 23,373S7CV7000, confocal, laser20X/0.7577 24,192S10CV8000, confocal, laser20X/0.7566 13,812", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Generalization performance across target sources (Acc.).", "figure_data": "TargetSourceS3S5S8S11S7S10Model type (Set trained)39.0 ± 0.4 13.5 ± 1.3 15.6 ± 0.9 11.4 ± 1.019.5 ± 0.8 12.0 ± 0.6 Supervised35.6 ± 0.2 9.2 ± 1.2 9.9 ± 0.1 7.1 ± 0.514.8 ± 0.5 9.5 ± 0.8Dual-model-DINOS340.5 ± 0.4 14.3 ± 0.2 12.8 ± 1.1 9.0 ± 0.9 -8.2 ± 1.4 9.0 ± 1.0 9.0 ± 2.817.9 ± 0.5 10.8 ± 0.7 Dual-model-CB 10.8 ± 3.3 7.6 ± 1.6 TTT-24.4 ± 0.6 25.4 ± 0.4 20.5 ± 1.327.2 ± 0.6 17.3 ± 1.0 ODA-33.3 ± 0.3 33.9 ± 0.2 30.4 ± 0.636.7 ± 0.3 26.7 ± 0.1 CODAS5", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Effect of Data Granularity. (Accuracy) Investigating the performance impact when using ODA on different subsets (plate, batch, source) of the target domain. Single plate/ batch denotes the scenario where ODA is applied on a single plate or batch and then evaluated on the full target source. All plate/ batch denotes the scenario where ODA is applied on a single plate or batch and tested on the same subset.", "figure_data": "Supervised Dual-model-DINO Single plate ODA Single batch ODA All plates ODA All batches ODA Full source ODA Supervised Supervised Supervised Supervised Dual-model-DINO Dual-model-DINO Dual-model-DINO Dual-model-DINO Single plate ODA Single plate ODA Single plate ODA Single plate ODA Single batch ODA Single batch ODA Single batch ODA Single batch ODA All plates ODA All plates ODA All plates ODA All plates ODA All batches ODA All batches ODA All batches ODA All batches ODA Full source ODA Full source ODA Full source ODA Full source ODA0.10 S3 S11 0.15 0.10 0.15 S3 S11 0.10 0.15 S3 S11 0.10 0.15 S3 S11 0.10 0.15 S3 S110.20 0.20 0.20 0.20 0.200.05 0.05 0.05 0.05 0.050.10 S5 S11 0.10 S5 S11 0.10 S5 S11 0.10 S5 S11 0.10 S5 S110.15 0.15 0.15 0.15 0.150.10 0.10 0.10 0.10 0.100.15 S8 S11 0.15 S8 S11 0.15 S8 S11 0.15 0.15 S8 S11 S8 S110.20 0.20 0.20 0.20 0.20Figure 4.", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Generalization performance (Accuracy) from S3 to S7 and S10, between DINO and MAE.", "figure_data": "Model typeS3 → S7S3 → S10Dual-model-MAE14.2 ± 0.88.9 ± 0.5Dual-model-DINO14.8 ± 0.59.5 ± 0.8ODA-MAE17.8 ± 1.111.1 ± 0.4ODA27.2 ± 0.617.3 ± 1.0", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" } ]
Johan Fredin Haslum; Christos Matsoukas; Karl-Johan Leuchowius; Kevin Smith
[ { "authors": "Mark-Anthony Bray; Shantanu Singh; Han Han; Chadwick T Davis; Blake Borgeson; Cathy L Hartland; Maria Kost-Alimova; Sigrún Margrét Gústafsdóttir; Christopher C Gibson; Anne E Carpenter", "journal": "Nature Protocols", "ref_id": "b0", "title": "Cell painting, a high-content image-based assay for morphological profiling using multiplexed fluorescent dyes", "year": "2016" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b1", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Srinivas Niranj Chandrasekaran; Jeanelle Ackerman; Eric Alix; D Michael Ando; John Arevalo; Melissa Bennion", "journal": "bioRxiv", "ref_id": "b2", "title": "Jump cell painting dataset: morphological impact of 136,000 chemical and genetic perturbations", "year": "2023" }, { "authors": "Srinivas Niranj Chandrasekaran; Hugo Ceulemans; Justin D Boyd; Anne E Carpenter", "journal": "Drug Discovery", "ref_id": "b3", "title": "Image-based profiling for drug discovery: due for a machine-learning upgrade? Nature Reviews", "year": "2020" }, { "authors": "Sungha Choi; Sanghun Jung; Huiwon Yun; Joanne Taery Kim; Seungryong Kim; Jaegul Choo", "journal": "", "ref_id": "b4", "title": "Robustnet: Improving domain generalization in urban-scene segmentation via instance selective whitening", "year": "2021" }, { "authors": "Beth A Cimini; Srinivas Niranj Chandrasekaran; Maria Kost-Alimova; Lisa Miller; Amy Goodale; Briana Fritchman", "journal": "bioRxiv", "ref_id": "b5", "title": "Optimizing the cell painting assay for imagebased profiling", "year": "2022" }, { "authors": "Steven M Corsello; Joshua A Bittker; Zihan Liu; Joshua Gould; Patrick Mccarren; Jodi E Hirschman; Stephen E Johnston; Anita Vrcic; Bang Wong; Mariya Khan; Jacob K Asiedu; Rajiv Narayan; C C Mader; Aravind Subramanian; Todd R Golub", "journal": "Nature Medicine", "ref_id": "b6", "title": "The drug repurposing hub: a nextgeneration drug library and information resource", "year": "2017" }, { "authors": "Corinna Cortes; Mehryar Mohri; Afshin Rostamizadeh", "journal": "", "ref_id": "b7", "title": "Algorithms for learning kernels based on centered alignment", "year": "2014" }, { "authors": "Gabriela Csurka", "journal": "", "ref_id": "b8", "title": "Domain adaptation for visual applications: A comprehensive survey", "year": "2017" }, { "authors": "Mohammad Zalbagi Darestani; Jiayu Liu; Reinhard Heckel", "journal": "", "ref_id": "b9", "title": "Test-time training can close the natural distribution shift performance gap in deep learning based compressed sensing", "year": "2022" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; K Li; Li Fei-Fei", "journal": "", "ref_id": "b10", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Qi Dou; Daniel Coelho De Castro; Konstantinos Kamnitsas; Ben Glocker", "journal": "Neural Information Processing Systems", "ref_id": "b11", "title": "Domain generalization via model-agnostic learning of semantic features", "year": "2019" }, { "authors": "Yossi Gandelsman; Yu Sun; Xinlei Chen; Alexei A Efros", "journal": "", "ref_id": "b12", "title": "Test-time training with masked autoencoders", "year": "2022" }, { "authors": "Robert Geirhos; Carlos R Medina Temme; Jonas Rauber; H Heiko; Matthias Schütt; Felix Bethge; Wichmann", "journal": "", "ref_id": "b13", "title": "Generalisation in humans and deep neural networks", "year": "2018" 
}, { "authors": "Johan Fredin Haslum; Charles Lardeau; Johan Karlsson; Riku Turkki; Karl-Johan Leuchowius; Kevin Smith; Erik Müllers", "journal": "bioRxiv", "ref_id": "b14", "title": "Cell painting-based bioactivity prediction boosts high-throughput screening hit-rates and compound diversity", "year": "2023" }, { "authors": "Johan Fredin Haslum; Christos Matsoukas; Karl-Johan Leuchowius; Erik Mullers; Kevin Smith", "journal": "", "ref_id": "b15", "title": "Metadata-guided consistency learning for high content images", "year": "2022" }, { "authors": "Kaiming He; Xinlei Chen; Saining Xie; Yanghao Li; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b16", "title": "Masked autoencoders are scalable vision learners", "year": "2022" }, { "authors": "Yufan He; Aaron Carass; Lianrui Zuo; Blake E Dewey; Jerry L Prince", "journal": "Medical image analysis", "ref_id": "b17", "title": "Autoencoder based self-supervised test-time adaptation for medical image analysis", "year": "2021" }, { "authors": "Katie Heiser; Chadwick T Peter F Mclean; Ben Davis; Fogelson; Pamela Hannah B Gordon; Brett Jacobson; Ben Hurst; Ronald W Miller; Berton A Alfa; Earnshaw", "journal": "BioRxiv", "ref_id": "b18", "title": "Identification of potential treatments for covid-19 through artificial intelligence-enabled phenomic analysis of human cells infected with sars-cov-2", "year": "2020" }, { "authors": "Dan Hendrycks; Thomas G Dietterich", "journal": "", "ref_id": "b19", "title": "Benchmarking neural network robustness to common corruptions and perturbations", "year": "2018" }, { "authors": "Markus Hofmarcher; Elisabeth Rumetshofer; Djork-Arne Clevert; Sepp Hochreiter; Gunter Klambauer", "journal": "Journal of chemical information and modeling", "ref_id": "b20", "title": "Accurate prediction of biological assays with high-throughput microscopy images and convolutional networks", "year": "2019" }, { "authors": "Vladislav Kim; Nikolaos Adaloglou; Marc Osterland; M Flavio; Paula A Marin Morelli; Zapata", "journal": "bioRxiv", "ref_id": "b21", "title": "Selfsupervision advances morphological profiling by unlocking powerful image representations", "year": "2023" }, { "authors": "Suhyeon Lee; Hongje Seong; Seongwon Lee; Euntai Kim", "journal": "", "ref_id": "b22", "title": "Wildnet: Learning domain generalized semantic segmentation from the wild", "year": "2022" }, { "authors": "Xiaofeng Liu; Fangxu Xing; Chao Yang; Georges El Fakhri; Jonghye Woo", "journal": "International Conference on Medical Image Computing and Computer-Assisted Intervention", "ref_id": "b23", "title": "Adapting off-the-shelf source segmenter for target medical image segmentation. 
Medical image computing and computer-assisted intervention : MICCAI", "year": "2021" }, { "authors": "Christos Matsoukas; Johan Fredin Haslum; Magnus Söderberg; Kevin Smith", "journal": "", "ref_id": "b24", "title": "Pretrained vits yield versatile representations for medical images", "year": "2023" }, { "authors": "Christos Matsoukas; Johan Fredin Haslum; Moein Sorkhei; Magnus Söderberg; Kevin Smith", "journal": "", "ref_id": "b25", "title": "What makes transfer learning work for medical images: feature reuse & other factors", "year": "2022" }, { "authors": "Nikita Moshkov; Michael Bornholdt; Santiago Benoit; Matthew Smith; Claire Mcquin; Allen Goodman; Rebecca A Senft; Yu Han; Mehrtash Babadi; Peter Horvath; Beth A Cimini; Anne E Carpenter; Shantanu Singh; Juan C Caicedo", "journal": "bioRxiv", "ref_id": "b26", "title": "Learning representations for image-based profiling of perturbations", "year": "2022" }, { "authors": "Jo Nyffeler; Clinton Willis; Felix R Harris; M J Foster; Bryant Chambers; Megan Culbreth; Richard E Brockway; Sarah Davidson-Fritz; Daniel Dawson; Imran Shah; Katie Paul Friedman; Dan Chang; Logan J Everett; John F Wambaugh; Grace Patlewicz; Joshua A Harrill", "journal": "Toxicology and Applied Pharmacology", "ref_id": "b27", "title": "Application of cell painting for chemical hazard evaluation in support of screening-level chemical assessments", "year": "2023" }, { "authors": "Xingang Pan; Ping Luo; Jianping Shi; Xiaoou Tang", "journal": "", "ref_id": "b28", "title": "Two at once: Enhancing learning and generalization capacities via ibn-net", "year": "2018" }, { "authors": "Xingchao Peng; Qinxun Bai; Xide Xia; Zijun Huang; Kate Saenko; Bo Wang", "journal": "", "ref_id": "b29", "title": "Moment matching for multi-source domain adaptation", "year": "2018" }, { "authors": "Kuniaki Saito; Donghyun Kim; Stan Sclaroff; Kate Saenko", "journal": "Advances in neural information processing systems", "ref_id": "b30", "title": "Universal domain adaptation through self supervision", "year": "2020" }, { "authors": "Walter M Christopher J Schulze; Marcos H Bray; Joshua Woerhmann; R Stuart; Roger G Scott Lokey; Linington", "journal": "Chemistry & biology", "ref_id": "b31", "title": "function-first\" lead discovery: mode of action profiling of natural product libraries using image-based screening", "year": "2013" }, { "authors": "Jaak Simm; Günter Klambauer; Adam Arany; Marvin Steijaert; Kurt Jörg; Emmanuel Wegner; Vladimir Gustin; Yolanda T Chupakhin; Jorge Chong; Peter Vialard; Buijnsters", "journal": "Cell chemical biology", "ref_id": "b32", "title": "Repurposing high-throughput image assays enables biological activity prediction for drug discovery", "year": "2018" }, { "authors": "Baochen Sun; Kate Saenko", "journal": "", "ref_id": "b33", "title": "Deep coral: Correlation alignment for deep domain adaptation", "year": "2016" }, { "authors": "Yu Sun; Xiaolong Wang; Zhuang Liu; John Miller; Alexei Efros; Moritz Hardt", "journal": "PMLR", "ref_id": "b34", "title": "Test-time training with selfsupervision for generalization under distribution shifts", "year": "2020" }, { "authors": "Maciej Sypetkowski; Morteza Rezanejad; Saber Saberian; Oren Kraus; John Urbanik; James Taylor; Ben Mabey; Mason Victors; Jason Yosinski; Alborz Rezazadeh Sereshkeh", "journal": "", "ref_id": "b35", "title": "Rxrx1: A dataset for evaluating experimental batch correction methods", "year": "2023" }, { "authors": "Hugo Touvron; Matthieu Cord; Matthijs Douze; Francisco Massa; Alexandre Sablayrolles; Herv'e 
J'egou", "journal": "", "ref_id": "b36", "title": "Training data-efficient image transformers & distillation through attention", "year": "2020" }, { "authors": "Maria Gregory P Way; Tsukasa Kost-Alimova; Shibue; Stanley William F Harrington; Federica Gill; Tim Piccioni; Hamdah Becker; William C Shafqat-Abbasi; Anne E Hahn; Carpenter", "journal": "Molecular biology of the cell", "ref_id": "b37", "title": "Predicting cell health phenotypes using image-based morphology profiling", "year": "2021" }, { "authors": "Xiangyu Yue; Zangwei Zheng; Shanghang Zhang; Yang Gao; Trevor Darrell; Kurt Keutzer; Alberto Sangiovanni Vincentelli", "journal": "", "ref_id": "b38", "title": "Prototypical cross-domain self-supervised learning for few-shot unsupervised domain adaptation", "year": "2021" }, { "authors": "Stephan Zheng; Yang Song; Thomas Leung; Ian J Goodfellow", "journal": "", "ref_id": "b39", "title": "Improving the robustness of deep neural networks via stability training", "year": "2016" } ]
[]
2023-11-30
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b4", "b9", "b10", "b11", "b12" ], "table_ref": [], "text": "Multivariate time series forecasting is a primary machine learning task in both scientific research and industrial applications [1,2]. The interactions and dependencies between many time series data govern how they evolve, and these can range from simple linear correlations to complex relationships such as the traffic flows underlying intelligent transportation systems [3][4][5][6] or physical forces affecting the trajectories of objects in space [7][8][9].\nAccurately predicting future values of the time series may require understanding their true relationships, which can provide valuable insights into the system represented by the time series. Recent studies aim to jointly infer these relationships and learn to forecast in an end-to-end manner, even without prior knowledge of the underlying graph [5,10]. However, inferring the graph from numerous time series data has a quadratic computational complexity, making it prohibitively expensive to scale to a large number of time signals.\nAnother important aspect of time series forecasting is the presence of non-stationary properties, such as seasonal effects, trends, and other structures that depend on the time index [11]. Such properties may need to be eliminated before modeling, and a recent line of work aims to incorporate trend and seasonality decomposition into the model architecture to simplify the prediction process [12,13].\nTherefore, it is natural to ask whether one can leverage deep neural networks to combine the strength of both worlds: 1) using a latent graph structure that aids in time series forecasting, with each signal represented as a node and the interactions between them as edges, and 2) using end-to-end training to model the time series by decomposing it into multiple levels, which enables separate modeling of different patterns at each level, and then combining them to make accurate predictions. Existing works have not addressed both of these strengths together in a unified framework, and this is precisely the research question we seek to address in our current study.\nTo address this, we propose the use of graph neural networks (GNN) and a self-attention mechanism that efficiently infers latent graph structures with a time complexity and memory usage of O(N log N), where N is the number of time series. We further incorporate hierarchical residual blocks to learn backcast and forecast outputs. These blocks operate across multiple inferred graphs, and the aggregated forecasts contribute to producing the final prediction. By implementing this approach, we have achieved a superior forecasting performance compared to baseline models, with an average enhancement of 23%. For an overview, this paper brings the following contributions:\n1. We introduce a novel approach that extends hierarchical signal decomposition, merging it with concurrent hierarchical latent graph learning. This is termed hierarchical joint graph learning and multivariate time series forecasting (HGMTS).\n2. Our method incorporates a sparse self-attention mechanism, which we establish as a good inductive bias when learning on latent graphs and addressing long sequence time series forecasting (LSTF) challenges.\n3. 
Through our experimental findings, it is evident that our proposed model outperforms traditional transformer networks in multivariate time series forecasting. The design not only sets a superior standard for direct multi-step forecasting but also establishes itself as a promising spatio-temporal GNN benchmark for subsequent studies bridging latent graph learning and time series forecasting." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b11", "b12", "b23", "b24", "b25", "b26", "b27", "b7", "b5", "b4", "b9" ], "table_ref": [], "text": "Until recently, deep learning methods for time series forecasting have primarily focused on utilizing recurrent neural networks (RNN) and their variants to develop a sequence-to-sequence prediction approach [14][15][16][17], which has shown remarkable outcomes. Despite significant progress, however, these methods are yet to achieve accurate predictions for long sequence time series forecasting due to challenges such as the accumulation of errors in many steps of unrolling, as well as vanishing gradients and memory limitations [18].\nSelf-attention based transformer models proposed recently for LSTF tasks have revolutionized time series prediction and attained remarkable success. In contrast to traditional RNN models, transformers have exhibited superior capability in capturing long-range temporal dependencies. Still, recent advancements in this domain, as illustrated by LongFormer [19], Reformer [20], Informer [21], AutoFormer [22], and ETSformer [23], have predominantly zeroed in on improving the efficiency of the self-attention mechanism, particularly for handling long input and output sequences. Concurrently, there has been a rise in the development of attention-free architectures, as seen in Oreshkin et al. [12] and Challu et al. [13], which present a computationally efficient alternative for modeling extensive input-output relationships by using deep stacks of fully connected layers. However, such models often overlook the intricate interactions between signals in multivariate time series data, tending to process each time series independently.\nSpatio-temporal graph neural networks (ST-GNNs) are a specific type of GNNs that are tailored to handle both time series data and their interactions. They have been used in a wide range of applications such as action recognition [24,25] and traffic forecasting [26][27][28]. These networks integrate sequential models for capturing temporal dependencies with GNNs employed to encapsulate spatial correlations among distinct nodes. However, a caveat with ST-GNNs is that they necessitate prior information regarding structural connectivity to depict the interrelations in time series data. This can be a limitation in cases where the structural information is not available.\nAccordingly, GNNs that include structure learning components have been developed to learn effective graph structures suitable for time series forecasting. Two such models, NRI [8] and GTS [6], calculate the probability of an edge between nodes using pairwise scores, resulting in a discrete adjacency matrix. Nonetheless, this approach can be computationally intensive with a growing number of nodes.\nIn contrast, MTGNN [5] and GDN [10] utilize a randomly initialized node embedding matrix to infer the latent graph structure. While this approach is less taxing on computational resources, it might compromise the accuracy of predictions." 
}, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "In this section, we detail our proposed method, HGMTS. The overarching framework and core operational principles of this approach can be viewed in Figures 1 and 2." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "Let $X \in \mathbb{R}^{N \times T \times M}$ represent a multivariate time series, where N signifies the count of signals originating from various sensors, T denotes the length of the sequence, and M represents the dimension of the signal input (usually M = 1). We depict this multivariate time series as a graph $G = \{V, E, A\}$, wherein the collection of nodes denoted by V corresponds to the sensors, the set E pertains to the edges, and A represents the adjacency matrix. Notably, the precise composition of E and A is not known initially; however, our model will acquire this knowledge through the learning process." }, { "figure_ref": [], "heading": "Latent Graph Structure Learning (L-GSL)", "publication_ref": [ "b28" ], "table_ref": [], "text": "We embrace the concept of self-attention (introduced by [29]) and employ the attention scores in the role of edge weights. The process of learning the adjacency matrix of the graph, denoted as $A \in \mathbb{R}^{N \times N}$, unfolds as follows:\n$$Q = HW^Q, \qquad K = HW^K, \qquad A = \mathrm{softmax}\left(\frac{QK^\top}{\sqrt{D}}\right) \tag{1}$$\nwhere $H \in \mathbb{R}^{N \times D}$ corresponds to the node embeddings, and $W^Q \in \mathbb{R}^{D \times D}$ and $W^K \in \mathbb{R}^{D \times D}$ are weight matrices that project H into the query Q and key K, respectively. The main limitation in estimating latent graph structures in Eq. (1) for a large value of N is the necessity to perform quadratic-time dot-product computations along with the utilization of $O(N^2)$ memory. In an effort to achieve a self-attention mechanism complexity of $O(N \log N)$, our approach involves identifying pivotal query nodes and their associated significant key nodes in a sequential manner." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Identifying pivotal query nodes", "publication_ref": [ "b29", "b30", "b18", "b20", "b20" ], "table_ref": [], "text": "For the purpose of determining which query nodes will establish connections with other nodes, our initial step involves evaluating the significance of queries. Recent studies [30,31,19,21] have highlighted the existence of sparsity in the distribution of self-attention probabilities. Drawing inspiration from these findings, we establish the importance of queries based on the Kullback-Leibler (KL) divergence between a uniform distribution and the attention probability distribution of query nodes.\nLet $q_i$ and $k_i$ represent the $i$-th rows of the matrices Q and K, respectively. For a given query node, $p(k_j \mid q_i) = \exp(q_i k_j^\top) / \sum_{\ell} \exp(q_i k_\ell^\top)$ denotes the attention probability of the $i$-th query towards the $j$-th key node. Then, $p(K \mid q_i) = [\, p(k_1 \mid q_i) \; \ldots \; p(k_N \mid q_i) \,]$ indicates the probability distribution of how the $i$-th query allocates its attention/weight across all nodes. In this context, $D_{\mathrm{KL}}\big(U, p(K \mid q_i)\big)$ quantifies the deviation of a query node's attention probabilities from a uniform distribution $U = \mathcal{U}\{1, N\}$. This divergence measurement serves as a metric for identifying significant query nodes; a higher KL divergence suggests that a query's attention is mainly directed towards particular key nodes, rather than being evenly distributed.
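As a rough illustration of this divergence-based scoring, the sketch below computes how far each query's attention distribution is from uniform and keeps the highest-scoring queries. It is a dense NumPy sketch under assumed shapes, not the exact implementation, and for clarity it scores each query against all key nodes rather than a reduced set.

```python
# Illustrative sketch of KL-based query scoring; Q and K are assumed (N, D) arrays.
import numpy as np

def query_importance(Q: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Return D_KL(uniform || p(K|q_i)) for every query node i."""
    N, D = Q.shape
    logits = Q @ K.T / np.sqrt(D)                 # (N, N) attention scores
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)     # p(K | q_i), row-wise softmax
    u = 1.0 / N                                   # uniform attention weight
    return np.sum(u * (np.log(u) - np.log(probs + 1e-12)), axis=1)

def select_pivotal_queries(Q: np.ndarray, K: np.ndarray, c: float = 5.0) -> np.ndarray:
    """Keep the top-n queries, with n on the order of c * log N."""
    n = max(1, int(c * np.log(Q.shape[0])))
    return np.argsort(-query_importance(Q, K))[:n]
```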
As a result, these query nodes are postulated to be suitable candidates for establishing sparse connections.\nThe traversal of all query nodes for this measurement, however, still entails a quadratic computational requirement. It is worth noting that a recent study demonstrated that the relative magnitudes of query importance remain unchanged even when the divergence metric is calculated using randomly sampled keys [21]. Building on this idea, we determine the importance of query nodes through the computation of $D_{\mathrm{KL}}\big(\bar{U}, p(\tilde{K} \mid q_i)\big)$ instead, where $\bar{U} = \mathcal{U}\{1, n\}$, $\tilde{K}$ represents a matrix containing n randomly sampled row vectors from K, and $n = \lfloor c \cdot \log N \rfloor$ denotes the number of random samples based on a constant sampling factor c (Figure 1a). Given this measurement of query importance, we select the top-n query nodes and denote this set as $\bar{Q}$ (Figure 1b)." }, { "figure_ref": [ "fig_0" ], "heading": "Identifying associated key nodes", "publication_ref": [], "table_ref": [], "text": "Using the selected set of n query nodes, our subsequent step involves identifying the corresponding key nodes to establish connections. In pursuit of this objective, we begin by computing the attention probabilities $p(K \mid q_i)$ of the $i$-th query across all key nodes; this procedure is reiterated for each of the n query nodes. Next, we choose the top-n key nodes for each query based on their attention scores (Figure 1c), and we designate this collection as $\bar{K}$. The ultimate adjacency matrix, adhering to the sparsity constraint, is defined by the equation:\n$$\bar{A} = \mathrm{softmax}\left(\frac{\bar{Q}\bar{K}^\top}{\sqrt{D}}\right) \tag{2}$$\nIn this equation, $\bar{Q}$ and $\bar{K}$ possess the same dimensions as Q and K, except that the row vectors corresponding to insignificant query and key nodes are replaced with zeros. To sum up, the complexity of all the necessary computations for evaluating the significance of a query node and determining which key nodes to establish connections with, considering the top-n chosen queries, amounts to $O(N \log N)$." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Hierarchical Signal Decomposition", "publication_ref": [ "b11" ], "table_ref": [], "text": "This section provides an overview of the proposed approach shown in Figure 2 and discusses the overall design principles. Our approach builds upon N-BEATS [12], enhancing its key elements significantly. Our main methodology comprises three primary elements: signal decomposition, latent graph structure learning, and constructing forecasts and backcasts in a hierarchical manner. Much like the N-BEATS approach, every block is trained to generate signals for both backcast and forecast outputs. Here, the backcast output is designed to be subtracted from the input of the subsequent block, whereas the forecasts are combined to produce the final prediction (Figure 2). These blocks are arranged in stacks, each focusing on a distinct spatial dependency through a unique set of graph structures." }, { "figure_ref": [], "heading": "Signal decomposition module", "publication_ref": [], "table_ref": [], "text": "Recent research has witnessed a surging interest in disentangling time series data into its trend and seasonal components. These components respectively represent the overall long-term pattern and the seasonal fluctuations within the time signals. However, when it comes to future time series, directly performing this decomposition becomes impractical due to the inherent uncertainty of the future.\nTo address this challenge, we propose the incorporation of a signal decomposition module within each block:
However, when it comes to future time series, directly performing this decomposition becomes impractical due to the inherent uncertainty of the future.\nTo address this challenge, we propose the incorporation of a signal decomposition module within \nX trend = AvgPool(Padding(X)), X seas = X -X trend(3)\nwhere X trend , X seas denote the trend and seasonal components respectively. We opt for the AvgPool(•) for the moving average, accompanied by the zero padding operation to maintain the original series length intact." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Message-passing module", "publication_ref": [ "b31" ], "table_ref": [], "text": "The message-passing module receives as input the past L time steps of both seasonal and trend outputs X seas t-L:t , X trend t-L:t ∈ R N ×L obtained from the signal decomposition. As the two components go through the same set of distinct parameterized network modules, their differentiation will be disregarded henceforth. At each time step t, the input consisting of N multivariate time series with L lags are transformed into embedding vectors H ∈ R N ×D using a multilayer perceptron (MLP). Each row vector h i in this matrix represents an individual node embedding. Subsequently, these node embeddings are employed in Eq.1 of the latent graph structure learning module to create a sparse adjacency matrix Ā. This matrix, in conjunction with the node embedding matrix, serves as the input for the message-passing neural network (Figure 2a). To be more specific, the r-th round of message passing in the GNN is executed using the following equations:\nh (0) i = f (x i,t-L:t )(4)\nm (r) ij = g(h (r) i -h (r) j )(5)\nĀ = L-GSL(H)(6)\nh (r+1) i = GRU(h (r) i , j∈N (i) āij • m (r) ij )(7)\nwhere h\n(r)\ni refers to the i-th node embedding after round r, and m (r) ij represents the message vector from node i to j. The interaction strength associated with the edge (i, j), denoted as āij , corresponds to the entry in Ā at the i-th row and j-th column. Both the encoding function f (•) and the message function g(•) are implemented as two-layer MLPs with ReLU nonlinearities. Finally, the node embeddings are updated using a GRU after aggregating all incoming messages through a weighted sum over the neighborhood N (i) for each node i. This sequence of operations is repeated separately for the seasonal and trend inputs, with no sharing of parameters (Figure 2a).\nTo enhance both the model's expressivity and its capacity for generalization, we employ a multimodule GNN framework [32]. More specifically, the next hidden state h (r+1) i is computed by blending two intermediate node states, h (r) i,1 and h (r) i,2 , through a linear combination defined as follows:\nh (r+1) i = β (r) i h (r) i,1 + (1 -β (r) i )h (r) i,2(8)\nwhere the two intermediate representations h (r) i,1 and h (r) i,2 are derived from Eq. ( 7) using two distinct GRUs. The value of the gating variable β (r) i is determined by another processing unit employing a gating function ξ g , which is a neural network producing a scalar output through a sigmoid activation." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Forecast and backcast module", "publication_ref": [], "table_ref": [], "text": "Following the completion of the last R round of message passing (3 rounds in total), the backcast x and forecast outputs ŷ are generated in this procedure. This is achieved by mapping the final node embeddings through separate two MLPs. 
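Before turning to the output heads, the update rule in Eqs. (4)–(8) can be summarized by the following PyTorch sketch of a single message-passing round. It is a reconstruction under stated assumptions rather than the authors' implementation: the gating network ξ_g is fed the current node state (its exact input is not specified in the text), and messages are formed densely over all node pairs for readability instead of only over the sparse neighbourhood given by Ā.

```python
import torch
import torch.nn as nn

class MessagePassingRound(nn.Module):
    """One gated message-passing round following Eqs. (4)-(8); a sketch, not released code."""

    def __init__(self, d):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))  # message function g(.)
        self.gru1 = nn.GRUCell(d, d)                                            # first node-update module
        self.gru2 = nn.GRUCell(d, d)                                            # second node-update module
        self.gate = nn.Sequential(nn.Linear(d, 1), nn.Sigmoid())                # xi_g producing beta_i

    def forward(self, H, A):
        # H: (N, d) node embeddings; A: (N, N) sparse attention weights from L-GSL.
        diff = H.unsqueeze(1) - H.unsqueeze(0)       # h_i - h_j for every pair, (N, N, d)
        m = self.msg(diff)                           # messages m_ij, Eq. (5)
        agg = (A.unsqueeze(-1) * m).sum(dim=1)       # weighted sum over neighbours, Eq. (7)
        h1 = self.gru1(agg, H)                       # two intermediate node states
        h2 = self.gru2(agg, H)
        beta = self.gate(H)                          # per-node gate, Eq. (8)
        return beta * h1 + (1 - beta) * h2
```

After R such rounds (R = 3 in the paper), the resulting embeddings for the seasonal and trend pathways are passed to the two output MLPs mentioned above.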
These MLPs are responsible for handling the generation of backcast and forecast outputs individually (Figure 2a). It is important to note that the last layer of these MLPs is designed as a linear layer. This process of generating backcast and forecast outputs is applied to both the seasonal and trend pathways, and the ultimate backcast and forecast outputs are obtained by summing up the respective outputs from the seasonal and trend components (Figure 2a):\nxseas i,t-L:t = ϕ seas (h (R) i,seas ) xtrend i,t-L:t = ϕ trend (h (R) i,trend ) xi,t-L:t = xseas i,t-L:t + xtrend i,t-L:t ŷseas i,t+1:t+K = ψ seas (h (R) i,seas )(9)\nŷtrend i,t+1:t+K = ψ trend (h (R) i,trend )(10)\nŷi,t+1:t+K = ŷseas i,t+1:t+K + ŷtrend i,t+1:t+K (11) Here, ϕ □ and ψ □ represent two-layer MLPs designed to acquire the predictive decomposition of the partial backcast xi,t-L:t of the preceding L time steps, and the forecast ŷi,t+1:t+K of the subsequent K time steps. These MLPs operate on components denoted as □, which can be either the seasonal or trend aspects. Note that the indexing related to block or stack levels has been excluded for clarity. The resulting global forecast is constructed by summing the outputs of all blocks (Figure 2b-c)." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "We first provide an overview of the datasets (Table 1), evaluation metrics, and baselines employed to quantitatively assess our model's performance. The main results are summarized in Table 2, demonstrating the competitive predictive performance of our approach in comparison to existing works. We then elaborate on the specifics of our training and evaluation setups followed by detailing the ablation studies." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b32", "b20" ], "table_ref": [], "text": "Our experimentation extensively covers six real-world benchmark datasets. Conforming to the standard protocol [33,21], the split of all datasets into training, validation, and test sets has been conducted chronologically, following a split ratio of 60:20:20 for the ETTm 2 dataset and a split ratio of 70:10:20 for the remaining datasets.\n• ETTm 2 (Electricity Transformer Temperature): This dataset encompasses data obtained from electricity transformers, featuring load and oil temperatures recorded every 15 minutes during the period spanning from July 2016 to July 2018.\n• ECL (Electricity Consuming Load): The ECL dataset compiles hourly electricity consumption (in Kwh) data from 321 customers, spanning the years 2012 to 2014.\n• Exchange: This dataset aggregates daily exchange rates of eight different countries relative to the US dollar. The data spans from 1990 to 2016.\n• Traffic: The Traffic dataset is a collection of road occupancy rates from 862 sensors situated along San Francisco Bay area freeways. These rates are recorded every hour, spanning from January 2015 to December 2016.\n• Weather: This dataset comprises 21 meteorological measurements, including air temperature and humidity. These measurements are recorded every 10 minutes throughout the entirety of the year 2020 in Germany. • ILI (Influenza-Like Illness): This dataset provides a record of weekly influenza-like illness (ILI) patients and the total patient count, sourced from the Centers for Disease Control and Prevention of the US. The data covers the extensive period from 2002 to 2021. It represents the ratio of ILI patients versus the total count for each week." 
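For reproducibility, the chronological splitting and fixed-length windowing described above amount to a small amount of bookkeeping; the following sketch illustrates one plausible implementation (the stride of one step and the exact boundary handling are our assumptions, not details given in the paper).

```python
import numpy as np

def chronological_split(series, ratios=(0.7, 0.1, 0.2)):
    """Split a (T, N) array into train/val/test blocks in time order (70:10:20 by default)."""
    T = series.shape[0]
    t1 = int(T * ratios[0])
    t2 = int(T * (ratios[0] + ratios[1]))
    return series[:t1], series[t1:t2], series[t2:]

def sliding_windows(series, L=96, K=96):
    """Yield (input, target) pairs: L past steps as input, the next K steps as the target."""
    T = series.shape[0]
    for t in range(L, T - K + 1):
        yield series[t - L:t], series[t:t + K]

# Example: train, val, test = chronological_split(data), then
# windows = list(sliding_windows(train, L=96, K=336)) for the input-96-predict-336 setting.
```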
}, { "figure_ref": [], "heading": "Evaluation metrics", "publication_ref": [], "table_ref": [], "text": "We evaluate the effectiveness of our approach by measuring its accuracy using the mean squared error (MSE) and mean absolute error (MAE) metrics. These evaluations are conducted for various prediction horizon lengths K ∈ {96, 192, 336, 720} given a fixed input length L = 96, except for ILI where L = 36:\nMSE = 1 N K N i=1 t+K τ =t (y i,τ -ŷi,τ ) 2 , MAE = 1 N K N i=1 t+K τ =t |y i,τ -ŷi,τ |(12)" }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b11", "b32", "b20", "b19", "b30", "b33", "b34" ], "table_ref": [], "text": "We evaluate our proposed model by comparing it with seven baseline models. These include: (1) N-BEATS [12], which aligns with the external structure of our model, (2) Autoformer [33], (3) Informer [21], (4) Reformer [20], (5) LogTrans [31] -latest transformer-based models. Additionally, we compare with two conventional RNN-based models: (6) LSTNet [34] and ( 7) LSTM [35]." }, { "figure_ref": [], "heading": "Hyperparameters", "publication_ref": [], "table_ref": [], "text": "Our model is trained using the ADAM optimizer, starting with a learning rate of 10 -4 that gets reduced by half every two epochs. We employ early stopping during training, stopping the process if there is no improvement after 10 epochs. The training is carried out with a batch size of 32. We have configured our model with 3 stacks, each containing 1 block. All tests are conducted three times, making use of the PyTorch framework, and are executed on a single NVIDIA RTX 3090 with 24GB GPU.\n5 Experimental Results" }, { "figure_ref": [], "heading": "Multivariate time series forecasting", "publication_ref": [], "table_ref": [], "text": "In the multivariate setting, our proposed model, HGMTS, consistently achieves state-of-the-art performance across all benchmark datasets and prediction length configurations (Table 2). Notably, under the input-96-predict-192 setting, HGMTS demonstrates significant improvements over previous state-of-the-art results, with a 34% (0.273→0.180) reduction in MSE for ETT, 19% (0.180→0.146) reduction for ECL, 53% (0.225→0.105) reduction for Exchange, 5% (0.409→0.389) reduction for Traffic, and 10% (0.229→0.207) reduction for Weather. In the case of the input-36-predict-60 setting for ILI, HGMTS achieves 17% (2.547→2.118) reduction in MSE. Overall, HGMTS delivers an average MSE reduction of 23% across these settings. It is particularly striking how HGMTS drastically improves predictions for the Exchange dataset, where it records an average MSE reduction of 52% for all prediction lengths. Moreover, HGMTS stands out for its outstanding long-term stability, an essential attribute for real-world applications." }, { "figure_ref": [], "heading": "Effect of sparsity in graphs on forecasting", "publication_ref": [], "table_ref": [], "text": "Within the HGMTS model framework, a key hyperparameter is the sampling factor in L-GSL. This factor determines how many query nodes are selected and subsequently linked to key nodes. For the sake of simplicity, we ensure that the number of chosen query and key nodes remains the same. We then measure the sparsity of the latent graphs by computing the proportion of selected pivotal query or key nodes relative to the total time series count. This proportion is denoted as γ = ⌊c • log N ⌋/N and acts as an indicator of the sparsity in building these latent graphs. 
To understand the impact of sparsity in the learned graphs, we modify γ values between 0.2 and 0.7 and then document the findings from the multivariate forecasting studies. As detailed in Table 2, there is a consistent trend: all the graphs lean towards sparse interactions (γ ≤ 0.5), targeting optimal predictive outcomes in LSTF tasks. Additionally, different benchmark datasets exhibit unique preferences regarding the optimal sparsity level for predictive performance, as displayed in Table 2." }, { "figure_ref": [ "fig_1" ], "heading": "Ablation studies", "publication_ref": [ "b7" ], "table_ref": [ "tab_1", "tab_1" ], "text": "We posit that the strengths of the HGMTS architecture stem from its ability to hierarchically model the interplay between time series, particularly in the realms of trend and seasonality components. To delve deeper into this proposition, we present a series of control models for a comparative analysis:\n• HGMTS 1 : The model as showcased in Figure 2.\n• HGMTS 2 : A model that has shared latent graphs between trend and seasonality channels, but not across different blocks and stacks.\n• HGMTS 3 : A model where latent graphs are shared throughout all blocks and stacks but remain distinct between trend and seasonality channels.\n• HGMTS 4 : This model omits the L-GSL and MPNN modules.\n• HGMTS 5 : A model focusing solely on either the trend or seasonality channel, essentially lacking the signal decomposition module.\n• HGMTS 6 : A model that has used a single GRU module in Eq (8).\nUnder the same multivariate setting, the evaluation metrics for each control model, averaged over all benchmark datasets excluding ILI, are detailed in Table 4. The HGMTS 4 , which forgoes the L-GSL and MPNN modules, experiences a noticeable average MSE surge of 30% (0.258→0.336) across all horizons. This rise is the most significant among all controls, indicating that capturing interdependencies between multivariate signals is vital in our suggested model. HGMTS 5 , which emphasizes solely on a single channel between trend and seasonality, registers the second most pronounced MSE growth (18%: 0.258→0.305), suggesting that signal decomposition is also instrumental in LSTF tasks. Sharing the latent graphs -whether between the trend and seasonality pathways (as in HGMTS 2 ) or among blocks (as in HGMTS 3 ) -does elevate the average MSE, but the rise is modest when compared with the first two control models. Additionally, our findings highlight that incorporating multiple node update mechanisms in MPNN, as seen in HGMTS 6 , brings about a slight enhancement in forecasting precision.\nThe information presented in Table 4 robustly supports the idea that best performance is achieved by integrating both suggested components: the latent graph structure and hierarchical signal decomposition. This emphasizes their synergistic role in enhancing the accuracy of long sequence time series predictions. Furthermore, it is confirmed that crafting distinct latent associations between time series hierarchically, spanning both trend and seasonal channels, is instrumental in attaining improved prediction outcomes." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this paper, we delved into the challenge of long-term multivariate time series forecasting, an area that has seen notable progress recently. However, the intricate temporal patterns often impede models from effectively learning reliable dependencies. 
In response, we introduce HGMTS, a spatio-temporal multivariate time series forecasting model that incorporates a signal decomposition module and employs a latent graph structure learning as intrinsic operators. This unique approach allows for the hierarchical aggregation of long-term trend and seasonal information from intermediate predictions. Furthermore, we adopt a multi-module message-passing framework to enhance our model's capacity to capture diverse time series data from a range of heterogeneous sensors. This approach distinctly sets us apart from previous neural forecasting models. Notably, HGMTS naturally achieves a computational complexity of O(N log N ) and consistently delivers state-of-the-art performance across a wide array of real-world datasets.\nLearning a latent graph typically poses considerable challenges. Even though our model leverages the top-k pooling method to infer the latent graph, there are many other deep learning techniques that could be investigated in upcoming studies to uncover hidden structural patterns. Enhancements related to both representation capacity and computational efficiency might expand its broader adoption. " }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by the National Research Foundation of Korea (NRF) grant (No. NRF-2021R1F1A1045390), the Brain Convergence Research Program (No. NRF-2021M3E5D2A01023887), the Bio & Medical Technology Development Program (No. RS-2023-00226494) of the National Research Foundation (NRF), the Institute of Information & communications Technology Planning & Evaluation (IITP) grant (No.2020-0-01373, Artificial Intelligence Graduate School Program (Hanyang University)) funded by the Korean government (MSIT), the Technology Innovation Program (20013726, Development of Industrial Intelligent Technology for Manufacturing, Process, and Logistics) funded By the Ministry of Trade, Industry & Energy (MOTIE, Korea), and in part by Samsung Electronics Co., Ltd." }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "" } ]
Multivariate time series are prevalent in many scientific and industrial domains. Modeling multivariate signals is challenging due to their long-range temporal dependencies and intricate interactions, both direct and indirect. To confront these complexities, we introduce a method that represents multivariate signals as nodes of a graph whose edges indicate interdependencies between them. Specifically, we leverage graph neural networks (GNNs) and attention mechanisms to efficiently learn the underlying relationships within the time series data. Moreover, we suggest employing hierarchical signal decompositions running over the graphs to capture multiple spatial dependencies. The effectiveness of the proposed model is evaluated across various real-world benchmark datasets designed for long-term forecasting tasks. The results consistently showcase the superiority of our model, which achieves an average 23% reduction in mean squared error (MSE) compared to existing models.
Hierarchical Joint Graph Learning and Multivariate Time Series Forecasting
[ { "figure_caption": "Figure 1 :1Figure 1: Overview of the latent graph structure learning (L-GSL). (a) Key nodes chosen at random (depicted as gray circles) are used to measure the significance of a query node (shown as a blue circle). (b) Top-n query nodes (blue circles) are picked according to the importance distribution across all query nodes. (c) Key nodes, colored in orange, that hold sufficient relevance to be linked with the chosen query node.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Overview of the proposed HGMTS model architecture. (a) The hierarchical residual block is marked by signal decomposition and GNN-centric L-GSL modules. (b) The combination of multiple blocks forms a stack, (c) culminating in the entire model design to ultimately produce a global forecasting output.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Ablation study overview. Displayed are four distinct model architectures explored to understand the impact of specific components on overall LSTF performance.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "The HGMTS performance evaluated under various selections of the graph sparsity hyperparameter γ. The forecasting setup remains consistent with what is presented in Table 2. .226 0.130 0.229 0.133 0.234 0.138 0.241 0.145 0.249 0.156 0.263 192 0.146 0.249 0.149 0.253 0.152 0.257 0.158 0.266 0.164 0.274 0.169 0.280 336 0.175 0.277 0.177 0.280 0.181 0.285 0.189 0.294 0.193 0.301 0.198 0.307 720 0.238 0.332 0.240 0.335 0.243 0.338 0.247 0.345 0.252 0.351 0.256 0.359 .264 0.374 0.268 0.377 0.272 0.381 0.276 0.386 0.280 0.391 0.287 192 0.389 0.281 0.394 0.286 0.398 0.291 0.405 0.299 0.412 0.308 0.423 0.316 336 0.439 0.302 0.445 0.309 0.451 0.316 0.460 0.327 0.463 0.332 0.466 0.338 720 0.577 0.386 0.581 0.392 0.584 0.395 0.589 0.401 0.594 0.407 0.598 0.414 Weather 96 0.147 0.186 0.146 0.185 0.147 0.187 0.149 0.190 0.150 0.191 0.152 0.193 192 0.208 0.238 0.207 0.236 0.209 0.238 0.212 0.240 0.215 0.244 0.218 0.248 336 0.270 0.293 0.268 0.291 0.269 0.292 0.271 0.294 0.274 0.298 0.277 0.301 720 0.350 0.354 0.348 0.351 0.349 0.353 0.352 0.356 0.354 0.359 0.357 0.364", "figure_data": "Sparsity (γ)0.20.30.40.50.60.7MetricMSE MAE MSE MAE MSE MAE MSE MAE MSE MAE MSE MAE96 0.151 0.240 0.149 0.238 0.146 0.234 0.145 0.232--0.146 0.235ETTm 2192 0.187 0.324 0.183 0.315 0.181 0.313 0.180 0.311 336 0.239 0.364 0.234 0.358 0.230 0.353 0.227 0.349----1.182 0.314 0.229 0.352720 0.292 0.418 0.286 0.407 0.283 0.402 0.280 0.398--0.282 0.401ECL 96 0.128 0Exchange 96 -192 -336 ----0.056 0.173 0.055 0.172 0.057 0.174 0.058 0.176 0.061 0.180 0.107 0.244 0.105 0.242 0.106 0.244 0.107 0.246 0.109 0.249 0.184 0.336 0.182 0.334 0.183 0.336 0.184 0.338 0.187 0.341720--0.563 0.613 0.560 0.609 0.562 0.611 0.564 0.613 0.567 0.617Traffic 96 0.371 0ILI 24 1.832 0.845 1.830 0.842 1.828 0.840 1.827 0.839 36 2.041 0.911 2.037 0.906 2.036 0.905 2.034 0.903 48 2.113 0.926 2.108 0.922 2.105 0.918 2.102 0.915------1.829 0.842 2.035 0.905 2.104 0.91860 2.123 0.964 2.122 0.962 2.120 0.959 2.118 0.956--2.119 0.959", "figure_id": "tab_0", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Empirical evaluation of long sequence time series forecasts for HGMTS. 
MAE and MSE are averaged over three runs and five datasets, with the best result highlighted in bold and the second best in blue.", "figure_data": "Metric  Horizon  HGMTS1  HGMTS2  HGMTS3  HGMTS4  HGMTS5  HGMTS6
A. MSE  96   0.168  0.171  0.170  0.195  0.183  0.169
A. MSE  192  0.205  0.209  0.208  0.261  0.232  0.206
A. MSE  336  0.258  0.263  0.271  0.344  0.309  0.264
A. MSE  720  0.401  0.412  0.428  0.545  0.496  0.414
A. MAE  96   0.214  0.219  0.218  0.237  0.229  0.216
A. MAE  192  0.264  0.268  0.266  0.296  0.286  0.265
A. MAE  336  0.311  0.316  0.328  0.349  0.337  0.318
A. MAE  720  0.415  0.418  0.421  0.464  0.435  0.420", "figure_id": "tab_1", "figure_label": "4", "figure_type": "table" } ]
Juhyeon Kim; Hyungeun Lee; Seungwon Yu; Ung Hwang; Wooyul Jung; Miseon Park; Kijung Yoon
[ { "authors": "Fotios Petropoulos; Daniele Apiletti; Vassilios Assimakopoulos; Mohamed Zied Babai; Devon K Barrow; Souhaib Ben Taieb; Christoph Bergmeir; Ricardo J Bessa; Jakub Bijak; John E Boylan", "journal": "International Journal of Forecasting", "ref_id": "b0", "title": "Forecasting: theory and practice", "year": "2022" }, { "authors": "Remi Lam; Alvaro Sanchez-Gonzalez; Matthew Willson; Peter Wirnsberger; Meire Fortunato; Alexander Pritzel; Suman Ravuri; Timo Ewalds; Ferran Alet; Zach Eaton-Rosen", "journal": "", "ref_id": "b1", "title": "Graphcast: Learning skillful medium-range global weather forecasting", "year": "2022" }, { "authors": "Austin Derrow-Pinion; Jennifer She; David Wong; Oliver Lange; Todd Hester; Luis Perez; Marc Nunkesser; Seongjae Lee; Xueying Guo; Brett Wiltshire", "journal": "", "ref_id": "b2", "title": "Eta prediction with graph neural networks in google maps", "year": "2021" }, { "authors": "Saeed Rahmani; Asiye Baghbani; Nizar Bouguila; Zachary Patterson", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b3", "title": "Graph neural networks for intelligent transportation systems: A survey", "year": "2023" }, { "authors": "Zonghan Wu; Shirui Pan; Guodong Long; Jing Jiang; Xiaojun Chang; Chengqi Zhang", "journal": "", "ref_id": "b4", "title": "Connecting the dots: Multivariate time series forecasting with graph neural networks", "year": "2020" }, { "authors": "Chao Shang; Jie Chen; Jinbo Bi", "journal": "", "ref_id": "b5", "title": "Discrete graph structure learning for forecasting multiple time series", "year": "2021" }, { "authors": "Peter Battaglia; Razvan Pascanu; Matthew Lai; Danilo Jimenez Rezende", "journal": "", "ref_id": "b6", "title": "Interaction networks for learning about objects, relations and physics", "year": "2016" }, { "authors": "Thomas Kipf; Ethan Fetaya; Kuan-Chieh Wang; Max Welling; Richard Zemel", "journal": "PMLR", "ref_id": "b7", "title": "Neural relational inference for interacting systems", "year": "2018" }, { "authors": "Alvaro Sanchez-Gonzalez; Jonathan Godwin; Tobias Pfaff; Rex Ying; Jure Leskovec; Peter Battaglia", "journal": "PMLR", "ref_id": "b8", "title": "Learning to simulate complex physics with graph networks", "year": "2020" }, { "authors": "Ailin Deng; Bryan Hooi", "journal": "", "ref_id": "b9", "title": "Graph neural network-based anomaly detection in multivariate time series", "year": "2021" }, { "authors": "William S Robert B Cleveland; Jean E Cleveland; Irma Mcrae; Terpenning", "journal": "Journal of Official Statistics", "ref_id": "b10", "title": "Stl: A seasonal-trend decomposition", "year": "1990" }, { "authors": "Boris N Oreshkin; Dmitri Carpov; Nicolas Chapados; Yoshua Bengio", "journal": "", "ref_id": "b11", "title": "N-beats: Neural basis expansion analysis for interpretable time series forecasting", "year": "2020" }, { "authors": "Cristian Challu; Kin G Olivares; Boris N Oreshkin; Federico Garza Ramirez; Max Mergenthaler Canseco; Artur Dubrawski", "journal": "", "ref_id": "b12", "title": "Nhits: Neural hierarchical interpolation for time series forecasting", "year": "2023" }, { "authors": "Rose Yu; Stephan Zheng; Anima Anandkumar; Yisong Yue", "journal": "", "ref_id": "b13", "title": "Long-term forecasting using tensor-train rnns", "year": "2017" }, { "authors": "Dongjin Yao Qin; Haifeng Song; Wei Chen; Guofei Cheng; Garrison Jiang; Cottrell", "journal": "", "ref_id": "b14", "title": "A dual-stage attention-based recurrent neural network for time series prediction", "year": 
"2017" }, { "authors": "Ruofeng Wen; Kari Torkkola; Balakrishnan Narayanaswamy; Dhruv Madeka", "journal": "", "ref_id": "b15", "title": "A multihorizon quantile recurrent forecaster", "year": "2017" }, { "authors": "David Salinas; Valentin Flunkert; Jan Gasthaus; Tim Januschowski", "journal": "International Journal of Forecasting", "ref_id": "b16", "title": "Deepar: Probabilistic forecasting with autoregressive recurrent networks", "year": "2020" }, { "authors": "Ilya Sutskever; Oriol Vinyals; Quoc V Le", "journal": "Advances in neural information processing systems", "ref_id": "b17", "title": "Sequence to sequence learning with neural networks", "year": "2014" }, { "authors": "Iz Beltagy; Matthew E Peters; Arman Cohan", "journal": "", "ref_id": "b18", "title": "Longformer: The long-document transformer", "year": "2020" }, { "authors": "Nikita Kitaev; Lukasz Kaiser; Anselm Levskaya", "journal": "", "ref_id": "b19", "title": "Reformer: The efficient transformer", "year": "2020" }, { "authors": "Haoyi Zhou; Shanghang Zhang; Jieqi Peng; Shuai Zhang; Jianxin Li; Hui Xiong; Wancai Zhang", "journal": "", "ref_id": "b20", "title": "Informer: Beyond efficient transformer for long sequence time-series forecasting", "year": "2021" }, { "authors": "Minghao Chen; Houwen Peng; Jianlong Fu; Haibin Ling", "journal": "", "ref_id": "b21", "title": "Autoformer: Searching transformers for visual recognition", "year": "2021" }, { "authors": "Gerald Woo; Chenghao Liu; Doyen Sahoo; Akshat Kumar; Steven Hoi", "journal": "", "ref_id": "b22", "title": "Etsformer: Exponential smoothing transformers for time-series forecasting", "year": "2022" }, { "authors": "Sijie Yan; Yuanjun Xiong; Dahua Lin", "journal": "", "ref_id": "b23", "title": "Spatial temporal graph convolutional networks for skeleton-based action recognition", "year": "2018" }, { "authors": "Zhen Huang; Xu Shen; Xinmei Tian; Houqiang Li; Jianqiang Huang; Xian-Sheng Hua", "journal": "", "ref_id": "b24", "title": "Spatio-temporal inception graph convolutional networks for skeleton-based action recognition", "year": "2020" }, { "authors": "Yaguang Li; Rose Yu; Cyrus Shahabi; Yan Liu", "journal": "", "ref_id": "b25", "title": "Diffusion convolutional recurrent neural network: Data-driven traffic forecasting", "year": "2017" }, { "authors": "Youngjoo Seo; Michaël Defferrard; Pierre Vandergheynst; Xavier Bresson", "journal": "Springer", "ref_id": "b26", "title": "Structured sequence modeling with graph convolutional recurrent networks", "year": "2018" }, { "authors": "Ling Zhao; Yujiao Song; Chao Zhang; Yu Liu; Pu Wang; Tao Lin; Min Deng; Haifeng Li", "journal": "IEEE transactions on intelligent transportation systems", "ref_id": "b27", "title": "T-gcn: A temporal graph convolutional network for traffic prediction", "year": "2019" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b28", "title": "Attention is all you need", "year": "2017" }, { "authors": "Rewon Child; Scott Gray; Alec Radford; Ilya Sutskever", "journal": "", "ref_id": "b29", "title": "Generating long sequences with sparse transformers", "year": "2019" }, { "authors": "Shiyang Li; Xiaoyong Jin; Xiyou Yao Xuan; Wenhu Zhou; Yu-Xiang Chen; Xifeng Wang; Yan", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b30", "title": "Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting", "year": "2019" }, { "authors": 
"Hyungeun Lee; Kijung Yoon", "journal": "Transactions on Machine Learning Research", "ref_id": "b31", "title": "Towards better generalization with flexible representation of multi-module graph neural networks", "year": "2023" }, { "authors": "Haixu Wu; Jiehui Xu; Jianmin Wang; Mingsheng Long", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b32", "title": "Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting", "year": "2021" }, { "authors": "Guokun Lai; Wei-Cheng Chang; Yiming Yang; Hanxiao Liu", "journal": "", "ref_id": "b33", "title": "Modeling long-and short-term temporal patterns with deep neural networks", "year": "2018" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural Computation", "ref_id": "b34", "title": "Long short-term memory", "year": "1997" } ]
[ { "formula_coordinates": [ 3, 199.44, 552.77, 305.23, 25.19 ], "formula_id": "formula_0", "formula_text": "Q = HW Q , K = HW K , A = softmax QK T √ D(1)" }, { "formula_coordinates": [ 4, 122.93, 133.61, 146.35, 12.72 ], "formula_id": "formula_1", "formula_text": "j |q i ) = exp(q i k ⊤ j )/ ℓ exp(q i k ⊤ ℓ )" }, { "formula_coordinates": [ 4, 260.33, 417.82, 244.34, 26.16 ], "formula_id": "formula_2", "formula_text": "Ā = softmax Q KT √ D(2)" }, { "formula_coordinates": [ 5, 187.59, 393.13, 317.08, 10.53 ], "formula_id": "formula_3", "formula_text": "X trend = AvgPool(Padding(X)), X seas = X -X trend(3)" }, { "formula_coordinates": [ 5, 144.07, 602.83, 123, 14.07 ], "formula_id": "formula_4", "formula_text": "h (0) i = f (x i,t-L:t )(4)" }, { "formula_coordinates": [ 5, 140.94, 620.49, 126.12, 14.07 ], "formula_id": "formula_5", "formula_text": "m (r) ij = g(h (r) i -h (r) j )(5)" }, { "formula_coordinates": [ 5, 322.18, 603.14, 177.05, 11.47 ], "formula_id": "formula_6", "formula_text": "Ā = L-GSL(H)(6)" }, { "formula_coordinates": [ 5, 299.62, 620.19, 199.61, 14.28 ], "formula_id": "formula_7", "formula_text": "h (r+1) i = GRU(h (r) i , j∈N (i) āij • m (r) ij )(7)" }, { "formula_coordinates": [ 6, 223.51, 129.09, 281.16, 14.07 ], "formula_id": "formula_8", "formula_text": "h (r+1) i = β (r) i h (r) i,1 + (1 -β (r) i )h (r) i,2(8)" }, { "formula_coordinates": [ 6, 143.3, 307.67, 348.02, 48.77 ], "formula_id": "formula_9", "formula_text": "xseas i,t-L:t = ϕ seas (h (R) i,seas ) xtrend i,t-L:t = ϕ trend (h (R) i,trend ) xi,t-L:t = xseas i,t-L:t + xtrend i,t-L:t ŷseas i,t+1:t+K = ψ seas (h (R) i,seas )(9)" }, { "formula_coordinates": [ 6, 310.6, 326.19, 180.71, 14.07 ], "formula_id": "formula_10", "formula_text": "ŷtrend i,t+1:t+K = ψ trend (h (R) i,trend )(10)" }, { "formula_coordinates": [ 7, 152.47, 237.49, 352.2, 30.32 ], "formula_id": "formula_11", "formula_text": "MSE = 1 N K N i=1 t+K τ =t (y i,τ -ŷi,τ ) 2 , MAE = 1 N K N i=1 t+K τ =t |y i,τ -ŷi,τ |(12)" } ]
2024-02-18
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b7", "b22", "b23", "b24", "b27", "b2", "b35", "b32" ], "table_ref": [], "text": "Diffusion models (Ho et al., 2020;Song et al., 2021a,b) have recently advanced high-quality text-to-image (T2I) synthesis (Ramesh et al., 2022;Rombach et al., 2022;Saharia et al., 2022), leading to the exploration of text-to-video (T2V) generation. Earlier works train T2V diffusion models in pixel (Ho et al., 2022b;Singer et al., 2023;Ho et al., 2022a) or latent spaces (Blattmann et al., 2023;Zhou et al., 2022;Wang et al., 2023). Despite the promising results they yield, the heavy computational costs are unbearable. To reduce training * Equal contributions.\n† Corresponding Author." }, { "figure_ref": [ "fig_0" ], "heading": "Prompt GPT4Motion", "publication_ref": [ "b13", "b16", "b17", "b19", "b33", "b23" ], "table_ref": [], "text": "AnimateDiff ModelScope\nText2Video-Zero DirecT2V efforts, recent works have shifted towards trainingfree approaches, such as Text2Video-Zero (Khachatryan et al., 2023), which use pretrained T2I models to synthesize videos without additional training, aiming to lessen resource demands. However, these methods struggle with motion coherence. To overcome this, recent studies (Huang et al., 2023a;Hong et al., 2023;Lian et al., 2023;Lin et al., 2023) have harnessed the descriptive power of large lan-guage models (LLMs) (Ouyang et al., 2022;Wei et al., 2021), such as GPT-4 (OpenAI, 2023), to generate frame-by-frame descriptions and explicit spatiotemporal layouts, enhancing narrative continuity and motion coherence in video sequences generated from a single user prompt. Despite the enhanced video quality they achieve, maintaining motion coherence in the scenes of large motion shifts is still challenging. Motivated by them, we propose GPT4Motion, a training-free framework leveraging GPT-4's planning capability, the physical simulation strength of Blender1 , and the image generation ability of Stable Diffusion (Rombach et al., 2022) to enhance the quality of video synthesis. Given a user textual prompt, GPT4Motion first employs GPT-4 to produce Blender scripts that drive the creation of basic video scene elements, including edges and depth maps. These elements then serve as conditions for Stable Diffusion to generate the final video. This methodology ensures that the resulting video not only faithfully aligns with the textual prompt but also ensures motion coherence across all frames, as shown in Figure 1. The contributions of our work are summarized in the following.\n• We demonstrate GPT-4's ability to guide Blender in simulating physical motion scenes, highlighting LLMs' role in creating physicsbased videos.\n• We propose GPT4Motion, a training-free framework that employs scripts generated by GPT-4 for Blender simulations, allowing for the generation of temporally coherent videos through Stable Diffusion.\n• Experiments on three basic physical motion scenarios prove GPT4Motion's capability to generate high-quality videos with both motion coherency and entity consistency.\n2 Related Work" }, { "figure_ref": [], "heading": "Text-to-Video Generation", "publication_ref": [ "b23", "b3", "b12", "b24", "b7", "b0", "b16" ], "table_ref": [], "text": "Text-to-video (T2V) generation, aiming to create videos from textual descriptions, remains in its early stages despite significant advancements in text-to-image (T2I) synthesis (Rombach et al., 2022;Dhariwal and Nichol, 2021;James et al., 2023;Saharia et al., 2022). 
The introduction of diffusion models (Ho et al., 2020;Song et al., 2021a) has facilitated developments in T2V, yet challenges such as motion incoherence and entity inconsistency persist. Large language models (LLMs) like GPT-4 (OpenAI, 2023), PaLM (Anil et al., 2023), andBLOOM (Scao et al., 2022) have demonstrated their versatility across various multimodal tasks, suggesting their potential utility in T2V. Incorporating LLMs into T2V, innovations have emerged, such as narrative generation through Free-bloom (Huang et al., 2023a) and spatiotemporal layout creation in LVD (Lian et al., 2023), which guide the synthesis process. This paper introduces an innovative method that leverages the combined strengths of GPT-4 and Blender, addressing key challenges such as motion incoherence and physical accuracy. This represents a significant step forward, bridging the gap between textual descriptions and highquality video generation." }, { "figure_ref": [], "heading": "Blender in Deep Learning", "publication_ref": [ "b31", "b30" ], "table_ref": [], "text": "Blender is an open-source 3D creation suite that provides tools for modeling, animation, and rendering, facilitating the creation of detailed 3D scenes.\nIt is increasingly used in deep learning for synthetic data generation, as seen in the S2RDA benchmark (Tang and Jia, 2023) for image classification and in projects like 3D-GPT (Sun et al., 2023), which leverages Blender for procedural 3D modeling. However, Blender's potential in text-to-video (T2V) synthesis remains unexplored, mainly due to the requirements of much professional technical knowledge and complex manual procedures such as texturing, rigging, animation, lighting and compositing. Our approach, GPT4Motion, leverages GPT-4 to automate Blender scripting, offering a streamlined, user-friendly method for producing high-quality videos that are textually aligned and physically accurate in object motion, marking a significant advancement in T2V technology." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Task Formulation", "publication_ref": [ "b4", "b34" ], "table_ref": [], "text": "Given a user prompt about some basic physical motion scenario, we aim to generate a physically accurate video. Physical phenomena are often associated with the material of the object. We focus on simulating three common types of object materials encountered in daily life: 1) Rigid Objects, such as balls, which maintain their shapes when subjected to forces; 2) Cloth, such as flags, characterized by their softness and propensity to flutter; 3) Liquid, such as water, which exhibits continuous and deformable motions. Moreover, we give particular attention to several typical motion modes for these materials, including collisions (direct impacts between objects), wind effects (motion induced by air currents), and flow (continuously and easily move in one direction). Simulating these physical scenarios typically involves knowledge of Classical Mechanics (Goldstein et al., 2002), Fluid Mechanics (Kundu et al., 2015) and other physical knowledge.\nCurrent text-to-video diffusion models struggle to capture this complex physical knowledge through training, thereby failing to produce videos that adhere to physical principles.\nTo address these challenges, we propose a novel training-free text-to-video generation framework, named GPT4Motion, which is illustrated in Figure 2. 
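At a high level, the pipeline in Figure 2 can be summarized by the pseudocode below. All helper names (build_prompt_template, ask_gpt4, run_blender, sdxl_with_controlnets) are placeholders we introduce for illustration and do not correspond to released code; they stand in for the three stages of GPT-4 scripting, Blender simulation and rendering, and ControlNet-conditioned SDXL sampling.

```python
# Hypothetical top-level driver illustrating the three-stage GPT4Motion pipeline.

def build_prompt_template(user_prompt: str) -> str:
    raise NotImplementedError("encapsulated functions + external assets + instruction")

def ask_gpt4(prompt: str) -> str:
    raise NotImplementedError("returns a Blender Python script")

def run_blender(script: str, frames: int):
    raise NotImplementedError("physics simulation; returns per-frame edge and depth maps")

def sdxl_with_controlnets(prompt: str, edge, depth):
    raise NotImplementedError("Canny + depth ControlNets with summed residuals")

def gpt4motion(user_prompt: str, num_frames: int = 80):
    script = ask_gpt4(build_prompt_template(user_prompt))             # stage 1: GPT-4 -> Blender script
    edge_maps, depth_maps = run_blender(script, frames=num_frames)    # stage 2: simulate and render
    return [sdxl_with_controlnets(user_prompt, e, d)                  # stage 3: frame-wise synthesis
            for e, d in zip(edge_maps, depth_maps)]
```

The remainder of this section details each of these three stages.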
The advantage of our approach is that GPT-4's semantic understanding and code generation capabilities are leveraged to translate the user prompt into a Blender Python script. This script can drive Blender's built-in physics engine to simulate the corresponding physical scene. We then introduce ControlNet (Zhang and Agrawala, 2023), which takes as input the dynamic results of the Blender simulation and directs Stable Diffusion to generate each frame of the video. This framework ensures that the generated video is not only consistent with the user prompt, but also physically correct. In the next sections, we describe the details of our framework." }, { "figure_ref": [ "fig_2" ], "heading": "Blender Simulations via GPT-4", "publication_ref": [ "b30" ], "table_ref": [], "text": "GPT-4 is a large language model pre-trained on huge amounts of Internet data with great capability for semantic understanding and code generation. We have observed that while GPT-4 has a certain knowledge about the Blender Python API, it still struggles with generating Blender Python scripts based on user prompts. On the one hand, asking GPT-4 to create even a simple 3D model (like a basketball) directly in Blender seems to be an overwhelming task (Sun et al., 2023). On the other hand, because the Blender Python API has fewer resources and its API version is updated quickly, GPT-4 can easily misuse certain functions or make errors due to version differences. To address these issues, we propose the following schemes:\nLeveraging External 3D Models. Creating 3D models typically requires professional artists to manually craft them, spending substantial time sculpting details, painting fine texture maps, and optimizing the model topology, which GPT-4 cannot independently accomplish. Fortunately, there is a large amount of 3D models available on the Internet2 . Hence, we have collected common 3D objects from everyday life and can automatically load the 3D models via scripts corresponding to textual prompts.\nEncapsulating Blender Functions. Although GPT-4 possesses the necessary knowledge of the Blender Python API, writing a lengthy script to render an entire scene remains challenging. We note that for our target scenarios, Blender Python scripts typically consist of several fixed steps, including scene initialization, rendering, object creation and import, and physical effects. Thus, we guide GPT-4 to encapsulate these reusable functions (see the Appendix Section D). By doing so, we have greatly simplified the entire process from user prompts to rendering corresponding physical scenarios. These encapsulated functions can be broadly categorized into three types:\n• Scene initialization and rendering functions.\nThese functions are responsible for clearing the default initial scene and performing the rendering. In Blender, one can set up the simultaneous image outputs of depth, normal, edge, and segmentation for a video. We find that using edge and depth images yields good performance in our framework, so we render these edge and depth images for video generation.\n• Object creation and import functions. These functions offer the capability to create basic objects (such as viewpoints, floors, cubes, spheres, etc.) within a Blender scene. In addition to creating simple objects, we also provide import functions that allow users to bring external 3D models into Blender.\n• Physics effect functions. These functions encapsulate the basic physics and material effect settings within Blender. 
For instance, they can assign different physical types (such as rigid, cloth, or liquid) to objects, impart initial velocities and rotations to objects, or set up wind force effects.\nTranslating User Prompts into Physics. Figure 3 shows the general prompt template we design for GPT-4. It includes encapsulated Blender functions, external assets, and instruction. We define the dimensions of the virtual world in the template and provide information about the camera's position and viewpoint. Such information aids GPT-4 in better understanding the layout of the 3D space.\nUltimately, the user prompt becomes part of the instruction, directly guiding GPT-4 to generate the corresponding Blender Python script. Finally, with this script, Blender renders the edge and depth image sequences." }, { "figure_ref": [], "heading": "Video Synthesis with Physical Conditions", "publication_ref": [ "b20", "b23", "b34" ], "table_ref": [], "text": "Our goal is to generate a consistent and realistic video based on the user prompt and corresponding physical motion conditions provided by Blender.\nWe adopt Stable Diffusion XL (SDXL) (Podell et al., 2023), an upgraded version of Stable Diffusion (Rombach et al., 2022). We made the following modifications to SDXL.\nPhysics Motion Constraints. ControlNet (Zhang and Agrawala, 2023) is a network architecture that can control the image generation of a pretrained text-to-image diffusion model with additional conditions, such as edge or depth. However, a single ControlNet is limited to one type of condition. The generation of some physical motion videos requires the control of multiple conditions. For example, when generating a video of a basketball in free fall, its edges can accurately reflect its texture changes, but the edges cannot reflect 3D layout of the scene, resulting in the lack of realism in the video. On the other hand, the depth map of the scene helps address this problem but is unable to capture the texture changes of the basketball. Therefore, we leverage a combination of Canny-edge-based ControlNet and depth-based ControlNet to precisely control the generation of the video. Specifically, we add the intermediate results of the two ControlNets together to serve as the final conditions for SDXL.\nTemporal Consistency Constraint. To ensure temporal consistency across different frames of a video, we modify the self attention (SA) in the U-Net of SDXL into cross-frame attention (CFA). Specifically, the self attention in the U-Net uses linear projections W Q , W K , and W V to project the feature F i of the i-th frame (for simplicity, we ignore the time-step t) into\nQ i = W Q F i , K i = W K F i , and V i = W V F i ,\nand perform the self attention calculation:\nSA(Q i , K i , V i ) = Softmax Q i K T i √ d V i , (1\n)\nwhere d is a scaling factor. To obtain the crossframe attention, we concatenate the feature of the frame F i , i ̸ = 1, with the first frame F 1 for K and V , while keeping Q unchanged:\nK i,1 = W K [F 1 , αF i ], V i,1 = W V [F 1 , F i ],(2)\nand the cross-frame attention operation is:\nCF A(Q i , K i,1 , V i,1 ) = Softmax Q i K T i,1 √ d V i,1 ,\n(3) where [•, •] denotes the concatenation, and α ∈ [0, 1] is a hyperparameter. We find that increasing α improves the fidelity of the moving object but at the same time brings more flickering; on the contrary, decreasing α reduces the flickering but also decreases the fidelity of the moving object. 
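In code, the modified attention of Eqs. (2)–(3) amounts to concatenating the first frame's features into the key and value streams; the following single-head, unbatched PyTorch sketch illustrates the idea (in SDXL it would be applied per attention head inside each transformer block, which we omit here).

```python
import torch

def cross_frame_attention(F_i, F_1, W_Q, W_K, W_V, alpha):
    """Cross-frame attention of Eqs. (2)-(3): frame i attends to frame 1 and its (alpha-scaled) self.

    F_i, F_1: (L, C) token features of the current and first frame at one U-Net layer.
    W_Q, W_K, W_V: (C, C) projection matrices. Returns the attended features, shape (L, C).
    """
    d = W_Q.shape[1]
    Q = F_i @ W_Q                                       # queries come from the current frame only
    K = torch.cat([F_1, alpha * F_i], dim=0) @ W_K      # keys: first frame + alpha-scaled current frame
    V = torch.cat([F_1, F_i], dim=0) @ W_V              # values: first frame + current frame
    attn = torch.softmax(Q @ K.T / d ** 0.5, dim=-1)    # (L, 2L) attention over both frames
    return attn @ V
```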
The cross-frame attention has the effect that the i-th frame pays attention to not only itself but also the first frame. Surprisingly, by this cross-frame attention design, the generated video frames exhibit remarkable content consistency. Additionally, we employ the same initial noise for SDXL to generate all the frames of the video, which further enhances the temporal consistency." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "In our experiments, we use the Stable Diffusion XL 1.0-base model3 , along with Canny-edge-based ControlNet4 and depth-based ControlNet5 . The α in the rigid object, cloth, and liquid experiments are set to 0.9, 0.75, and 0.4, respectively. We use the DDIM sampler (Song et al., 2021a) with classifier-free guidance (Ho and Salimans, 2022) and 50 sampling steps in our experiments on one NVIDIA A6000 GPU. The version of the Blender is 3.6. We generate 80-frame sequences of edge and depth maps at a resolution of 1920 × 1080 for each prompt. Theoretically, our method can generate motion video of any length and resolution. For conciseness, in this paper, we show the cropped video with 1080 × 1080 resolution. By the way, the videos in this experimental section may look slow, which is because too many videos are displayed at the same time on the same page. To view the motion in these videos, please use Acrobat Reader6 . The original videos can be found in our supplementary material. " }, { "figure_ref": [ "fig_3", "fig_3", "fig_3", "fig_3", "fig_4", "fig_5" ], "heading": "Controlling Physical Properties", "publication_ref": [], "table_ref": [], "text": "We show the generative capabilities of our method in three physical scenarios. Furthermore, we demonstrate how our approach allows for control over specific physical properties solely through user prompts, thereby influencing the overall generation results.\nBasketball Drop and Collision. Figure 4 displays basketball motion videos generated by our method with three prompts. In Figure 4 (left), the basketball maintains a high degree of realism in its texture while spinning, and accurately replicates the bouncing behavior after collision with the floor. Figure 4 (middle) demonstrates that our method can precisely control the number of basketballs and efficiently generate the collisions and bounces that occur when multiple basketballs land. Impressively, as shown in Figure 4 (right), when the user requests that the basketball is thrown towards the camera, GPT-4 calculates the necessary initial velocity of the basketball based on its fall time in the generated script, thereby achieving a visually convincing effect. This demonstrates that our approach can be combined with the physical knowledge that GPT-4 has to control the content of the video generation (see the Appendix Section E).\nCloth Fluttering in Wind. Figures 5 and6 validate our method's capability in generating the Water Pouring into a Mug. Figure 7 shows three videos of water of different viscosities being poured into a mug. When the viscosity is low, the flowing water collides and merges with the water in the mug, creating complex turbulence on the surface. As the viscosity increases, the flow becomes slower and the water begins to stick together." 
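Returning to the implementation details listed at the start of this section, the dual-ControlNet conditioning and the shared initial noise can be approximated with recent versions of the diffusers library as sketched below. This is only an illustration: the checkpoint names are representative public SDXL ControlNets rather than necessarily the exact ones used here, and the cross-frame attention modification would additionally require patching the U-Net's attention processors, which is not shown.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, DDIMScheduler

controlnets = [
    ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16),
]
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)

def render_frame(prompt, canny_image, depth_image, seed=0):
    # Re-seeding with the same value gives every frame the same initial latent noise.
    generator = torch.Generator(device="cuda").manual_seed(seed)
    return pipe(
        prompt,
        image=[canny_image, depth_image],          # edge and depth conditions rendered by Blender
        controlnet_conditioning_scale=[1.0, 1.0],  # the two residual streams are summed
        num_inference_steps=50,
        generator=generator,
    ).images[0]
```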
}, { "figure_ref": [ "fig_0", "fig_4" ], "heading": "Comparisons with Baselines", "publication_ref": [ "b5", "b32", "b26", "b1", "b13", "b18" ], "table_ref": [ "tab_0" ], "text": "We compare our GPT4Motion against four baselines: 1) AnimateDiff (Guo et al., 2023): combines Stable Diffusion with a motion module, augmented by Realistic Vision DreamBooth7 ; 2) ModelScope (Wang et al., 2023), uses spatial-temporal convolution and attention in Stable Diffusion for T2V tasks, utilizing LAION (Schuhmann et al., 2021) and We-bVid (Bain et al., 2021) datasets; 3) Text2Video-Zero (Khachatryan et al., 2023), leverages imageto-image capabilities for generating videos through cross-attention and modified latent code sampling; 4) DirecT2V (Hong et al., 2023), uses a LLM for frame-level descriptions from prompts, with rotational value mapping and dual-softmax for continuity. To maintain the size of the paper, we only compare GPT4Motion with these baselines on three examples. More comparisons are given in the Appendix Section A. A Basketball Free Falls in the Air. The visual comparison of our method with other baselines is presented in Figure 1. Obviously, the baselines' results do not match the user prompt. DirecT2V and Text2Video-Zero face challenges in texture realism and motion consistency, whereas AnimateDiff and ModelScope improve video smoothness but struggle with consistent textures and realistic movements. In contrast to these methods, GPT4Motion can generate smooth texture changes during the falling of the basketball, and bouncing after collision with the floor, which appear more realistic.\nA White Flag Flaps in the Wind. As shown in Figure 8 (1st row), the videos generated by Ani-mateDiff and Text2Video-Zero exhibit artifacts/distortions in the flags, whereas ModelScope and Di-recT2V are unable to smoothly generate the gradual transition of flag fluttering in the wind. However, as shown in the middle of Figure 5, the video generated by GPT4Motion can show the continuous change of wrinkles and ripples on the flag under the effect of gravity and wind.\nWater Flows into a White Mug on a all the baselines' results fail to align with the user prompt. While the videos from AnimateDiff and ModelScope reflect changes in the water flow, they cannot capture the physical effects of water pouring into a mug. The videos generated by Text2Video-Zero and DirecT2V, on the other hand, show a constantly jittering mug. In comparison, as shown in Figure 7 (left), GPT4Motion generates the video that accurately depicts the surge of water as it collides with the mug, offering a more realistic effect.\nQuantitative Evaluation and User Study. We select three metrics for quantitative comparisons: Motion Smoothness (Huang et al., 2023b), which represents the fluidity of video motion and reflects the physical accuracy to some extent; CLIP scores (Liu et al., 2023), indicative of the alignment between the prompt and the video; and Temporal Flickering (Huang et al., 2023b), which illustrates the flickering level of the generated videos.Please refer to the Appendix Section B for details on each metric. The results, as shown in Table 1, demonstrate that our GPT4Motion, leveraging GPT-4 for understanding and invoking Blender to simulate physical scenes, outperforms the other four methods on all the metrics. While videos generated by GPT4Motion still exhibit some flickering, they show a significant improvement in flickering level compared to the other four models. 
However, these metrics might not encompass the entire scope of video generation quality, leading us to undertake a user study for a more comprehensive evaluation.\nWe also conduct a user study with 30 participants, where we show videos generated by different methods under the same prompt and ask the participants to vote for the best video based on three evaluation criteria: physical accuracy, text-video alignment, and the least amount of video flickering. Remarkably, our GPT4Motion's results obtain 100% of the participants' votes." }, { "figure_ref": [ "fig_7" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "We perform an ablation study to evaluate the importance of control conditions, cross-frame attention, and α values in Eq. 2, analyzing the effect of each w/o edge w/o depth First-Frame Attention (FFA). In this setting,\nFFA i i + 1 i + 2 i + 3\nK i,1 is replaced K 1 = W K F 1 , and V i,1 is replaced with V 1 = W V F 1 during\nthe generation of the i-th frame in Eq. 3. This means that the i-th frame only attends to the first frame (without paying attention to itself). As shown in Figure 9 (3rd row), the model FFA results in incomplete flag generation, where part of the flag merges with the sky and white clouds. Conversely, our cross-frame attention allows the i-th frame during its generation to focus not only on the features of the first frame but also on its own characteristics, thereby maintaining temporal consistency and ensuring the completeness of the generated object. \nα = 0.1 α = 0.75 α = 1.0 i i + 1 i + 2 i + 3" }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "This paper proposes GPT4Motion, a new trainingfree framework that effectively combines the advanced planning capability of Large Language Models (LLMs) with the robust simulation tool, Blender, for efficient text-to-video (T2V) synthesis. By generating Blender's scripts via GPT-4, GPT4Motion significantly simplifies the video generation process, making it more accessible and less reliant on extensive manual effort or a deep, specialized technical knowledge in 3D modeling. Experimental results on three basic physical motion scenarios, including rigid object drop and collision, cloth draping and swinging, and liquid flow, demonstrate GPT4Motion's impressive capability to efficiently generate high-quality videos with temporal coherence, surpassing previous T2V methods. GPT4Motion opens up new perspectives for T2V generation. Its integration of LLM-driven scripting and advanced Blender simulation paves a promising path for tackling more complex scenes in future research." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Although GPT4Motion advances the field of T2V synthesis, it has several limitations that set the directions for future research. While GPT4Motion successfully handles basic physical motions related to specific object materials, we have not extended it to more complex motion scenarios. We hypothesize that complex motions could be decomposed into a series of basic motions, requiring more refined instructions for LLMs. Another limitation is that sometimes the generated videos still have flickering in some frames. Despite these limitations, we believe that GPT4Motion provides a promising way for T2V generation." 
}, { "figure_ref": [], "heading": "GPT4Motion", "publication_ref": [], "table_ref": [], "text": "AnimateDiff ModelScope Text2Video-Zero DirecT2V " }, { "figure_ref": [ "fig_9" ], "heading": "A More Comparison with Baselines", "publication_ref": [ "b5", "b32", "b13" ], "table_ref": [], "text": "In the main paper, we have compared GPT4Motion with four baselines (AnimateDiff (Guo et al., 2023), ModelScope (Wang et al., 2023), Text2Video-Zero (Khachatryan et al., 2023), and DirecT2V (Hong et al., 2023)) on three scenarios (rigid object drop and collision, cloth draping and swinging, and liquid flow). Here, we further conduct an experiment on dynamic effects of a T-shirt being blown by the wind under three wind strengths. The results are shown in Figure 11, where the seed is randomly chosen and fixed in all the generations. We can see that these baselines all fail to generate videos that match the user prompts and are unable to control the intensity of physical phenomena solely based on the linguistic descriptions. In contrast, our GPT4Motion not only precisely designs the parameters of Blender encapsulated functions (such as wind strength) through GPT-4, but also leverages Blender's physics engine to simulate the complex flapping and twisting dynamics of the T-shirt in the wind." }, { "figure_ref": [], "heading": "B Quantitative Evaluation Metrics", "publication_ref": [ "b15", "b18", "b21" ], "table_ref": [], "text": "Here, we introduce the metrics employed in the main paper:\n1. Motion Smoothness (Huang et al., 2023b). This metric evaluates the smoothness of motion in generated videos, ensuring it conforms to the physical laws of the real world. The evaluation utilizes motion priors from the video frame interpolation model (Li et al., 2023) to assess the smoothness of generated motions.\n2. Temporal Flickering (Huang et al., 2023b). This metric identifies imperfections in temporal consistency within generated videos, especially in local and high-frequency details.\nThe method involves analyzing static frames and computing the mean absolute difference across frames. These values are then sub-tracted from 255 and normalized between 0 and 1.\n3. CLIP-Score (Liu et al., 2023). This metric is designed to assess Text-Video Consistency. It leverages a pretrained ViT-B/32 CLIP model (Radford et al., 2021) as a feature extractor to calculate the cosine similarity between each video frame and the associated text." }, { "figure_ref": [], "heading": "C Blender Settings", "publication_ref": [], "table_ref": [], "text": "We use Blender to generate two sequences of scene depth maps and edge maps. The edge maps are obtained by Blender's built-in Freestyle8 feature, which is an advanced rendering technique for nonphotorealistic line drawings from 3D scenes. It not only allows for various line styles, such as handdrawn or sketch, but also serves as an independent rendering pass without losing edge information of the scene and additional post-processing. Moreover, we normalize the scene's Z-channel and render it as an independent depth map. Since the rendering of edge maps and depth maps is unrelated to ray tracing, we choose Blender's \"Workbench\" rendering mode. This mode avoids the substantial computational resources required for ray tracing, significantly enhancing rendering efficiency. In our experiments, completing the entire rendering process using Blender scripts takes only 2 to 3 minutes, with most of the time spent on calculating physical effects." 
}, { "figure_ref": [], "heading": "D GPT-4's Generation of Encapsulated Python Functions for Blender", "publication_ref": [], "table_ref": [], "text": "Blender incorporates an independent Python library, enabling the transformation of various manual processes within the software to automate tasks through its Python API. Leveraging GPT-4's capabilities of grasping Blender's Python API, we can rapidly generate encapsulated Python functions for Blender. This involves inputting specific functional goals and the associated Blender operation steps into GPT-4. Consequently, GPT-4 can autonomously create Python code that corresponds to these steps, effectively encapsulating them into Python functions. One example is given below." }, { "figure_ref": [], "heading": "User:", "publication_ref": [], "table_ref": [], "text": "You are an expert in Blender GPT-4: Sure, I'd be happy to help with that. Please go ahead and provide me with the step-by-step instructions for the Blender Python functions you need, and I will write the corresponding code for you." }, { "figure_ref": [], "heading": "User:", "publication_ref": [], "table_ref": [], "text": "Please build a function to remove all objects, cameras and lights from the initial scene." }, { "figure_ref": [], "heading": "GPT-4:", "publication_ref": [], "table_ref": [], "text": "Certainly! To create a Blender Python function that removes all objects, cameras, and lights from the initial scene, you can define a function like this: No inputs or outputs .\n12 13\nTypically called at the beginning of a script when starting a new scene setup . Just make sure to be in the correct context where the scene is accessible." }, { "figure_ref": [], "heading": "User:", "publication_ref": [], "table_ref": [], "text": "Please now create two physics functions, one that adds collision properties to the inputted object, and another that adds rigid body properties to the object. Then, construct a function to create a floor that scales it up 50 times in all directions, with physics set to collision and rigid body (passive, with elasticity set to 1)." }, { "figure_ref": [], "heading": "GPT-4:", "publication_ref": [], "table_ref": [], "text": "To accomplish this task, we'll create three functions in Blender's Python API: The object to which the collision modifier will be added . The object to which the rigid body physics will be applied ." }, { "figure_ref": [], "heading": "26", "publication_ref": [], "table_ref": [], "text": "-mass ( float ): The mass of the object , default is 1." }, { "figure_ref": [], "heading": "27", "publication_ref": [], "table_ref": [], "text": "-elasticity ( float ): The restitution ( bounciness ) of the object , default is 0.5." }, { "figure_ref": [], "heading": "-rigid_body_type ( str ):", "publication_ref": [], "table_ref": [], "text": "The type of rigid body , can be ' ACTIVE ' or ' PASSIVE ', default is ' ACTIVE '. The created floor is scaled to be large enough to act as a ground plane for most scenes . These functions together allow you to dynamically add physics properties to objects and create a large, physical floor in your Blender scene. Remember to run these functions in a Blender Python scripting context where the bpy module is accessible.\nAfter generating the function, we enhance its usability for GPT-4 by incorporating instructions (such as the strength interval of the wind) in the function's docstring. This ensures that GPT-4's usage aligns more closely with our expectations." 
}, { "figure_ref": [], "heading": "E Combination of Blender and GPT-4's Physics Knowledge", "publication_ref": [], "table_ref": [], "text": "We discover that GPT4Motion can not only use GPT-4's semantic understanding and coding capabilities to generate Blender Python scripts, but it can also leverage GPT-4's physical knowledge to control object motion. For example, when prompted with \"a basketball thrown towards the camera.\", we find that GPT-4 can calculate the time it takes for the basketball to fall, based on the height difference between the initial position of the basketball and the camera. This calculation helps to determine the initial velocity required for the basketball to fly to the camera. The speed calculation code generated by GPT-4 is as follows: In the above script, GPT-4 first calculates the height difference between the initial position of the basketball and the camera, which determines the time required for the basketball to fall to the same height as the camera. Subsequently, GPT-4 calculates the distance between the basketball and the camera along the Y-axis to determine the required initial velocity of the basketball. This process effectively integrates basic principles of physics, such as the equations of motion, to solve a practical problem in a simulated environment like Blender." } ]
Recent advances in text-to-video generation have harnessed the power of diffusion models to create visually compelling content conditioned on text prompts. However, they usually encounter high computational costs and struggle to produce videos with coherent physical motions. To tackle these issues, we propose GPT4Motion, a training-free framework that leverages the planning capability of large language models like GPT, the physical simulation strength of Blender, and the image generation ability of text-to-image diffusion models to enhance video synthesis. Specifically, GPT4Motion employs GPT-4 to generate a Blender script based on a user textual prompt, which commands Blender's built-in physics engine to craft fundamental scene components containing coherent physical motions across frames. Then these components are inputted into Stable Diffusion to generate a video aligned with the textual prompt. Experimental results on three basic physical motion scenarios demonstrate that GPT4Motion can generate high-quality videos efficiently in maintaining motion coherency and entity consistency. GPT4Motion offers new insights in textto-video research, enhancing its quality and broadening its horizon for further explorations.
GPT4Motion: Scripting Physical Motions in Text-to-Video Generation via Blender-Oriented GPT Planning
[ { "figure_caption": "Figure 1 :1Figure 1: Comparison of the video results generated by different text-to-video models with the prompt \"A basketball free falls in the air\". Best viewed with Acrobat Reader for animation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: The architecture of our GPT4Motion. First, the user prompt is inserted into our designed prompt template. Then, the Python script generated by GPT-4 drives the Blender physics engine to simulate the corresponding motion, producing sequences of edge maps and depth maps. Finally, two ControlNets are employed to constrain the physical motion of video frames generated by Stable Diffusion, where a temporal consistency constraint is designed to enforce the coherence among frames.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Our prompt template designed for GPT-4. It contains information about functions, external assets, and instruction. The user prompt is inserted into the placeholder \"{PROMPT}\".", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: GPT4Motion's results on basketball drop and collision. Best viewed with Acrobat Reader for animation.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: GPT4Motion's results on a fluttering flag.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: GPT4Motion's results on a fluttering T-shirt.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :Figure 8 :78Figure 7: GPT4Motion's results on the water pouring. Best viewed with Acrobat Reader for animation.", "figure_data": "", "figure_id": "fig_6", "figure_label": "78", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Ablation experiments on various control conditions and cross-frame attention. Four consecutive frames are shown.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Ablation experiments on different α values. Four consecutive frames are shown.", "figure_data": "", "figure_id": "fig_8", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Comparison of the video results generated by different text-to-video models under different physical conditions. Best viewed with Acrobat Reader for animation.", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "each step, designing each function's name, and explaining its functionality and the meaning of each parameter in the docstring.", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "15 bpy . ops . object . select_all ( action = ' SELECT ') bpy . ops . object . 
delete () You can call this function whenever you need to clear the scene of all types of objects.", "figure_data": "", "figure_id": "fig_12", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Quantitative comparison across various methods. The best performances are denoted in bold.", "figure_data": "Method | Motion↑ | CLIP↑ | Flickering↑\nGPT4Motion | 0.993 ± 0.003 | 0.260 ± 0.022 | 0.990 ± 0.006\nAnimateDiff | 0.991 ± 0.002 | 0.257 ± 0.020 | 0.988 ± 0.002\nModelScope | 0.937 ± 0.051 | 0.252 ± 0.036 | 0.924 ± 0.059\nText2Video-Zero | 0.946 ± 0.015 | 0.252 ± 0.024 | 0.928 ± 0.009\nDirecT2V | 0.879 ± 0.067 | 0.253 ± 0.033 | 0.870 ± 0.071", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Jiaxi Lv; Yi Huang; Mingfu Yan; Jiancheng Huang; Jianzhuang Liu; Yifan Liu; Yafei Wen; Xiaoxin Chen; Shifeng Chen
[ { "authors": "Rohan Anil; Andrew M Dai; Orhan Firat; Melvin Johnson; Dmitry Lepikhin; Alexandre Passos; Siamak Shakeri; Emanuel Taropa; Paige Bailey; Zhifeng Chen", "journal": "", "ref_id": "b0", "title": "Palm 2 technical report", "year": "2023" }, { "authors": "Max Bain; Arsha Nagrani; Gül Varol; Andrew Zisserman", "journal": "", "ref_id": "b1", "title": "Frozen in time: A joint video and image encoder for end-to-end retrieval", "year": "2021" }, { "authors": "Andreas Blattmann; Robin Rombach; Huan Ling; Tim Dockhorn; Seung Wook Kim; Sanja Fidler; Karsten Kreis", "journal": "", "ref_id": "b2", "title": "Align your latents: Highresolution video synthesis with latent diffusion models", "year": "2023" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "", "ref_id": "b3", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Herbert Goldstein; Charles Poole; John Safko", "journal": "", "ref_id": "b4", "title": "Classical mechanics", "year": "2002" }, { "authors": "Yuwei Guo; Ceyuan Yang; Anyi Rao; Yaohui Wang; Yu Qiao; Dahua Lin; Bo Dai", "journal": "", "ref_id": "b5", "title": "Animatediff: Animate your personalized text-to-image diffusion models without specific tuning", "year": "2023" }, { "authors": "Jonathan Ho; William Chan; Chitwan Saharia; Jay Whang; Ruiqi Gao; Alexey Gritsenko; P Diederik; Ben Kingma; Mohammad Poole; David J Norouzi; Fleet", "journal": "", "ref_id": "b6", "title": "a. Imagen video: High definition video generation with diffusion models", "year": "2022" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "", "ref_id": "b7", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Jonathan Ho; Tim Salimans", "journal": "", "ref_id": "b8", "title": "Classifierfree diffusion guidance", "year": "2022" }, { "authors": "Jonathan Ho; Tim Salimans; Alexey Gritsenko; William Chan; Mohammad Norouzi; David J Fleet", "journal": "", "ref_id": "b9", "title": "Video diffusion models", "year": "2022" }, { "authors": "Hanzhuo Huang; Yufan Feng; Cheng Shi; Lan Xu; Jingyi Yu; Sibei Yang", "journal": "", "ref_id": "b10", "title": "Free-bloom: Zeroshot text-to-video generator with llm director and ldm animator", "year": "2023" }, { "authors": "Ziqi Huang; Yinan He; Jiashuo Yu; Fan Zhang; Chenyang Si; Yuming Jiang; Yuanhan Zhang; Tianxing Wu; Qingyang Jin; Nattapol Chanpaisit; Yaohui Wang; Xinyuan Chen; Limin Wang; Dahua Lin; Yu Qiao; Ziwei Liu", "journal": "", "ref_id": "b11", "title": "VBench: Comprehensive benchmark suite for video generative models", "year": "2023" }, { "authors": "Betker James; Goh Gabriel; Jing Li; Brooks Tim; Wang Jianfeng; Li Linjie; Ouyang Long; Et ", "journal": "", "ref_id": "b12", "title": "Improving image generation with better captions", "year": "2023" }, { "authors": "Levon Khachatryan; Andranik Movsisyan; Vahram Tadevosyan; Roberto Henschel; Zhangyang Wang; Shant Navasardyan; Humphrey Shi", "journal": "", "ref_id": "b13", "title": "Text2video-zero: Text-to-image diffusion models are zero-shot video generators", "year": "2023" }, { "authors": "Ira M Pijush K Kundu; David R Cohen; Dowling", "journal": "Academic press", "ref_id": "b14", "title": "Fluid mechanics", "year": "2015" }, { "authors": "Zhen Li; Zuo-Liang Zhu; Ling-Hao Han; Qibin Hou; Chun-Le Guo; Ming-Ming Cheng", "journal": "", "ref_id": "b15", "title": "Amt: All-pairs multi-field transforms for efficient frame interpolation", "year": "2023" }, { "authors": "Long Lian; Baifeng Shi; Adam Yala; Trevor 
Darrell; Boyi Li", "journal": "", "ref_id": "b16", "title": "Llm-grounded video diffusion models", "year": "2023" }, { "authors": "Han Lin; Abhay Zala; Jaemin Cho; Mohit Bansal", "journal": "", "ref_id": "b17", "title": "Videodirectorgpt: Consistent multi-scene video generation via llm-guided planning", "year": "2023" }, { "authors": "Yaofang Liu; Xiaodong Cun; Xuebo Liu; Xintao Wang; Yong Zhang; Haoxin Chen; Yang Liu; Tieyong Zeng; Raymond Chan; Ying Shan", "journal": "OpenAI", "ref_id": "b18", "title": "Evalcrafter: Benchmarking and evaluating large video generation models", "year": "2023" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "", "ref_id": "b19", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Dustin Podell; Zion English; Kyle Lacey; Andreas Blattmann; Tim Dockhorn; Jonas Müller; Joe Penna; Robin Rombach", "journal": "", "ref_id": "b20", "title": "Sdxl: Improving latent diffusion models for high-resolution image synthesis", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b21", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b22", "title": "Hierarchical textconditional image generation with clip latents", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b23", "title": "Highresolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily Denton; Seyed Kamyar; Seyed Ghasemipour; Burcu Karagol Ayan; Sara Mahdavi; Rapha Gontijo Lopes", "journal": "", "ref_id": "b24", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Le Teven; Angela Scao; Christopher Fan; Ellie Akiki; Suzana Pavlick; Daniel Ilić; Roman Hesslow; Alexandra Castagné; François Sasha Luccioni; Matthias Yvon; Gallé", "journal": "", "ref_id": "b25", "title": "Bloom: A 176bparameter open-access multilingual language model", "year": "2022" }, { "authors": "Christoph Schuhmann; Richard Vencu; Romain Beaumont; Robert Kaczmarczyk; Clayton Mullis; Aarush Katta; Theo Coombes; Jenia Jitsev; Aran Komatsuzaki", "journal": "", "ref_id": "b26", "title": "Laion-400m: Open dataset of clipfiltered 400 million image-text pairs", "year": "2021" }, { "authors": "Uriel Singer; Adam Polyak; Thomas Hayes; Xi Yin; Jie An; Songyang Zhang; Qiyuan Hu; Harry Yang; Oron Ashual; Oran Gafni", "journal": "", "ref_id": "b27", "title": "Make-a-video: Text-to-video generation without text-video data", "year": "2023" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b28", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "Yang Song; Jascha Sohl-Dickstein; P Diederik; Abhishek Kingma; Stefano Kumar; Ben Ermon; Poole", "journal": "ICLR", "ref_id": "b29", "title": "Score-based generative modeling through stochastic differential equations", "year": "2021" }, { "authors": "Chunyi Sun; Junlin Han; Weijian Deng; Xinlong Wang; Zishan Qin; 
Stephen Gould", "journal": "", "ref_id": "b30", "title": "3d-gpt: Procedural 3d modeling with large language models", "year": "2023" }, { "authors": "Hui Tang; Kui Jia", "journal": "", "ref_id": "b31", "title": "A new benchmark: On the utility of synthetic data with blender for bare supervised learning and downstream domain adaptation", "year": "2023" }, { "authors": "Jiuniu Wang; Hangjie Yuan; Dayou Chen; Yingya Zhang; Xiang Wang; Shiwei Zhang", "journal": "", "ref_id": "b32", "title": "Modelscope text-to-video technical report", "year": "2023" }, { "authors": "Jason Wei; Maarten Bosma; Vincent Zhao; Kelvin Guu; Adams Wei Yu; Brian Lester; Nan Du; Andrew M Dai; Quoc V Le", "journal": "", "ref_id": "b33", "title": "Finetuned language models are zero-shot learners", "year": "2021" }, { "authors": "Lvmin Zhang; Maneesh Agrawala", "journal": "", "ref_id": "b34", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Daquan Zhou; Weimin Wang; Hanshu Yan; Weiwei Lv; Yizhe Zhu; Jiashi Feng", "journal": "", "ref_id": "b35", "title": "Magicvideo: Efficient video generation with latent diffusion models", "year": "2022" } ]
[ { "formula_coordinates": [ 5, 70.87, 614.45, 218.27, 26.13 ], "formula_id": "formula_0", "formula_text": "Q i = W Q F i , K i = W K F i , and V i = W V F i ," }, { "formula_coordinates": [ 5, 81.42, 662.35, 204.21, 27.87 ], "formula_id": "formula_1", "formula_text": "SA(Q i , K i , V i ) = Softmax Q i K T i √ d V i , (1" }, { "formula_coordinates": [ 5, 285.63, 672.03, 4.24, 9.46 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 5, 81.78, 762.89, 208.08, 13.13 ], "formula_id": "formula_3", "formula_text": "K i,1 = W K [F 1 , αF i ], V i,1 = W V [F 1 , F i ],(2)" }, { "formula_coordinates": [ 5, 306.14, 101.22, 218.88, 29.35 ], "formula_id": "formula_4", "formula_text": "CF A(Q i , K i,1 , V i,1 ) = Softmax Q i K T i,1 √ d V i,1 ," }, { "formula_coordinates": [ 8, 77.01, 195.16, 195.84, 53.58 ], "formula_id": "formula_5", "formula_text": "FFA i i + 1 i + 2 i + 3" }, { "formula_coordinates": [ 8, 70.87, 600.84, 218.27, 26.13 ], "formula_id": "formula_6", "formula_text": "K i,1 is replaced K 1 = W K F 1 , and V i,1 is replaced with V 1 = W V F 1 during" }, { "formula_coordinates": [ 8, 305.72, 73.32, 202.4, 175.77 ], "formula_id": "formula_7", "formula_text": "α = 0.1 α = 0.75 α = 1.0 i i + 1 i + 2 i + 3" } ]
2023-11-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23" ], "table_ref": [], "text": "The problem of object recognition in computer vision is the problem of matching object features with image features. Nevertheless, since an object has many features associated with it and since an image contains features that do not necessarily belong to that object, the matching process is a complex one because of the large size of the set of image-feature-to-objectfeature assignments. Therefore, in the past, the rationale has been to use constraints -such as the rigidity constraint -in order to maintain the number of assignments as reduced as possible: Therefore, any search process, including exhaustive search, is likely to be fast enough because it has to visit a few thousands of nodes rather than millions.\nVarious implementations of matching-based recognition of rigid objects take the form of either a search-graph or a searchtree. Examples of search graphs are maximal-clique finding algorithms introduced in computer vision by Ambler & al. [1], popularized by Ballard & Brown [2], and applied to 2-D object recognition by Bolles & Cain [3]. Subsequently, the advantage of using trees rather than graphs was stressed by a large number of authors such as Bolles & Horaud [4], Faugeras & Hebert [5], Ayache & Faugeras [6], Grimson & Lozano-Perez [7], Goad [8], Flynn & Jain [9], and many others.\nHowever, most of the object recognition methods just mentioned restrict the recognition to object whose exact geometry is known in advance. A more general approach consists of representing both the image and the object by graphs and of casting the recognition problem into the graph matching problem. Graphs are a convenient way of representing features and relationships between these features. Various graph representations have been used by Kim & Kak [10], Flynn & Jain [11], Dickinson & al. [12], and Bergevin & Levine [13]. However, graph matching is a difficult problem in itself. Whenever the two graphs to be matched have the same number of nodes, graph matching is equivalent to searching for graph isomorphism and polynomial time solutions exist in this case, [14], [15], [16]. It is however rarely the case that the image graph have the same size as the object graph: The problem is therefore equivalent to maximum subgraph matching -find the largest isomorphic subgraphs of the two graphs. So far, solutions proposed for solving the maximum subgraph matching problem involve some form of combinatorial optimization [17], [18].\nIf many objects rather than a single one (as it has often been the case) are present in a database of objects to be recognized, then the matching-based recognition becomes intractable because the complexity grows substantially with the number of features. An indexing process is crucial whenever the recognition process has to select among many possible objects. Recognition by indexing is therefore the process that attempts to rapidly extract from a large list of objects, those few objects that fit a group of image features while avoiding to establish image-feature-to-object-feature assignments.\nNevatia & Binford [19] were among the first to describe indexing as a part of an object recognition system. Ettinger [20] described a hierarchically organized object library well suited for indexing. 
Each object is decomposed into a list of subparts. The rationale is that many objects share a common set of sub-parts and what distinguishes one object from another is sub-part relationships -the overall list of sub-parts grows sublinearly with the number of objects in the library. This idea is applied to flat objects that are described by their outlines.\nThe idea of using hashing in conjunction with object recognition was introduced by Kalvin & al. [21]. Outlines of flat objects are described in terms of footprints. The best way to think of a footprint is of an intrinsic curve such as curvature as a function of curvilinear abscissa. The footprint of an object is further decomposed into intervals. Each such interval is described by a set of numbers (the sine and cosine Fourier coefficients, for example) and these numbers are hashed in hashtables. The indexing itself takes the form of a vote: Each footprint interval detected in the image votes for those objects in the database containing this footprint interval. Finally the object that received the highest vote score is the recognized object. A variation of this method using local frames and geometric hashing was proposed by Lamdan & Wolfson [22] for solving the matching problem, not the indexing problem.\nFollowing the same idea of hashing, Stein & Medioni [23] were able to recognize 3-D objects from 3-D data using super-segments and surface-patches as features. Their structural hashing technique retrieves object hypotheses from the database using hash-table indexing. A similar approach was proposed by Breuel [24].\nThe approach advocated in this paper capitalizes onto the representation of 3-D objects in terms of 2-D characteristic views and of characterizing such a view with a number of weighted graphs. An identical graph representation is extracted from the image of an unknown 3-D scene. A polynomial characterization of graphs allows us to organize the \"model graphs\" into hash tables. Therefore, recognition consists of computing similar polynomial characterizations for the \"image graphs\" and of indexing in the pre-stored hash tables. Finally, a voting process ranks a number of candidate characteristic views as potential recognized objects." }, { "figure_ref": [], "heading": "Paper organization", "publication_ref": [], "table_ref": [], "text": "The remainder of this paper is organized as follows. Section 2 introduces the polynomial characterization of a binary graph that will be used, namely the second immanantal polynomial of the Laplacian matrix of a graph. Then we briefly describe an extension of this representation to weighted graphs. Section 3 describes the graph indexing method that uses this polynomial characterization of graphs. It describes as well an object representation in terms of weighted graphs, the organization of the database of objects to be recognized, and the indexing method itself which is based on hashing. Section 4 describes a representation of 3-D polyhedral objects in terms of 2-D views and a representation of these views in terms of weighted graphs. Section 5 describes how to extract these weighted graphs from images and how to remove irrelevant image data. Section 6 describes a recognition experiment carried out with a set of 9 images of the same scene in the presence of a database of 6 objects. Finally, section 7 draws some conclusions and gives some directions for future work." 
}, { "figure_ref": [ "fig_0" ], "heading": "Polynomial characterization of a graph", "publication_ref": [ "b24", "b25", "b25", "b26", "b27", "b27", "b28", "b27", "b26", "b24" ], "table_ref": [], "text": "The method that we propose in this paper in order to achieve indexing uses graphs to represent both images and objects. Let us suppose that one is able to extract a number of graphs from an image and let G 1 be such an image graph that has the same number of nodes as a graph G 2 extracted from an object. Such a graph (an image or an object graph) is defined by a set of vertices V and a set of edges E. The two graphs G 1 = (V 1 , E 1 ) and G 2 = (V 2 , E 2 ) are said to be isomorphic if there is a bijection\nϕ : V 1 -→ V 2 such that: (v 1 , v 2 ) ∈ E 1 if and only if (ϕ(v 1 ), ϕ(v 2 )) ∈ E 2\nIf A 1 and A 2 are the adjacency matrices of the two graphs, one can easily see that G 1 is isomorphic to G 2 if and only if there exists a permutation matrix P satisfying:\nA 2 = PA 1 P -1(1)\nHence, there are two ways to decide whether two graphs are isomorphic:\n1. Find the permutation matrix P that satisfies the equation above. 2. Find an algebraic characterization of the adjacency matrix of a graph that is invariant under a similarity transformation of the adjacency matrix. Such a characterization has been proved to be useful for graph classification.\nOne obvious characterization that is invariant under similarity is the characteristic polynomial associated with the adjacency matrix [25], [26]. Indeed we have:\ndet(xI -PAP -1 ) = det(PxIP -1 -PAP -1 ) = det(P(xI -A)P -1 ) = det(xI -A)\nTherefore, the similarity of adjacency matrices is a necessary condition for isomorphism. Unfortunately it is far from being a sufficient condition. However, an important idea stems out from this example of graph characterization -one may seek to characterize a graph, up to an isomorphism, by the coefficients of a polynomial associated with that graph. More formally, we seek a polynomial associated with a graph, say p(G) such that:\n         if G 1 = G 2 then p(G 1 ) = p(G 2 ) and if p(G 1 ) = p(G 2 ) then G 1 = G 2(2)\nTwo graphs are said to be equal if they have the same number of nodes and if they are isomorphic. Two polynomials are equal if they have the same degree and if their coefficients are equal. If a polynomial satisfying the above condition exists, it follows that the problem of comparing two graphs of the same size is equivalent to the problem of comparing the coefficients of their associated polynomials. Notice however that graph characterization with a polynomial allows one to state whether two graphs are isomorphic or not but it doesn't provide the node-to-node isomorphic mapping between the graphs. Graph characterization is therefore exactly what one needs for model indexing, i.e., rapidly state whether some sensed data equal some object data. The search of an isomorphic node-to-node mapping is the task of matching and not the task of indexing.\nPolynomials that characterize a graph unambiguously up to an isomorphism have been thoroughly studied in the linear algebra literature [26], [27]. Among these polynomials, the second immanantal polynomial -or the d 2 -polynomial -is a good candidate [28].\nOne may associate the second immanantal polynomial with the adjacency matrix of a graph. There are however some reasons to prefer the Laplacian matrix (defined below) to the computationally simpler adjacency matrix. 
The Laplacian matrix is positive semidefinite, symmetric, and of rank n-1 (if G is a connected graph). Generally speaking, second immanantal polynomials match up well with positive semidefinite matrices. The greater complexity of Laplacian matrices, when compared with adjacency matrices, suggests there may be fewer algebraic accidents [28]. If the time to compute the determinant of an $n \times n$ matrix is $n^3$, the time to compute the coefficients of the second immanantal polynomial is $n^4$.\nThe elements of the Laplacian matrix of a binary graph, L(G), are defined as follows:\n$$l_{ij} = \begin{cases} d_i & \text{if } i = j \\ -1 & \text{if there is an edge between nodes } i \text{ and } j \\ 0 & \text{otherwise} \end{cases} \quad (3)$$\nwhere $d_i$ is the number of graph edges meeting at node i and is called the degree of the node i. The interested reader may find in [29] a complete description of the properties of the Laplacian matrix of a binary graph.\nThe second immanantal polynomial associated with an $n \times n$ Laplacian matrix of a graph, L(G), can be written in generic form as:\n$$d_2(xI - L(G)) = c_0(L(G))x^n - c_1(L(G))x^{n-1} + \ldots + (-1)^n c_n(L(G)) \quad (4)$$\nThe coefficients $c_0, \ldots, c_n$ of this polynomial are integers and they can be computed using the following formulae, which are detailed by Merris [28] (n is the number of nodes of the graph and m is the number of edges of the graph):\n$$\begin{cases} c_0(L(G)) = n - 1 \\ c_1(L(G)) = 2m(n-1) \\ \quad \vdots \\ c_k(L(G)) = \sum_{X \in Q_{k,n}} \left( \sum_{i=1}^{n} l_{ii} \det\left(L(G)\{X\}(i)\right) - \det\left(L(G)\{X\}\right) \right) \end{cases} \quad (5)$$\nIn these formulae $l_{ii}$ denotes a diagonal term of L(G) and $Q_{k,n}$ denotes the set of all the $C_n^k$ strictly increasing sequences of size k ($2 \le k \le n$) obtained from the set $\{1, 2, \ldots, n\}$. For any $n \times n$ matrix M and for $X \in Q_{k,n}$ let $M[X]$ be the $k \times k$ principal sub-matrix of M corresponding to X. $M\{X\}$ is the $n \times n$ matrix:\n$$M\{X\} = \begin{pmatrix} M[X] & 0_k \\ 0_k & I_{n-k} \end{pmatrix} \quad (6)$$\nwhere $I_{n-k}$ is the identity matrix of size n-k and $0_k$ is the null matrix of size k. $M\{X\}(i)$ is the matrix obtained from $M\{X\}$ by removing the i-th row and the i-th column.\nAn important property of the second immanantal polynomial associated with a graph is that it is preserved under similarity permutation [27]:\n$$d_2(xI - L(G)) = d_2(xI - P L(G) P^{-1})$$\nTherefore, a necessary condition for two graphs to be isomorphic is that they have the same second immanantal polynomial. However, it is not a sufficient condition. In practice, however, very few examples of non-isomorphic graphs that have the same second immanantal polynomial have been found [25]. In order to illustrate the above formalism let us consider two simple binary graphs and let us outline the computation of their associated second immanantal polynomials. An example of two 4-node binary graphs is shown in Figure 1. The Laplacian matrices are given by equation (3) and they are easy to compute:" }, { "figure_ref": [], "heading": "An example", "publication_ref": [], "table_ref": [], "text": "$$L(G)_1 = \begin{pmatrix} 3 & -1 & -1 & -1 \\ -1 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ -1 & 0 & 0 & 1 \end{pmatrix} \qquad L(G)_2 = \begin{pmatrix} 2 & 0 & -1 & -1 \\ 0 & 1 & -1 & 0 \\ -1 & -1 & 3 & -1 \\ -1 & 0 & -1 & 2 \end{pmatrix}$$\nFor n = 4 the sets of all the $C_4^k$ strictly increasing sequences of size k ($2 \le k \le 4$) are:\n$$Q_{2,4} = \{(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)\}$$ $$Q_{3,4} = \{(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)\}$$ $$Q_{4,4} = \{(1, 2, 3, 4)\}$$\nIt is straightforward to compute the matrices $L(G)\{X\}$ and the matrices $L(G)\{X\}(i)$, for $X \in Q_{k,4}$. 
For example, $L(G)_1\{(1, 2)\}$ is a $4 \times 4$ matrix obtained by completing the first two rows and first two columns of $L(G)_1$ with $I_2$ and $0_2$ as follows:\n$$L(G)_1\{(1, 2)\} = \begin{pmatrix} 3 & -1 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$\n$L(G)_1\{(1, 2)\}(2)$ is a $3 \times 3$ matrix obtained from $L(G)_1\{(1, 2)\}$ by removing its 2nd row and 2nd column:\n$$L(G)_1\{(1, 2)\}(2) = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$\nAfter some straightforward computation we obtain the following coefficients for the associated polynomials:\n$$d_2(xI - L(G)_1) = 3x^4 - 18x^3 + 33x^2 - 24x + 6$$ $$d_2(xI - L(G)_2) = 3x^4 - 24x^3 + 105x^2 - 68x + 24$$\nOne may also compute the characteristic polynomials associated with the Laplacian matrices, i.e.:\n$$\det(xI - L(G)_1) = x^4 - 6x^3 + 9x^2 - 4x$$ $$\det(xI - L(G)_2) = x^4 - 8x^3 + 19x^2 - 12x$$\nFrom this example it is obvious that the second immanantal polynomial is a richer graph description than just the characteristic polynomial." }, { "figure_ref": [], "heading": "Weighted graphs", "publication_ref": [], "table_ref": [], "text": "In general, binary graphs are not sufficient for describing the structure of either images or objects. Weighted graphs are graphs which have a weight $w_{ij}$ associated with the edge linking nodes i and j. The definition of the Laplacian matrix may easily be extended to weighted graphs, as follows:\n$$l^w_{ij} = \begin{cases} D_i & \text{if } i = j \\ -w_{ij} & \text{if there is a weighted edge between nodes } i \text{ and } j \\ 0 & \text{if there is no edge between nodes } i \text{ and } j \end{cases} \quad (7)$$\nwith $D_i$ being equal to the sum of the weights of the edges meeting at the node i:\n$$D_i = \sum_{j=1}^{n} w_{ij} \quad (8)$$\nThis matrix has the same properties as the Laplacian matrix associated with a binary graph - it is symmetric, positive semidefinite, and of rank n-1, which makes it suitable for computing the $d_2$-polynomial." }, { "figure_ref": [], "heading": "Graph indexing", "publication_ref": [ "b29" ], "table_ref": [], "text": "The graph characterization in terms of the coefficients of the second immanantal polynomial allows one to assert whether two graphs with the same number of nodes (n) are \"equal\". The difference between two graphs $G_1$ and $G_2$ is given by the formula:\n$$\mathrm{Diff}(G_1, G_2) = \sum_{k=1}^{n} (c_k^1 - c_k^2)^2 \quad (9)$$\nwhere $c_k^i$ is the k-th coefficient of the second immanantal polynomial associated with graph i. Since we assume that the above equation is valid only for graphs with the same number of nodes, $c_0$ has been skipped from the summation.\nIn the case of indexing we are faced with the problem of comparing an image graph with many database graphs and of deciding which are the few graphs in the database that are equal to the image graph. In that case the graph difference mentioned above is not efficient. One way to implement indexing efficiently is to use hashing [30]. Hashing can be briefly described as follows. Each database object has a numerical key associated with it. Then a hash function maps this key onto the address of an array of a manageable size. The address thus computed for an object is also called the hash-code of that object. In practice, hashing is composed of an off-line step (database construction) and a runtime step (indexing):\n• Database construction consists of computing a hash-code for each object to be stored in the database. Several objects may well have the same address (hash-code). Therefore a list of objects will be associated with each address. 
The database takes therefore the form of an array (or a hashtable), a list of objects being stored at each array-element address.\n• Indexing consists of computing the address (hash-code) of an unknown object in order to determine whether this object is in the hash-table or not.\nSince a graph may be described by the integer coefficients of a polynomial, these coefficients may well be viewed as the hash-codes of the graph. Hence, a graph with n nodes can be mapped onto n hash tables. For reasons that will be made clear below, the size of the graphs we deal with varies between 5 nodes and 10 nodes. Within this size range the second immanantal polynomial uniquely characterizes binary and weighted graphs. It follows that graph indexing will become an efficient technique because the hashing will have very few collisions associated with it.\nPolynomial characterization of graphs combined with graph indexing will eventually allow us to perform object recognition by indexing. However, two important issues need be raised before we describe a practical object recognition system: object representation and database organization." }, { "figure_ref": [], "heading": "Object representation", "publication_ref": [ "b10" ], "table_ref": [], "text": "Object representation has been thoroughly studied in Computer Vision and a recent paper by Flynn & Jain [11] provides a good state of the art. In general there are two possible representation classes: Object frame centered and viewer frame centered representations. Within our approach we use a representation that is not tight to a specific coordinate frame. An object is mainly described by a list of characteristic views. In the representation that we use the definition of a characteristic view (CV) should be understood in a broad sense. It is a network of object features and feature relationships that are simultaneously visible from some viewpoint. Such a representation is by no means limited to the aspect graph representation of an object. The features in the network may well be either 2-D or 3-D, object-centered or viewer-centered. The important characteristic here is not as much the dimensionality of the features or the coordinate frame to which they relate, but instead, the intrinsic properties of the feature network. As we already mentioned, such a feature network can be conveniently represented by a weighted graph.\nHowever, the data associated with some view of an object rarely encodes a whole characteristic view associated with that object. The data are corrupted by noise, occlusions, self occlusions, and accidental alignments. Therefore it will not be very useful to directly store in the database the graph associated with a characteristic view. Instead, each characteristic view is further decomposed in a number of, possibly overlapping, \"smaller\" views or subviews, where each such subview is in fact associated with a subgraph of the graph describing the characteristic view. There are several reasons in support of the decomposition of a characteristic view into a number of subviews:\n• Following the results of section 2, one can compare only graphs with the same number of nodes. 
Since a graph extracted from the data has rarely the same number of nodes as the graphs associated with the characteristic views of the objects to be recognized, one may still attempt to compare an unknown-object-view with a characteristic-view by comparing subgraphs associated with subviews of theses views.\n• The cost of the computation of the coefficients of the second immanantal polynomial is proportional to n 4 , where n is the number of nodes of the graph. Since \"raw\" characteristic views may have a large number of features associated with them, it may not be efficient to compute polynomial characterizations for very large graphs.\n• Consider a data graph that is composed of a large network of features. It is very unlikely that such a large data graph belongs to a unique object. Recognition based on such large graphs will fail because these graphs are not present in the database.\nIt is therefore clear that object recognition by indexing must adopt a representation such that each characteristic view is decomposed in a number of subviews or subgraphs. A compromise must be made concerning the size of these subgraphs: Very small subgraphs are too ambiguous because they do not capture information that is object specific while large subgraphs are difficult to extract from the data. At the limit, a single-node graph belongs to all the objects in the database and indexing is useless in this case. At the other extreme, a very large graph encapsulates more than one object and the indexing process will fail to find this graph in the database." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Database organization and the indexing mechanism", "publication_ref": [], "table_ref": [], "text": "Following the above discussion, the organization of the database follows a three-layer structure. A first layer contains a list of graphs of various sizes that are organized in hash tables. The second layer contains characteristic views. A third layer contains descriptions of the object themselves. This structure is shown on Figure 2.\nIt is clear that a graph belongs to several characteristic views and hence, it may belong to several objects. Therefore, an image graph that matches a graph in the database provides handles to more than one object. The interesting feature of this threelayer organization is that the graph list grows sub-linearly with the number of characteristic views.\nThe indexing mechanism proceeds as follows. Let's suppose that an unknown object view has to be recognized. First, this unknown view is decomposed into subviews and a graph is associated with each subview. Polynomial characterizations are computed for these unknown graphs. Based on these characterizations and on the hashing technique just described, each unknown graph is assigned a unique graph in the database. As a consequence, a list of characteristic views may now be associated with each unknown graph in the image. In other terms, each unknown graph votes for a number of characteristic views. This process is repeated for each unknown graph belonging to the unknown view. The characteristic view that received the largest number of votes is the model that best matches the unknown view.\nConsider, for example the database depicted on Figure 2 and suppose that two unknown graphs are assigned graph1 and graph2 respectively. It follows that two characteristic views (CV12 and CV21) received 2 votes while one characteristic view (CV11) received only one vote." 
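As a concrete sketch of such a decomposition, the function below grows, around every node of a characteristic-view graph, the induced subgraph of the nodes at most p edges away, which is the p-neighbourhood scheme the paper adopts later in Section 4. The adjacency-list representation and the pure-Python breadth-first search are illustrative choices, not the authors' implementation.

```python
from collections import deque

def neighborhood_subgraphs(adjacency, p=2):
    """Decompose a graph into overlapping subgraphs, one per node.

    adjacency: dict mapping each node to the set of its neighbours.
    p: maximum number of edges between the seed node and any subgraph node.
    Returns a list of (nodes, edges) pairs, one induced subgraph per seed node.
    """
    subgraphs = []
    for seed in adjacency:
        # Breadth-first search up to depth p around the seed node.
        reached = {seed}
        frontier = deque([(seed, 0)])
        while frontier:
            node, depth = frontier.popleft()
            if depth == p:
                continue
            for neighbour in adjacency[node]:
                if neighbour not in reached:
                    reached.add(neighbour)
                    frontier.append((neighbour, depth + 1))
        # Keep only the edges whose two endpoints were reached (induced subgraph).
        edges = {frozenset((u, v)) for u in reached for v in adjacency[u] if v in reached}
        subgraphs.append((reached, edges))
    return subgraphs

# Example: the 4-node star graph G1 of Figure 1.
g1 = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1}}
print(len(neighborhood_subgraphs(g1, p=2)))  # one subgraph per node -> 4
```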
}, { "figure_ref": [ "fig_2", "fig_3", "fig_4", "fig_5" ], "heading": "2-D characteristic views of polyhedral objects", "publication_ref": [ "b30", "b31", "b32", "b33" ], "table_ref": [], "text": "In order to recognize an object with the method described above, one has to represent that object in terms of a few characteristic views and to describe each such characteristic view in terms of a set of weighted graphs. Unlike solutions that consists of computing characteristic views from CAD object descriptions, our approach for obtaining these views is to gather as many images of an object as characteristic views are needed for describing that object unambiguously. Although in this paper we use an ad-hoc technique, more formal methods may be found in [31], [32]: The authors define a set of characteristic views of a polyhedral object by partitioning a large set of object views into small sets of characteristic views.\nThe task of decomposing a characteristic view into subgraphs is not an easy one. The most general approach would be to implement a graph decomposition method. For example, one may attempt to partition a graph into a pre-specified number of subgraphs such that the number of connections (edges) between the subgraphs is minimized [33]. Here we prefer a more pragmatic solution. For example, one may consider all the nodes of a characteristic view and form subgraphs around each such node. A subgraph is thus formed by this node as well as the nodes that are at a distance less or equal than p edges away from this node. It turns out that this redundant decomposition of a characteristic view in small graphs of various sizes is one key to the success of our recognition method. Indeed, small perturbations in the topology of a view (due essentially to noise or to segmentation errors) will not affect the topology of all the subgraphs extracted from this view.\nIn the particular case of polyhedral objects, if the degree of a node is, on an average, equal to 3 and for p ≤ 2 then the number of nodes of the associated subgraphs varies between 5 and 10. Figure 3 shows a characteristic view of a simple polyhedral object and some subgraphs extracted from this view with p = 2.\nAs it has already been discussed, the topology of a characteristic view is not sufficient for describing the view unambiguously. For example, Figure 4 shows six different binary graphs which are isomorphic (they have the same topology). Clearly, one would like to be able to state that the top three graphs are different and the bottom three ones are identical. In other words, the top three ones do not look the same, although they have the same topology. The question of how to describe the 2-D appearance of a polyhedral object has already been addressed (see for example [34]) but the question of how to represent such an appearance with a weighted graph has not.\nOne way to label an edge is to characterize it according to the structure of the vertices at each endpoint of that edge. If we consider polyhedra that have at most 3 edges meeting at a vertex, then we obtain a catalogue of possible edge structures or edge appearances. It is sufficient to assign labels to these various appearances and to associate a weight to each label. Figure 5 shows an exhaustive catalogue of edge appearances and their weights.\nThere are three possible vertex structures: a 2-edge vertex or an \"L\", and two 3-edge vertices, an \"Y\" and an \"Arrow\". 
Since an edge divides the plane into two regions -the left side and the right side -we obtain the following list of features that allows the labelling and the weighting of an edge (see Figure 6 for an example of an edge labelled \"15\"):\n• the type of the first vertex (Arrow);\n• the type of the second vertex (Arrow);\n• the number of edges associated with the first vertex lying on the left side of the edge; (1)\n• the number of edges associated with the first vertex lying on the right side of the edge (1);\n• the number of edges associated with the second vertex lying on the left side of the edge (2);\n• the number of edges associated with the second vertex lying on the right side of the edge (0)." }, { "figure_ref": [ "fig_6" ], "heading": "Image processing", "publication_ref": [ "b34" ], "table_ref": [], "text": "In this section we describe the process by which a number of graphs is extracted from an image. This graph extraction process has some similarities with feature grouping since its goal is to provide a few \"key\" image features and reduce the complexity of the object recognition process. Image processing starts with extracting edges and with approximating these edges with straight lines. Junctions are next extracted. The junctions and the straight lines form a network of features, or a graphthe image graph.\nIf the scene is composed of just one object, then this image graph corresponds, up to some noise, to a view of that object. Single object scenes are used for building the database of characteristic views and it has been previously described.\nIf the scene is composed of more than one object and if the background is not uniform, then the image graph has to be further processed in order to be split into smaller graphs. Each small graph thus obtained is examined in order to decide whether it should be considered for recognition or not. To summarize, the process of extracting graphs from an image comprises the following steps:\n1. image graph extraction; 2. image graph splitting, and 3. graph evaluation.\nStep 1. Image graph extraction has been briefly outlined at the beginning of this section and is described in detail in [35].\nStep 2. Image graph splitting is based on a number of heuristics:\n2.1 Isolated and \"dangling\" edges are thrown out.\n2.2 It is assumed that \"T\" junctions arise from occlusions (an object in front of another object, an object in front of some background, or a self occlusion). Hence, the image graph is cut off at T junctions. Notice that this process may produce isolated edges which are immediately thrown out.\n2.3 Sequences of collinear edges are assumed to arise from the same physical edge and hence, collinear edges are fused into a unique edge.\n2.4 Finally, the image graph is decomposed into connected components.\nStep 3. Graph evaluation considers each connected component, one by one, and evaluates it in order to decide whether it should be further considered for recognition or thrown out. Let n be the number of nodes of a graph and let d i be the degree of node i, i.e., equation ( 3). The quantity:\nf (G) = 1 n n i=1 d i\nallows one to measure the complexity of a graph. It is straightforward to notice that for f (G) = 2 the graph has at most one cycle. 
Since graphs without cycles are not really relevant, one may consider only graphs for which:\nf (G) ≥ 2\nThe graphs that don't satisfy this constraint are consider irrelevant and therefore they are thrown out.\nLet us illustrate with an example the graph extraction process that we just described. Figure 7 shows an intensity image (topleft) and a network of lines and junctions extracted from this image (top-right) from which isolated and dangling lines are removed (middle-left). The next image (middle-right) shows the T-junctions that are removed from the list of junctions. This T-junction removal process produces isolated lines and dangling lines on one hand (which are removed) and collinear lines which are fused into a unique line on the other hand (bottomleft). We are left now with a number of connected image graphs. Each such connected component is evaluated according to the Step 3 just above. The latter process leaves 4 connected components in the image (bottom-right). These remaining image graphs will be further decomposed into subgraphs in order to be used by the indexing process. The decomposition of the 4 image graphs into subgraphs is not shown." }, { "figure_ref": [ "fig_7", "fig_8", "fig_9", "fig_10", "fig_8" ], "heading": "Experiments", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "All the object recognition experiments that we performed used the same database, namely 20 characteristic views associated with 6 objects, as shown on Figures 8 and9. All the objects in this database are 3-D polyhedral shapes with one exception. The database was built by showing each object, view by view, to the camera and by applying the graph extraction process described in section 5.\nWe carried out many experiments in which the input image varied from a very simple one with just one object against an uniform background to more complex images with many objects against a non-uniform background.\nIn one such experiment we grabbed 9 images of the same scene and we processed these images identically (with exactly the same segmentation parameters). Figure 10 shows these 9 images where the camera position and orientation varies with respect to the observed scene. Figure 11 shows the graphs extracted from these images. The images are numbered 1 to 9 from left to right and top to bottom.\nTable 1 summarizes the results of recognizing the two objects based on the two graphs (labelled \"0\" and \"1\") extracted from the 9 images. The figures in this table correspond to the scores (number of votes) received by each characteristic view when the image graphs are indexing the database. For each image graph the table records its highest score, sometimes the two highest scores. The first object (the graph labelled \"0\") has been correctly recognized 7 times and incorrectly recognized twice (images 5 and 9). Notice the high scores (between 16 and 24) obtained in the case of a correct recognition in comparison with the less high scores (between 6 and 8) obtained in the case of an incorrect recognition.\nThe same phenomena can be observed with the second object (the image graph labelled \"1\") which has been correctly recognized 6 times (the score varies between 8 and 25) and incorrectly recognized 3 times (the score varies between 4 and 7). An interesting remark is that all 5 incorrect recognitions assigned the same characteristic view to the unknown image graphs, namely the view labelled \"pieceLd02m\" (the bottomrightmost view on Figure 9). 
One may notice the poor segmentation associated with this characteristic view.\nThe recognition results reported above are barely affected if one increased the size of the database of characteristic views by adding views of very different objects. Of course, if the database contains two very similar objects, the system will fail to discriminate between these two objects." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b35" ], "table_ref": [], "text": "Unlike the prevailing paradigm in computer vision that suggests image-feature-to-object-feature matching to solve for ob- ject recognition, we described an approach that uses an indexing technique to compare objects in an image with objects in a database. Our method doesn't rely neither on precise knowledge about the geometry of the objects nor on reliable feature-to-feature assignments. Instead we describe both the images and the models by weighted graphs and we compare these graphs through their polynomial characterization, namely the second immanantal polynomial of the Laplacian matrix of a graph. This graph comparison was implemented in two steps (off-line and on-line): a database construction step (model graphs are stored in hash tables) and an indexing step (an image graph indexes in the pre-stored hash tables).\nIt is worthwhile to notice that, in the past, polynomial characterization of graphs has been used to represent and identify the topology of molecules [36]. At our knowledge, the graph theory literature doesn't describe any attempt to generalize polynomial characterizations to weighted graphs. It turns out that, at least for our purposes, this generalization is straightforward since the Laplacian matrix of a weighted graph has the same mathematical properties as the Laplacian matrix of a binary graph.\nWe believe that our indexing scheme based on this algebraic graph representation has a promising potential in computer vision and may provide in the future an interesting paradigm for indexing. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work has been supported by the Esprit programme through the SECOND project (Esprit-BRA No. 6769) and by the ORASIS project (PRC Communication homme/machine)." } ]
In computer vision, the indexing problem is the problem of recognizing a few objects in a large database of objects while avoiding the help of the classical image-feature-to-object-feature matching paradigm. In this paper we address the problem of recognizing 3-D polyhedral objects from 2-D images by indexing. Both the objects to be recognized and the images are represented by weighted graphs. The indexing problem is therefore the problem of determining whether a graph extracted from the image is present or absent in a database of model graphs. We introduce a novel method for performing this graph indexing process which is based both on polynomial characterization of binary and weighted graphs and on hashing. We describe in detail this polynomial characterization and then we show how it can be used in the context of polyhedral object recognition. Next we describe a practical recognition-by-indexing system that includes the organization of the database, the representation of polyhedral objects in terms of 2-D characteristic views, the representation of these views in terms of weighted graphs, and the associated image processing. Finally, some experimental results allow the evaluation of the system performance.
Polyhedral Object Recognition by Indexing
[ { "figure_caption": "Figure 1 :1Figure 1: These two graphs differ by one edge but their associated second immanantal polynomials are quite different.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The database has a three layer structure: graphs, characteristic views, and objects. An object may well have more than 2 characteristic views associated with it.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: An example of a characteristic view of an object and a few graphs extracted from this view.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Top -three graphs having the same topology. Bottom -three other graphs having the same topology and the same appearance.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: An exhaustive catalogue of the possible appearances of the edges of a polyhedral object that has, at most, 3 edges meeting at a vertex.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: The labelling of an edge depends on the structure of the two vertices at each endpoint of that edge (see text).", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: An example of extracting a set of 4 image graphs from an intensity image (top-left): Lines and junctions are detected (top-right), isolated and dangling lines are removed (middle-left), T-junctions are detected and removed (middle-right), collinear lines are fused into longer lines (bottom-left), the remaining graphs are evaluated and four of them survive (bottom-right).", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: This figure shows the intensity images of 20 characteristic views associated with 6 objects. These views and objects constitute the database.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: The 20 graphs (lines and junctions) extracted from the previous images. These 20 graphs are decomposed into subgraphs (the decomposition is not shown) and stored in the hash-tables associated with the database.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Six images of two objects to be recognized where the camera varies in position and orientation with respect to the two objects.", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: The graphs extracted from the previous images. Notice that the noise corruption of these graphs varies a lot even if there is only a small change in camera position and orientation.", "figure_data": "", "figure_id": "fig_10", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "This table shows the results of recognition for 9 images of the same scene. The figures correspond to scores (number of votes) as a result of the graph indexing process. 
The scores over-scripted by a ⋆ correspond to a correct recognition.", "figure_data": "imagegraphcharacteristic viewnumber number Ut01m Ut02m M01m M02m L05m Ld02m1018 ⋆24 ⋆123 ⋆2016 ⋆16 ⋆117 ⋆3018 ⋆18 ⋆4017 ⋆24 ⋆113 ⋆10506719 ⋆6017 ⋆23 ⋆1677023 ⋆148016 ⋆1459068113 ⋆", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Radu Horaud; Humberto Sossa
[ { "authors": "A P Ambler; H G Barrow; C M Brown; R M Burstall; R J Popplestone", "journal": "", "ref_id": "b0", "title": "A versatile computer-controlled assembly system", "year": "1973-08" }, { "authors": "D H Ballard; C M Brown", "journal": "Prentice Hall Inc", "ref_id": "b1", "title": "Computer Vision", "year": "1982" }, { "authors": "R C Bolles; R A Cain", "journal": "International Journal of Robotics Research", "ref_id": "b2", "title": "Recognizing and locating partially visible objects, the Local-Feature-Focus method", "year": "1982" }, { "authors": "R C Bolles; R Horaud", "journal": "International Journal of Robotics Research", "ref_id": "b3", "title": "3DPO: A three-dimensional part orientation system", "year": "1986" }, { "authors": "O D Faugeras; M Hebert", "journal": "International Journal of Robotics Research", "ref_id": "b4", "title": "The representation, recognition, and locating of 3-d objects", "year": "1986" }, { "authors": "N Ayache; O D Faugeras", "journal": "IEEE Trans. on Pattern Analysis and Machine Intelligence", "ref_id": "b5", "title": "HYPER: A new approach for the recognition and positioning of two-dimensional objects", "year": "1986-01" }, { "authors": "W E L Grimson; T Lozano-Perez", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b6", "title": "Localizing overlapping parts by searching the interpretation tree", "year": "1987-07" }, { "authors": "C Goad", "journal": "Ablex Publishing Corporation", "ref_id": "b7", "title": "Fast 3D model-based vision", "year": "1986" }, { "authors": "P J Flynn; A K Jain", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b8", "title": "BONSAI: 3-D object recognition using constrained search", "year": "1991-10" }, { "authors": "W Y Kim; A C Kak", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b9", "title": "3-D object recognition using bipartite matching embedded in discrete relaxation", "year": "1991-03" }, { "authors": "P J Flynn; A K Jain", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b10", "title": "CAD-based computer vision: From cad models to relational graphs", "year": "1991-02" }, { "authors": "S Dickinson; A Pentland; A Rosenfeld", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b11", "title": "3-D shape recovery using distributed aspect matching", "year": "1992" }, { "authors": "R Bergevin; M D Levine", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b12", "title": "Generic object recognition: Building and matching coarse descriptions from line drawings", "year": "1993-01" }, { "authors": "S Umeyama", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b13", "title": "An eigendecomposition approach to weighted graph matching problems", "year": "1988-05" }, { "authors": "M Hanajik; F J Kylstra; R G Van Vliet", "journal": "", "ref_id": "b14", "title": "An analytical approach to the matching of attributed graphs", "year": "1993-08" }, { "authors": "H A Almohamad; S O Duffuaa", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b15", "title": "A linear programming approach for the weighted graph matching problem", "year": "1993-05" }, { "authors": "L Hérault; R Horaud; F Veillon; J-J Niez", "journal": "", "ref_id": "b16", "title": "Symbolic Image Matching by Simulated Annealing", "year": "1990-09" }, { "authors": 
"V Tresp; G Gindi", "journal": "", "ref_id": "b17", "title": "Invariant object recognition by inexact subgraph matching with applications in industrial part recognition", "year": "1990-07" }, { "authors": "R Nevatia; T Binford", "journal": "Artifitial Intelligence", "ref_id": "b18", "title": "Description and recognition of complexcurved objects", "year": "1977" }, { "authors": "G J Ettinger", "journal": "", "ref_id": "b19", "title": "Large Hierarchical Object Recognition Using Libraries of Parmeterized Model Sub-Parts", "year": "1988-09" }, { "authors": "A Kalvin; E Schomberg; J T Schwartz; M Sharir", "journal": "The International Journal of Robotics Research", "ref_id": "b20", "title": "Two-dimensional model-based, boundary matchig using footprints", "year": "1986" }, { "authors": "Y Lamdan; H Wolfson", "journal": "", "ref_id": "b21", "title": "Geometric hashing: A general and efficient model-based recognition scheme", "year": "1988-12" }, { "authors": "G Stein; Medioni", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b22", "title": "Structural hashing: Efficient 3-d object recognition", "year": "1992-02" }, { "authors": "T M Breuel", "journal": "", "ref_id": "b23", "title": "Adaptive model base indexing", "year": "1989" }, { "authors": "D M Cvetkovic; M Doob; H Sachs", "journal": "Academic Press", "ref_id": "b24", "title": "Spectra of Graphs", "year": "1980" }, { "authors": "J Turner", "journal": "SIAM J. Appl. Math", "ref_id": "b25", "title": "Generalized matrix functions and the graph isomorphism problem", "year": "1968-05" }, { "authors": "R Merris; K R Rebman; W Watkings", "journal": "Linear Algebra Applications", "ref_id": "b26", "title": "Permanental polynomials of graphs", "year": "1981" }, { "authors": "R Merris", "journal": "SIAM Journal Alg. Disc. Meth", "ref_id": "b27", "title": "The second immanantal polynomial and the centroid of a graph", "year": "1986" }, { "authors": "G M Constantine", "journal": "Linear and Multilinear Algebra", "ref_id": "b28", "title": "Graph complexity and the laplacian matrix in blocked experiments", "year": "1990" }, { "authors": "R Sedgewick", "journal": "Addison-Wesley Publishing Company, Inc", "ref_id": "b29", "title": "Algorithms", "year": "1988" }, { "authors": "P Gros; R Mohr", "journal": "World Scientific", "ref_id": "b30", "title": "Automatic object modelization in computer vision", "year": "1992-08" }, { "authors": "P Gros", "journal": "", "ref_id": "b31", "title": "Matching and clustering: two steps towards automatic model generation in computer vision", "year": "1993-10" }, { "authors": "L Hérault; J J Niez", "journal": "Complex Systems", "ref_id": "b32", "title": "Neural Networks and Graph K-Partitioning", "year": "1989-12" }, { "authors": "T Kanade", "journal": "Artificial Intelligence", "ref_id": "b33", "title": "Recovery of the 3D shape of an object from a single view", "year": "1981-08" }, { "authors": "R Horaud; F Veillon; Th Skordas", "journal": "Springer Verlag", "ref_id": "b34", "title": "Finding geometric and relational structures in an image", "year": "1990-04" }, { "authors": "Y Kudo; T Yamasaki; S Sasaki", "journal": "Journal of Chemical Documentation", "ref_id": "b35", "title": "The characteristic polynomial uniquely represents the topology of a molecule", "year": "1973" } ]
[ { "formula_coordinates": [ 2, 306.6, 155.44, 204.57, 30.61 ], "formula_id": "formula_0", "formula_text": "ϕ : V 1 -→ V 2 such that: (v 1 , v 2 ) ∈ E 1 if and only if (ϕ(v 1 ), ϕ(v 2 )) ∈ E 2" }, { "formula_coordinates": [ 2, 326.78, 238.26, 230.88, 11.91 ], "formula_id": "formula_1", "formula_text": "A 2 = PA 1 P -1(1)" }, { "formula_coordinates": [ 2, 326.53, 411.55, 180.31, 41.27 ], "formula_id": "formula_2", "formula_text": "det(xI -PAP -1 ) = det(PxIP -1 -PAP -1 ) = det(P(xI -A)P -1 ) = det(xI -A)" }, { "formula_coordinates": [ 2, 326.53, 554.37, 231.13, 34.4 ], "formula_id": "formula_3", "formula_text": "         if G 1 = G 2 then p(G 1 ) = p(G 2 ) and if p(G 1 ) = p(G 2 ) then G 1 = G 2(2)" }, { "formula_coordinates": [ 3, 57.54, 319.76, 79.31, 31.89 ], "formula_id": "formula_4", "formula_text": "l i j =          d i if i = j -1 if" }, { "formula_coordinates": [ 3, 277.06, 332.63, 11.62, 8.9 ], "formula_id": "formula_5", "formula_text": "(3)" }, { "formula_coordinates": [ 3, 57.54, 455.42, 231.14, 26.65 ], "formula_id": "formula_6", "formula_text": "d 2 (xI -L(G)) = c 0 (L(G))x n -c 1 (L(G))x n-1 + ... + (-1) n c n (L(G))(4)" }, { "formula_coordinates": [ 3, 57.54, 548.78, 231.13, 68.25 ], "formula_id": "formula_7", "formula_text": "                         c 0 (L(G)) = n -1 c 1 (L(G)) = 2m(n -1) . . . c k (L(G)) = X∈Q k,n n i=1 l ii det (L(G){X}(i)) -det (L(G){X}) (5)" }, { "formula_coordinates": [ 3, 37.61, 652.62, 251.06, 66.75 ], "formula_id": "formula_8", "formula_text": "k (2 ≤ k ≤ n) obtained from the set {1, 2, ..., n}. For any n × n matrix M and for X ∈ Q k,n let M[X] be the k × k principal sub-matrix of M corresponding to X. M{X} is the n × n matrix: M{X} = M[X] 0 k 0 k I n-k(6)" }, { "formula_coordinates": [ 3, 326.53, 125.9, 143.04, 11.91 ], "formula_id": "formula_9", "formula_text": "d 2 (xI -L(G)) = d 2 (xI -PL(G)P -1 )" }, { "formula_coordinates": [ 3, 326.78, 475.06, 142.74, 96.67 ], "formula_id": "formula_10", "formula_text": "L(G) 1 =               3 -1 -1 -1 -1 1 0 0 -1 0 1 0 -1 0 0 1               L(G) 2 =               2 0 -1 -1 0 1 -1 0 -1 -1 3 -1 -1 0 -1 2              " }, { "formula_coordinates": [ 3, 326.78, 614.4, 194.46, 40.37 ], "formula_id": "formula_11", "formula_text": "Q 2,4 = {(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)} Q 3,4 = {(1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)} Q 4,4 = {(1, 2, 3, 4)}" }, { "formula_coordinates": [ 3, 326.78, 719.38, 142.96, 44.86 ], "formula_id": "formula_12", "formula_text": "L(G) 1 {(1, 2)} =               3 -1 0 0 -1 1 0 0 0 0 1 0 0 0 0 1               L(G) 1 {(1, 2)}(2) is a 3×3 matrix obtained from L(G) 1 {(1," }, { "formula_coordinates": [ 4, 57.79, 114.62, 126.96, 33.68 ], "formula_id": "formula_13", "formula_text": "L(G) 1 {(1, 2)}(2) =           3 0 0 0 1 0 0 0 1          " }, { "formula_coordinates": [ 4, 57.54, 187.69, 209.91, 26.78 ], "formula_id": "formula_14", "formula_text": "d 2 (xI -L(G) 1 ) = 3x 4 -18x 3 + 33x 2 -24x + 6 d 2 (xI -L(G) 2 ) = 3x 4 -24x 3 + 105x 2 -68x + 24" }, { "formula_coordinates": [ 4, 57.54, 253.59, 177.87, 26.78 ], "formula_id": "formula_15", "formula_text": "det(xI -L(G) 1 ) = x 4 -6x 3 + 9x 2 -4x det(xI -L(G) 2 ) = x 4 -8x 3 + 19x 2 -12x" }, { "formula_coordinates": [ 4, 57.54, 418.41, 239.5, 33.55 ], "formula_id": "formula_16", "formula_text": "l w i j =          D i if i = j -w i j if there is a weighted edge between i & 
j 0 if there is no edge between nodes i & j(7)" }, { "formula_coordinates": [ 4, 57.79, 490.8, 230.88, 29.68 ], "formula_id": "formula_17", "formula_text": "D i = n j=1 w i j(8)" }, { "formula_coordinates": [ 4, 57.54, 678.75, 231.13, 29.73 ], "formula_id": "formula_18", "formula_text": "Diff(G 1 , G 2 ) = n k=1 (c 1 k -c 2 k ) 2(9)" }, { "formula_coordinates": [ 7, 347.95, 345.59, 63.48, 29.68 ], "formula_id": "formula_19", "formula_text": "f (G) = 1 n n i=1 d i" }, { "formula_coordinates": [ 7, 347.95, 445.45, 34.9, 9 ], "formula_id": "formula_20", "formula_text": "f (G) ≥ 2" } ]
2023-11-21
[ { "figure_ref": [ "fig_1", "fig_1", "fig_2" ], "heading": "Introduction", "publication_ref": [ "b14", "b16", "b24", "b0", "b7", "b16", "b8", "b25", "b35", "b41", "b1", "b42", "b9", "b19", "b43", "b12" ], "table_ref": [], "text": "Graph Neural Networks (GNNs) [15,17,25] have consistently displayed remarkable performance for several graph classification tasks, including predicting molecular properties, diagnosing cancer, and analyzing brain data [1,8]. In comparison to node-level tasks, such as node classification, which predominantly utilize Graph Convolutional Networks (GCNs) [17] to create node representations for subsequent tasks, graph classification tasks demand comprehensive graph-level representations. This crucial distinction emphasizes the indispensable role of the pooling mechanism in graph classification. The pooling mechanism is vital, since it efficiently transforms the input graph, enriched with node representations derived from GCNs, into a single vector or a size-reduced simplified graph. This transformation is crucial for capturing the graph's overall structure and characteristics, thereby facilitating more precise and insightful graph classification tasks.\nSignificant progress has been made in developing effective graph pooling methods, which are crucial for enhancing the performance of downstream tasks. Employing a hierarchical architecture, graph pooling captures node correlations, as highlighted in works such as [9,26,36]. These methods can be broadly categorized into node clustering pooling and node drop pooling. Node clustering pooling methods (e.g., DiffPool [42], MinCutPool [2], and StructPool [43]) cluster nodes to form new ones, effectively preserving feature information. However, a major drawback of these methods is the distortion of the original graph structures. Furthermore, they require additional networks to learn a dense cluster assignment matrix, resulting in substantial computational and storage demands, especially for large graphs. On the other hand, node drop pooling methods (e.g., Graph U-Net [10], SAGPool [20], and GSAPool [44]), focus on retaining the most representative nodes by evaluating their significance. This method effectively preserves crucial structural information and is more efficient and practical compared to node clustering pooling, especially for managing large-scale networks. Although the node drop pooling method is renowned for its efficiency, it encounters challenges in mainstream applications. Node drop pooling iteratively discards nodes deemed less important, based on specific criteria, to achieve hierarchical representations. As illustrated in the top right of Figure 1, prevalent pooling methods typically employ a single independent network to assign scores, indicating each node's significance. Although the scoring process is learnable, its indirect connection to the final prediction can sometimes cause sub-optimal node selection. Therefore, addressing this issue requires a graph pooling function that explicitly bases the retention of nodes on their contributions to the classification result. As depicted in the bottom right of Figure 1, our proposed graph explicit pooling method (GrePool) selects nodes according to their impact on the final prediction results. Specifically, each node in the graph calculates attention scores in relation to an additional learnable global node, denoted as q global . Then, these attention scores are utilized to retain informative nodes. 
As a result, the capacity of pooling method can be flexibly controlled through the identification process where no additional parameters are introduced. Consequently, the global node's embedding is created through a weighted combination of the retained nodes' embeddings based on their attention scores. Subsequently, the global node's embedding serves as the basis for the final prediction. Hence, our method enables a strong correlation between the intertwined node selection and final prediction tasks, ensuring that the retained nodes truly contribute to the final prediction outcome. Compared to other node drop pooling models, GrePool takes into account the explicit influence on the classification result when performing node selection, without introducing additional parameters. Current graph pooling methods typically prioritize informative nodes for information propagation and neglect the discarded ones. The discarded nodes may enhance performance; thus, it is necessary to re-examine these methods. Therefore, we propose an approach where uninformative nodes, which are unnecessary for classification in our view, are uniformly distributed across all categories instead of being completely dismissed (GrePool+). For example, in Figure 2, Phenol and Anisole are organic compounds with a benzene ring. However, their chemical behavior is determined by their distinctive attributes: Phenol has a hydroxyl group ( -OH) while Anisole has a methoxy one ( -CH 3 ). The presence of the hydroxyl group in Phenol makes it weakly acidic and highly reactive, whereas the methoxy group in Anisole makes it non-acidic and relatively reactive. These groups (hydroxyl and methoxy) are critical for classification since they provide important information. On the other hand, carbon rings are common elements in many organic compounds; therefore, they are less distinctive or informative. Given that these common patterns coexist across various categories, our approach aims to distribute their embeddings uniformly across all categories, applying uniform loss. This ensures equal prediction probabilities at any given category. This strategy's outcomes are two fold. First, it emphasizes the significance of the input graph's informative components while filtering out the trivial elements. Second, it facilitates the back-propagation of gradients through the uninformative nodes, ensuring a more comprehensive and balanced learning process.\nIn summary, this paper presents GrePool, an innovative attention-based pooling method that ensures the retained nodes explicitly contribute to the final prediction outcome. Moreover, we enhance this approach by applying uniform loss to the discarded nodes. This enhanced approach is called GrePool+. Furthermore, this refined approach emphasizes the identification of informative nodes, thereby improving the overall efficacy and precision of graph classification. To evaluate our proposed method's effectiveness, we conduct extensive experiments on 12 commonly used datasets, including the large-scale Open Graph Benchmark (OGB) [13]. We also compare the results with 14 established baseline methods. The experimental results demonstrate that GrePool consistently surpasses the baseline methods for most of the datasets. Notably, the introduction of GrePool+ invariably boosts GrePool's performance without requiring additional computation. 
Our main contributions are summarized as follows:\n• We propose an attention-based graph pooling method that explicitly selects reserved nodes based on their significant contribution to the final prediction outcome, concurrently eliminating the need for additional score-predicting networks commonly observed in conventional graph pooling methods.\n• We innovatively harness the information from nodes that are commonly overlooked and discarded in conventional pooling methods, enhancing the training process and improving classification accuracy.\n• Our proposed methods, GrePool and GrePool+, are consistently effective and generalizable across 12 widely used datasets, outperforming 14 baseline methods in extensive experimental evaluations.\n2 Preliminaries and Related Works" }, { "figure_ref": [], "heading": "Notations", "publication_ref": [], "table_ref": [], "text": "G = (V, E) denotes a graph with the node set V and edge set E. The node attributes are denoted by X ∈ R n×d , where n is the number of nodes and d is the node attribute dimension. The graph topology is represented by an adjacency matrix A ∈ {0, 1} n×n ." }, { "figure_ref": [], "heading": "Problem Statement", "publication_ref": [], "table_ref": [], "text": "Definition 1 (Graph Classification). Given a set of graphs D = {(G 1 , y 1 ) , (G 2 , y 2 ) , • • • , (G t , y t )}, the primary objective of graph classification is to learn a mapping function f that can effectively associate each input graph G i with its corresponding label y i . This can be mathematically represented as:\nf : G → Y,(1)\nwhere G denotes the set of input graphs, Y represents the set of labels associated with the graphs, and t signifies the total number of graphs in the dataset." }, { "figure_ref": [], "heading": "Graph Pooling", "publication_ref": [ "b6", "b2", "b28", "b33", "b37", "b44", "b1", "b5", "b25", "b38", "b40", "b41", "b42", "b11", "b22" ], "table_ref": [], "text": "Definition 2 (Graph Pooling). A graph pooling operator, denoted as POOL, is defined as any function that maps a given graph G = (V, E) to a new pooled graph G ′ = (V ′ , E ′ ):\nG ′ = POOL(G),(2)\nwhere |V ′ | < |V|, and |V| represents the number of nodes in the original graph. It is worth noting that in certain exceptional cases, scenarios where |V ′ | ⩾ |V| may exist, resulting in upscaling the graph through pooling. The fundamental objective of graph pooling is to effectively reduce the number of nodes in a graph while capturing its hierarchical information.\nGraph pooling plays a crucial role in capturing the overall graph representation and can be broadly categorized into two types: global pooling and hierarchical pooling. Global pooling methods typically employ operations such as sum/average/max-pooling [7] or more sophisticated techniques [3,29,34,38,45] to aggregate node features and obtain graph-level representations. However, these methods often encounter information loss as they overlook the underlying graph structures. To address this issue, hierarchical pooling models have been proposed, which are classified into node clustering pooling and node drop pooling. Node clustering pooling treats graph pooling as a clustering problem, where nodes are mapped into clusters as new nodes in a coarsened graph [2,6,26,39,[41][42][43]. On the other hand, node drop pooling utilizes learnable scoring functions to identify a subset of nodes with lower significance scores from the original graph [4, 9-11, 14, 20-22, 24, 27, 30, 31, 44, 46]. 
Based on these selected nodes, a new coarsened graph is constructed by obtaining a new feature and adjacency matrix. Notably, node clustering pooling methods have limitations regarding storage complexity due to the computation of dense soft-assignment matrices. In contrast, node drop pooling methods are memory-efficient and better suited for large-scale graphs, although they may result in some information loss. For a more comprehensive understanding of these topics, we recommend referring to the recent reviews on graph pooling [12,23], which provides in-depth insights into the various pooling methods." }, { "figure_ref": [], "heading": "Attention in Graph Pooling", "publication_ref": [ "b32", "b9", "b17", "b13", "b0", "b31" ], "table_ref": [], "text": "Definition 3 (Graph Attention Mechanism). The attention mechanism with various graph attention functions can be defined, within a generalized framework, as follows:\nAttention = p(q(X), X),(3)\nwhere q(•) represents a function that generates attention to capture the node significance within the graph. The function p(•) utilizes the input data X to extract essential information based on the attention function. By processing the input data through the attention function, the model can extract relevant information, enhancing the overall performance and interpretability of the graph-based learning system.\nThe attention mechanism has recently emerged as a powerful tool in natural language processing and computer vision. Its effectiveness in adaptively selecting discriminative features and filtering noise information has led to its integration into GNNs [33]. Notably, attention mechanisms have also been introduced in graph pooling. One such approach is gPool [10,18], which employs a linear projection as an attention module to predict individual node coefficients. AttPool [14], on the other hand, leverages local/global attention to select discriminative nodes and generate a graph representation through attention-weighted pooling. GMT [1] takes a different approach, utilizing multi-head attention [32] to compress the nodes into a small set of important nodes and calculate their inter-node relationships. In contrast to these existing methods, our proposed approach introduces a novel technique: multi-head selfattention. This technique enables us to perform node selection and information aggregation distinctively and effectively. By leveraging the power of self-attention, we can dynamically identify and prioritize the most relevant nodes in the graph, while simultaneously aggregating their information to generate a comprehensive representation." }, { "figure_ref": [], "heading": "The Proposed Method", "publication_ref": [], "table_ref": [], "text": "This section provides a comprehensive analysis of the proposed method. We begin by introducing the key features and mechanisms of GrePool ( §3.1) and its variant, GrePool+ ( §3.2). We then analyze GrePool to examine the power of its expressiveness ( §3.3). Next, we compare our method with several closely related approaches ( §3.4). Finally, we explore the computational complexity of GrePool ( §3.5)." }, { "figure_ref": [ "fig_3" ], "heading": "Graph Explicit Pooling (GrePool)", "publication_ref": [], "table_ref": [], "text": "This section introduces GrePool's key mechanisms, emphasizing its node selection and information aggregation proficiency. As depicted in Figure 3, GrePool consists of three essential modules: graph convolution, attention-based graph pooling, and optimization objective. 
A detailed explanation of each module is presented below." }, { "figure_ref": [], "heading": "Graph Convolution", "publication_ref": [], "table_ref": [], "text": "The GCN module is GrePool's fundamental building block, enabling the model to effectively capture and propagate information throughout the graph. This module utilizes a GCN to learn node representations by aggregating information from neighboring nodes. It plays a pivotal role in capturing the local structural patterns and inherent features of the graph. A generic GNN layer can be expressed as follows:\n$h_v^{(l)} = f^{(l)}\left(h_v^{(l-1)}, \left\{ h_u^{(l-1)} \mid u \in \mathcal{N}(v) \right\}\right), \quad (4)$\nwhere N(v) ⊆ V represents the neighborhood of node v, h_v^(0) = X_v ∈ R^d denotes the initial node representation, and f^(l)(·) is a function parameterized by a neural network that transforms and aggregates information from the previous layer to the current one; it can be instantiated with various GNN formulations." }, { "figure_ref": [ "fig_2" ], "heading": "Attention-based Graph Pooling", "publication_ref": [ "b31", "b9", "b19" ], "table_ref": [], "text": "In the preceding analysis (in Figure 2), it is posited that only a subset of the node embeddings helps to predict the labels of a graph for a given task, permitting the safe removal of other nodes without affecting the network output. Drawing inspiration from the application of Transformers in text classification [32], we introduce a learnable global node, the output embedding of which encapsulates all pairwise interactions in a single classification vector. This approach offers two significant advantages: 1) it improves aggregation, outperforming traditional, non-learned readout strategies such as sum/mean pooling; 2) it reveals each node's contribution to the classification outcome by forwarding the global node to the classifier for label prediction, using the attention scores between the individual nodes and the global node. We expound on this module below.\nThe global node's embedding is updated using the self-attention mechanism, making the attention map a reflection of the relation or similarity between the nodes in the graph and the global node, formally expressed as:\n$h_{\mathrm{global}} = \mathrm{softmax}\!\left(\frac{q_{\mathrm{global}} K^{\top}}{\sqrt{d}}\right) V = a \cdot V, \quad (5)$\nwhere q_global, K, and V represent the global node's query vector, the key matrix, and the value matrix, respectively, within an attention head. Thus, the global node's output, h_global, consists of a linear combination of the value vectors V = [h_{v_1}, h_{v_2}, ..., h_{v_n}]^T. The coefficients of this combination are the attention values corresponding to the global node in relation to all other nodes.\nGiven that h_{v_i} is derived from the i-th node, the attention value a_i (the i-th element in a) quantifies the extent to which information from the i-th node is integrated into the global node's output (i.e., h_global) through the linear combination. Therefore, it is reasonable to infer that the magnitude of a_i is indicative of the significance of the i-th node. As a result, utilizing attention values has become a straightforward and widely adopted method for interpreting model decisions. Hence, we leverage these attention scores to define the node importance score S for graph pooling during training and inference, facilitating the dynamic discrimination between informative and uninformative nodes within graphs. Specifically, after calculating the node significance scores in the graph, we perform a selection operation, which selects the nodes with the highest ⌈p × n⌉ significance scores and coarsens the graph accordingly. In this instance, p is the pooling ratio, similar to established graph pooling methodologies [10,20].
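To make this scoring-and-selection step concrete, the following minimal PyTorch-style sketch implements a single-head version for one graph; the function name grepool_select, the tensor shapes, and the handling of dropped indices are our own illustrative assumptions rather than the authors' released implementation.

```python
import math
import torch

def grepool_select(H, q_global, W_k, W_v, pool_ratio=0.5):
    """One GrePool-style pooling step (single attention head, single graph).

    H:        [n, d] node embeddings produced by the preceding GCN layer
    q_global: [d]    learnable global-node query vector
    W_k, W_v: [d, d] key / value projection matrices
    Returns the gated embeddings of the retained nodes, their indices,
    the pooled global-node embedding, and the indices of the dropped nodes.
    """
    n, d = H.shape
    K, V = H @ W_k, H @ W_v                    # keys and values, each [n, d]
    scores = (q_global @ K.T) / math.sqrt(d)   # similarity of every node to the global node
    a = torch.softmax(scores, dim=-1)          # attention scores, used as node importance S

    k = max(1, math.ceil(pool_ratio * n))      # keep the top ceil(p * n) nodes
    idx = torch.topk(a, k).indices
    kept = set(idx.tolist())
    dropped = torch.tensor([i for i in range(n) if i not in kept])

    H_pooled = H[idx] * a[idx].unsqueeze(-1)   # gate retained embeddings by their scores
    h_global = a @ V                           # weighted combination of value vectors
    # The adjacency matrix would be coarsened as A[idx][:, idx]; omitted here.
    return H_pooled, idx, h_global, dropped

# Toy usage with random tensors (shapes only; no trained parameters).
d = 8
H = torch.randn(6, d)
q_global = torch.randn(d)
W_k, W_v = torch.randn(d, d), torch.randn(d, d)
H_pooled, idx, h_global, dropped = grepool_select(H, q_global, W_k, W_v, pool_ratio=0.5)
```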
Formally, this approach can be expressed as follows:\n$\mathrm{idx}^{(l)} = \mathrm{TOP}_k\!\left(S^{(l)}\right); \quad X^{(l+1)} = X^{(l)}(\mathrm{idx}^{(l)}, :) \odot S^{(l)}(\mathrm{idx}^{(l)}, :); \quad A^{(l+1)} = A^{(l)}(\mathrm{idx}^{(l)}, \mathrm{idx}^{(l)}), \quad (6)$\nwhere TOP_k sorts the values in S^(l) and returns the indices of the top k values, and idx^(l) denotes the indices of the nodes retained for the subsequent graph in layer l + 1.\nThe Optimization Objective We regard the global node embedding h_global from each network layer as a comprehensive representation of the entire graph. This representation is then subjected to a linear transformation and a softmax activation to generate the prediction, as shown below:\n$\hat{y} = \mathrm{softmax}\!\left(W\left(h_{\mathrm{global}}^{(1)} + h_{\mathrm{global}}^{(2)} + \cdots + h_{\mathrm{global}}^{(L)}\right)\right), \quad (7)$\nwhere h_global^(l) represents the global node embedding at the l-th layer, W ∈ R^{d×d'} is the weight matrix, and L denotes the total number of layers. Moreover, we aim to minimize the cross-entropy loss between the predictions and the ground-truth graph labels to optimize our model. This is defined as:\n$\mathcal{L}_{\mathrm{sup}} = -\frac{1}{|\mathcal{D}|} \sum_{G \in \mathcal{D}} y_G^{\top} \log(\hat{y}_G), \quad (8)$\nwhere L_sup represents the cross-entropy loss computed over the dataset D, and y_G is the ground-truth label vector associated with the graph G." }, { "figure_ref": [ "fig_3" ], "heading": "Graph Explicit Pooling with Uniform Loss (GrePool+)", "publication_ref": [ "b18" ], "table_ref": [], "text": "Similar to other graph pooling methodologies, our GrePool method selectively drops nodes from the graph, specifically those with lower attention scores, as illustrated in Figure 3 (b). However, the potential contributions of these dropped nodes to the prediction outcomes should be considered. In response, we apply a uniform loss to the discarded nodes to collect information from uninformative nodes, thereby enhancing our method.\nMore precisely, we hypothesize that nodes with lower attention scores correspond to trivial patterns or substructures, which may be irrelevant to classification tasks. To counteract this, we distribute the predictive probabilities of these nodes evenly across all categories. Thus, the uniform classification loss is defined as:\n$\mathcal{L}_{\mathrm{unif}} = \frac{1}{|\mathcal{D}|} \sum_{G \in \mathcal{D}} \mathrm{KL}(y_{\mathrm{unif}}, \tilde{y}_G), \quad (9)$\nwhere KL represents the Kullback-Leibler divergence [19], ỹ_G denotes the predictions obtained from the discarded node embeddings, and y_unif represents the uniform distribution across categories. Subsequently, we define the comprehensive loss function for GrePool+ as:\n$\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{sup}} + \lambda \cdot \mathcal{L}_{\mathrm{unif}}, \quad (10)$\nwhere λ serves as a trade-off parameter between the primary objective (L_sup) and the auxiliary uniform loss (L_unif). By optimizing the dual objectives, our approach successfully distinguishes between informative and uninformative nodes within the graph. Explicitly penalizing uninformative node embeddings encourages the GrePool method to prioritize and emphasize informative nodes, improving the quality of the resulting graph representation." }, { "figure_ref": [], "heading": "Expressiveness Power of GrePool", "publication_ref": [ "b36" ], "table_ref": [], "text": "In this section, we theoretically examine the GrePool methodology, emphasizing its expressive capacity. By leveraging the advancements made by powerful GNNs, we demonstrate that if our graph pooling function is injective, GrePool can achieve a level of expressiveness comparable to that of the renowned Weisfeiler-Lehman (WL) test [37]. The WL test is widely acknowledged for its exceptional ability to distinguish the local structures within a graph.\nTheorem 1.
Let A : G → R^n denote a GNN adhering to the neighborhood aggregation paradigm and utilizing an attention-based aggregator in conjunction with a readout function. Then A achieves its maximal discriminating ability, namely distinguishing unique local structures and matching the discriminating power of the 1-Weisfeiler-Lehman (1-WL) test in differentiating distinct global structures, when both the aggregation and readout functions are designed to be injective." }, { "figure_ref": [], "heading": "Proof.", "publication_ref": [ "b39", "b36", "b39" ], "table_ref": [], "text": "From Lemma 2 and Theorem 3 in [40], we know that when all functions in A are injective, A reaches the upper bound of its discriminating ability, which is the same as that of the WL test [37] for determining graph isomorphism. The detailed proofs can be found in [40].\nCorollary 1. Let F be the original attention-based aggregator and readout function. It operates on a multi-set H ∈ H, with H representing a node feature space that has been systematically transformed from the countable input feature space X. When F is injective, it is capable of mapping two disparate graphs, G_1 and G_2, onto distinct embeddings. This attribute ensures that the overarching process within the GrePool framework can achieve a level of expressiveness and discrimination analogous to that of the WL test. This injectivity is crucial for preserving the uniqueness of the structural information during the transformation from the graph domain to the embedding space, thereby facilitating the effective discrimination between non-isomorphic graphs.\nProof. To streamline the proof, we examine the injectivity of the attention-based aggregator and readout function, and limit our discussion to graphs with a fixed number of nodes. We consider that each graph comprises n nodes represented as a matrix X ∈ R^{n×d}, where d denotes the dimension of each node vector. The self-attention transformations are defined as\n$Q = XW_Q, \quad K = XW_K, \quad V = XW_V,$\nwhere W_Q, W_K, and W_V are the weight matrices associated with queries, keys, and values, respectively. The self-attention output is then computed as:\n$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d}}\right)V. \quad (11)$\nIn this instance, X_1 and X_2 represent the node features of two distinct graphs. Our objective is to demonstrate that their respective self-attention outputs are uniquely distinguishable. Given the distinctness of X_1 and X_2, their corresponding Q, K, V matrices will also be distinct, provided that the weight matrices W_Q, W_K, W_V are of full rank. This premise is grounded in the fact that multiplication by a full-rank matrix preserves the distinctness of differing inputs. By examining the product QK^T, we observe that the inputs X_1 and X_2 yield distinct matrices Q_1K_1^T and Q_2K_2^T. Following the softmax operation, which acts on these distinct matrices, the resulting distinct probability matrices (assuming no value overlap exists) produce unique output matrices when multiplied by V.\nFurthermore, the weighted readout can be approximated by any instance-wise feed-forward network, representing a transformation φ : R^d → R^{d'}. Such a function can be constructed over the multi-set elements h ∈ H so as to ensure injectivity. Thus, assuming a fixed number of nodes and a non-overlapping softmax output, the attention-based aggregator and readout function together constitute an injective function.
Based on this injectivity, the overall architectural framework of our model exhibits a discrimination level equivalent to that of the WL test, affirming its efficacy in graph representation." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b0" ], "table_ref": [], "text": "This section discusses the comparison of our proposed method with several closely related approaches. Through this analysis, we aim to highlight the distinctive features and advantages of our method and emphasize its unique contributions to graph representation learning. [1], which utilizes multi-head attention to cluster the given graph into representative nodes and calculate the relationships between them, the GrePool method adopts multi-head attention to select informative nodes and summarize the global embedding using the attention scores. This allows GrePool to focus on capturing the most relevant and informative nodes, enhancing the discriminating ability of the resulting graph representation." }, { "figure_ref": [], "heading": "GrePool vs. GMT In comparison to GMT", "publication_ref": [ "b29" ], "table_ref": [], "text": "GrePool vs. CGIPool On the other hand, CGIPool [30], introduces positive and negative coarsening modules with an attention mechanism to learn real and fake coarsened graphs. However, two primary distinctions exist between CGIPool and GrePool. First, CGIPool's positive and negative coarsening modules maximize the mutual information between the input and coarsened graph using a discriminator. In contrast, GrePool focuses on selecting informative nodes directly through multi-head self-attention. Second, while CGIPool adopts a GNN as the attention mechanism to generate attention scores, GrePool employs multi-head self-attention. This enables a more fine-grained analysis of the inter-node relationships and enhances the model's ability to capture complex dependencies within the graph structure." }, { "figure_ref": [], "heading": "Complexity Analysis", "publication_ref": [ "b19" ], "table_ref": [], "text": "The GrePool algorithm differs from existing pooling methods, such as SAGPool [20], by eliminating the need for a score prediction stage. This simplifies the process and avoids extra computational and parameter-related complexity. The only computation required is self-attention, which has a computational cost of O(n 2 ) in a single epoch, where n represents the number of nodes in the graph. Real-world graphs typically comprise around 20-30 nodes, so they are relatively small. Furthermore, the core mechanism of our method, self-attention, is well-suited for efficient GPU-based matrix operations. Therefore, although some perceived increase in computational complexity exists, the actual process is highly efficient and suitable for real-world applications." }, { "figure_ref": [], "heading": "Summary", "publication_ref": [], "table_ref": [], "text": "Our GrePool method utilizes self-attention mechanisms to intelligently select informative nodes within a graph. This node selection process is directly connected to the final prediction, allowing for more accurate and effective graph pooling. Importantly, our approach requires no additional parameters or significant computational overhead, distinguishing it from previous methods. Building upon the success of GrePool, our GrePool+ method utilizes the information from the dropped nodes, typically ignored by previous graph pooling methods. 
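As a concrete illustration of how the dropped nodes can still contribute to training, the following minimal sketch (our own naming and shape assumptions, not the released implementation) pushes the dropped nodes' predictions toward a uniform distribution over classes and combines this term with the supervised objective, mirroring Eqs. (8)-(10).

```python
import torch
import torch.nn.functional as F

def grepool_plus_loss(graph_logits, y_true, dropped_logits, lam=0.1):
    """GrePool+ objective: supervised cross-entropy on the graph prediction
    plus a KL term that drives the dropped nodes' predictions toward the
    uniform distribution over classes (cf. Eqs. (8)-(10)).

    graph_logits:   [B, C] logits from the summed global-node embeddings
    y_true:         [B]    ground-truth graph labels
    dropped_logits: [M, C] logits computed from the discarded node embeddings
    """
    l_sup = F.cross_entropy(graph_logits, y_true)                 # Eq. (8)

    log_probs = F.log_softmax(dropped_logits, dim=-1)             # dropped-node predictions
    uniform = torch.full_like(log_probs, 1.0 / dropped_logits.size(-1))
    l_unif = F.kl_div(log_probs, uniform, reduction="batchmean")  # Eq. (9)

    return l_sup + lam * l_unif                                   # Eq. (10)

# Toy usage: a batch of 4 graphs, 3 classes, 10 dropped nodes in total.
graph_logits = torch.randn(4, 3, requires_grad=True)
labels = torch.randint(0, 3, (4,))
dropped_logits = torch.randn(10, 3, requires_grad=True)
loss = grepool_plus_loss(graph_logits, labels, dropped_logits, lam=0.1)
loss.backward()  # gradients also flow through the dropped-node branch
```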
By incorporating a uniform loss, we ensure that the dropped nodes contribute to the overall learning process, enhancing the model's ability to capture the full range of information within the graph. To provide a comprehensive understanding of the capabilities of GrePool and GrePool+, we conduct a thorough analysis of their theoretical foundations and distinguishing factors. This comprehensive evaluation allows us to establish the significance and potential impact of our methods in the field of graph representation learning." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b27", "b12", "b16", "b39", "b33", "b44", "b41", "b1", "b15", "b34", "b0", "b9", "b19", "b43", "b30", "b4", "b0", "b39" ], "table_ref": [ "tab_0", "tab_1" ], "text": "Datasets We comprehensively evaluated our method using 12 graph datasets. These datasets consist of six biochemical, two social from TU Datasests [28], and four large-scale datasets from the Open Graph Benchmark (OGB) [13]. Including these real-world datasets provides a wide-range of content domains and dataset sizes for a robust assessment of our method's performance. A clear overview of the dataset characteristics is provided in Tables 1 and2.\nModels To validate the superiority of GrePool, we used 14 methods as baselines for a comprehensive comparison: 1) GNN-based methods such as GCN [17] and GIN [40]; 2) Flat pooling methods such as Set2set [34] and SortPool [45]; 3) Node clustering pooling methods, including DiffPool [42], MinCutPool [2], MemPool [16], HaarPool [35], and GMT [1]; 4) Node drop pooling methods, including TopKPool [10], SAGPool [20], GSAPool [44], and ASAP [31]; 5) Edge-based pooling method such as EdgePool [5]. The diverse range of baseline methodologies ensures that our comparative assessment is robust, spanning various approaches and paradigms in graph pooling.\nImplementation Details To ensure a fair comparison, we standardized the pooling ratio to 0.5 and 0.25 for the TU and OGB datasets, respectively, across all methods, following the established settings outlined in [1,40]. Additionally, we adopted the parameter settings (excluding the pooling ratio) specified in the corresponding papers for certain comparative models. In cases where the parameter settings were not provided, we conducted parameter tuning to optimize the model's performance. Accuracy was selected as the metric and 10 runs were performed to ensure the reliability of the results. Furthermore, we explored the additional parameter λ within the range of {0.01, 0.1, 1} for our method." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [ "tab_0", "tab_1", "tab_0", "tab_1" ], "text": "The main experimental results are presented in Tables 1 and2, which provide valuable insights into our method's performance. Through a thorough analysis of these results, we uncovered several significant and insightful findings. In the following sections, we will explore these findings in detail.\nPerformance of GrePool First, GrePool consistently surpasses competing methods on nearly all examined datasets, adequately proving its efficacy. This validation reinforces the superiority of our approach in achieving superior performance in graph classification tasks. 
Second, in comparison to GCN-based and flat pooling methods such as GCN, GIN, Set2Set, and SortPool, the GrePool method exhibits substantial improvements on most datasets. This is because GCN-based and flat pooling methods do not consider the graphs' hierarchical structures, a limitation that our method effectively addresses. Moreover, these findings highlight the importance of incorporating hierarchical pooling layers in graph representation learning. Third, the GrePool method outperforms node drop pooling methods such as SAGPool, indicating a more efficient strategy for retaining the nodes critical for performance. Fourth, as indicated in Table 1, the GrePool method demonstrates increasingly pronounced enhancements over the baseline models as the dataset size rises. For instance, on the NCI1, NCI109, and MUTAGENICITY datasets, GrePool's performance improved by 6.85%, 7.42%, and 2.87%, respectively. These significant improvements indicate GrePool's scalability, especially in the context of large datasets. Notably, due to memory and computational resource limitations, some memory-intensive or time-consuming pooling baselines, such as HaarPool, are excluded from the results in Table 2 for the OGB datasets." }, { "figure_ref": [ "fig_5", "fig_6" ], "heading": "Performance of GrePool+", "publication_ref": [], "table_ref": [ "tab_0", "tab_1" ], "text": "The GrePool method utilizing the uniform loss on uninformative nodes consistently outperforms the alternative that does not apply the uniform loss. Although the improvement may seem modest, it is achieved without introducing any additional computational overhead. This demonstrates that preserving informative nodes while retaining the information from the uninformative ones is more effective than keeping informative nodes only. It also highlights the effectiveness of our informative node identification strategy, since the majority of informative nodes are well preserved. Furthermore, we note that GrePool+ displays smaller fluctuations in accuracy compared to the approaches that simply discard uninformative nodes. This is evident in the lower standard deviation values presented in Tables 1 and 2 for the MUTAG, NCI1, and TOXCAST datasets. The reduced fluctuation indicates that including the uniform loss on uninformative nodes improves accuracy and enhances training stability.\nImpact of Pooling Ratio, Number of Layers and Trade-off Parameter We conducted an in-depth analysis of the effects of L, p, and λ using GrePool on five graph datasets: NCI1, COLLAB, MUTAG, PTC-MR, and OGB-HIV. The comprehensive results of our analysis are presented in Figures 4 and 5. First, we investigated the impact of the pooling ratio on graph classification performance. Our findings reveal that employing large pooling ratios results in performance fluctuations, indicating the presence of redundant information within the graphs. Specifically, larger pooling ratios introduce an increased amount of redundant information, which can hinder performance rather than enhance it. Furthermore, GrePool's accuracy range is relatively small, suggesting that our method effectively selects the essential nodes for graph-level representation learning, regardless of the pooling ratio. Second, we examined the effect of increasing the value of L. Our observations demonstrate that for small-scale datasets such as MUTAG, the test accuracy decreases as L increases. Conversely, for relatively large-scale datasets like OGB-HIV, the test accuracy exhibits an upward trend. This phenomenon can be attributed to the potential overfitting of deeper GrePool models when applied to small-scale datasets. Third, we explored the impact of the trade-off parameter λ.
The results indicate that our model performs optimally when λ is set around 0.1 and 1; values that are too large or too small have a detrimental effect on the model's performance. Furthermore, we investigated the performance of different node selection strategies (our attention-based selection, random selection, and reverse selection; see Figure 6) under varying pooling ratios. The results presented in Figure 7 demonstrate that as the pooling ratio increases, the performance gap among the three strategies becomes narrower. Notably, when the pooling ratio reaches 0.9, the accuracy of random selection nearly equals that of our attention-based method. This is because the selection strategy becomes less influential as the number of remaining nodes increases. In fact, as shown in Figure 7, pooling methods mostly perform better when the pooling ratio is between 0.5 and 0.7, since redundant information is present in the graphs." }, { "figure_ref": [], "heading": "Broader Evaluation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Evaluation on Other Graph Pooling Methods", "publication_ref": [ "b19", "b43" ], "table_ref": [ "tab_3" ], "text": "To evaluate the generalization ability of the uniform loss operation, we extended its application to other node drop pooling methods. Specifically, we selected two representative pooling methods, namely SAGPool [20] and GSAPool [44], and conducted experiments on five diverse graph datasets with varying sizes and domains. The experimental settings were kept consistent with those employed in the GrePool experiments. As illustrated in Table 3, SAGPool+ and GSAPool+ denote the integration of the uniform loss with SAGPool and GSAPool, respectively. The results exhibit a marked improvement in performance attributable to our methodological enhancements, with pronounced benefits observed on small-scale graphs. This empirical evidence aligns with the improvements observed in the GrePool+ experiments previously discussed, further validating the efficacy and adaptability of our proposed approach." }, { "figure_ref": [], "heading": "Evaluation on the Node Classification Task", "publication_ref": [ "b32", "b46" ], "table_ref": [ "tab_4" ], "text": "In light of the success of GrePool+, we recognized the potential for combining the drop-with-uniform-loss strategy with the Graph Attention Network (GAT) [33] to address the node classification task. As is known, GAT aggregates information from all neighbors using attention mechanisms. However, not all of this information is beneficial for node classification, especially on heterophilic graph datasets [47]. To mitigate this issue, we applied the proposed drop-with-uniform-loss strategy to GAT (i.e., GAT+). This strategy effectively reduces the impact of noisy information by discontinuing information aggregation from nodes with lower attention scores. Additionally, we applied the uniform loss to the representations of these stop-aggregating nodes. To evaluate the effectiveness of GAT+, we conducted experiments on six commonly used heterophilic datasets, including Cornell, Texas, Wisconsin, Actor, Squirrel, and Chameleon. The results presented in Table 4 consistently demonstrate that our method outperforms the baseline models across all datasets. This highlights the effectiveness and generalization ability of our proposed strategy and offers a novel perspective on node classification tasks." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This study introduced GrePool, an innovative graph pooling method designed to selectively discard nodes based on their direct impact on the final prediction outcome.
This is achieved without additional networks or parameters. Building on this, we presented GrePool+, an enhanced version of GrePool that utilizes information from the nodes typically overlooked and discarded by standard graph pooling methods. This approach refines the training process and enhances classification accuracy. We then theoretically and empirically validated the efficacy and generalization capabilities of GrePool and GrePool+, providing substantial evidence for their application. The experimental evaluation results demonstrate the effectiveness of our proposed methods and reveal several insightful observations regarding existing graph pooling practices. These findings hold the potential to inspire further advancements in the field. Despite these contributions, several challenges remain. Future research could focus on adjusting the attention weights, particularly the scores from different heads within the self-attention mechanism. Additionally, applying GrePool and GrePool+ to other graph-related tasks presents an opportunity for further exploration and validation of these methods." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements This work was supported in part by the Natural Science Foundation of China (Nos. 61976162, 82174230) and the Artificial Intelligence Innovation Project of the Wuhan Science and Technology Bureau (No. 2022010702040070)." } ]
Graph pooling has been increasingly recognized as crucial for Graph Neural Networks (GNNs) to facilitate hierarchical graph representation learning. Existing graph pooling methods commonly consist of two stages: selecting top-ranked nodes and discarding the remaining to construct coarsened graph representations. However, this paper highlights two key issues with these methods: 1) The process of selecting nodes to discard frequently employs additional Graph Convolutional Networks or Multilayer Perceptrons, lacking a thorough evaluation of each node's impact on the final graph representation and subsequent prediction tasks. 2) Current graph pooling methods tend to directly discard the noise segment (dropped) of the graph without accounting for the latent information contained within these elements. To address the first issue, we introduce a novel Graph explicit Pooling (GrePool) method, which selects nodes by explicitly leveraging the relationships between the nodes and final representation vectors crucial for classification. The second issue is addressed using an extended version of GrePool (i.e., GrePool+), which applies a uniform loss on the discarded nodes. This addition is designed to augment the training process and improve classification accuracy. Furthermore, we conduct comprehensive experiments across 12 widely used datasets to validate our proposed method's effectiveness, including the Open Graph Benchmark datasets. Our experimental results uniformly demonstrate that GrePool outperforms 14 baseline methods for most datasets. Likewise, implementing GrePool+ enhances GrePool's performance without incurring additional computational costs.
Careful Selection and Thoughtful Discarding: Graph Explicit Pooling Utilizing Discarded Nodes
[ { "figure_caption": "Figure 11Figure 1 Comparison between mainstream graph pooling methods (top right) and our proposed method (bottom right). A detailed introduction to the symbols in the figure can be found in §2.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 22Figure 2 Illustration of informative and uninformative nodes in the graph.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 33Figure 3 An illustrative overview of our proposed method. (a) We present the overall graph pooling procedure, highlighting the integration of GrePool. (b) We explore the details of GrePool, emphasizing its ability to distinguish between informative and uninformative nodes based on the attention scores from the attention module. (c) We introduce the concept of uniform loss. This loss function assigns a uniform label to uninformative nodes' embeddings, facilitating their differentiation from informative nodes during training. The enhanced version of GrePool is referred to as GrePool+.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "v = X ∈ R d denotes the initial node representation, and f (l) (•) is a function parameterized by a neural network. This function transforms and aggregates information from the previous to the current layer. Notably, this function can be incorporated into various GNN formulations.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 44Figure 4 Performance of GrePool with distinct pooling ratios and a different number of layers.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 55Figure 5 Performance of GrePool with different trade-off parameters on two graph datasets.Impact of Pooling Ratio, Number of Layers and Trade-off Parameter We conducted an in-depth analysis of the effects of L, p, and λ using GrePool on five graph datasets: NCI1, COLLAB, MUTAG, PTC-MR, and OGB-HIV. The comprehensive results of our analysis are presented in Figures4 and 5. First, we investigated the impact of the pooling ratio on graph classification performance. Our findings reveal that employing large pooling ratios result in performance fluctuations, indicating the presence of redundant information within the graphs. Specifically, larger pooling ratios introduce an increased amount of redundant information, which can hinder performance rather than enhance it. Furthermore, GrePool's accuracy range is relatively small, suggesting that our method effectively selects essential nodes for graph-level representation learning, regardless of the pooling ratio. Second, we examined the effect of increasing the value of L. Our observations demonstrate that for small-scale datasets such as MUTAG, the test accuracy decreases as L increases. Conversely, for relatively large-scale datasets like OGB-HIV, the test accuracy exhibits an upward trend. Such phenomenon can be attributed to the potential overfitting of deeper GrePool models when applied to small-scale datasets. Third, we explored the impact of the trade-off", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Performance across nine datasets in the graph classification task. The reported results are mean and standard deviations over 10 different runs. 
1) Red: the best performance per dataset. 2) Green: the second best performance per dataset. 3) Blue: the third best performance per dataset. ±3.52 67.00 ±11.4 52.57 ±8.30 76.45 ±2.34 44.84 ±9.10 78.64 ±1.60 50.40 ±3.12 79.22 ±1.59 GIN 70.44 ±2.49 64.00 ±8.89 53.71 ±8.11 69.88 ±2.06 43.33 ±6.41 75.47 ±2.24 49.93 ±2.92 78.52 ±2.01 Set2set 71.00 ±4.14 67.50 ±6.02 54.86 ±9.63 69.40 ±3.00 44.00 ±8.44 80.11 ±1.73 50.47 ±2.78 78.54 ±1.74 SortPool 74.14 ±2.66 78.56 ±10.5 56.00 ±9.23 72.85 ±2.26 31.50 ±9.76 77.93 ±1.90 50.20 ±2.98 78.90 ±1.79 EdgePool 76.53 ±0.50 73.00 ±0.87 54.29 ±4.67 75.56 ±2.53 36.17 ±12.7 81.71 ±1.92 49.80 ±2.51 81.20 ±1.42 DiffPool 77.04 ±0.73 82.50 ±2.54 55.26 ±3.84 75.38 ±0.66 51.27 ±2.89 79.80 ±0.24 51.03 ±0.48 79.24 ±1.99 MinCutPool 75.26 ±2.57 81.00 ±10.2 55.43 ±3.54 73.99 ±1.82 48.83 ±12.5 78.48 ±2.29 50.33 ±3.63 81.16 ±1.25 HaarPool 77.06 ±1.91 62.50 ±2.50 58.14 ±7.98 70.97 ±1.89 43.67 ±5.26 79.36 ±2.13 47.73 ±2.24 80.66 ±1.59 MemPool 63.43 ±2.78 63.50 ±8.67 53.43 ±9.90 62.50 ±2.37 46.83 ±7.80 72.02 ±2.27 48.13 ±3.07 78.56 ±1.42 GMT 77.32 ±1.97 83.00 ±11.8 55.71 ±9.54 78.04 ±1.88 48.29 ±7.12 81.95 ±1.73 50.40 ±1.83 80.62 ±1.96 SAGPool 71.95 ±2.81 67.50 ±7.83 55.43 ±7.47 70.99 ±2.37 42.67 ±8.92 76.23 ±3.73 49.87 ±3.51 79.64 ±2.19 TopKPool 73.48 ±2.07 84.50 ±9.34 54.57 ±10.3 73.07 ±1.87 40.83 ±7.97 75.68 ±4.65 50.20 ±2.98 78.56 ±2.11 GSAPool 73.31 ±3.70 63.00 ±10.5 53.59 ±2.69 72.25 ±2.24 45.67 ±9.58 77.98 ±3.25 49.80 ±3.46 78.64 ±1.78 ASAP 74.82 ±2.90 73.00 ±11.6 55.24 ±4.86 72.20 ±3.18 42.33 ±7.79 80.16 ±2.13 49.60 ±3.23 --GrePool 82.62 ±2.21 86.25 ±8.35 59.86 ±6.67 82.13 ±1.57 51.92 ±5.64 83.03 ±1.79 50.77 ±3.25 81.42 ±1.53 GrePool+ 83.07 ±1.73 88.50 ±6.73 62.86 ±6.70 82.15 ±1.92 52.92 ±6.81 83.30 ±1.71 51.10 ±2.87 81.51 ±1.19", "figure_data": "Biochemical Domain (6)Social Domain (2)NCI1MUTAG PTC-MR NCI109 ENZYMES MUTAGE. IMDB-M COLLAB# graphs4,1101883444,1276004,3371,5005,000# nodes29.8717.9314.2929.6832.6330.3213.0074.49GCN76.03", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance of graph classification task on four OGB datasets. ±2.34 64.43 ±2.16 73.42 ±0.67 59.76 ±0.65 SortPool 71.88 ±1.83 64.33 ±3.10 68.90 ±0.78 59.28 ±0.99 EdgePool 72.15 ±1.56 68.56 ±1.43 74.54 ±0.79 62.57 ±1.36 DiffPool 75.05 ±1.71 64.77 ±2.43 75.82 ±0.69 65.79 ±0.87 MinCutPool 73.91 ±1.10 66.47 ±1.90 78.78 ±0.61 63.66 ±1.56 MemPool 73.75 ±1.90 66.47 ±1.90 72.05 ±0.93 61.85 ±0.36 GMT 76.41 ±2.32 66.88 ±1.59 76.56 ±0.90 64.53 ±0.92 SAGPool 70.19 ±3.66 64.29 ±2.96 69.39 ±1.88 59.09 ±1.38 TopKPool 71.24 ±2.97 65.93 ±2.60 68.69 ±2.02 58.63 ±1.56 GSAPool 71.47 ±2.43 64.49 ±3.31 69.18 ±2.05 59.60 ±1.17 ASAP 71.60 ±1.71 61.93 ±3.18 70.00 ±1.50 60.32 ±1.34 GrePool 76.17 ±1.06 66.21 ±1.69 77.27 ±0.54 65.92 ±0.75 GrePool+ 75.82 ±1.48 66.91 ±1.74 77.62 ±0.44 65.97 ±0.58", "figure_data": "OGB Datasets (4)HIVBBPBTOX21 TOXCAST# graphs41,1272,0397,8318,576# nodes25.5124.0618.5718.78Set2Set73.42", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "1 and 1. Values that are too large or small can have a detrimental effect on the model's performance. GrePool model. All the node selection strategies were conducted under identical settings, except for the node selection process. The results in Figure6demonstrate that our attention-based node selection strategy outperforms the other selection strategies in accuracy. 
Additionally, the reverse selection strategy's performance underperforms compared to random selection, indicating that our method effectively identifies essential nodes for graph-level representation learning, while the discarded nodes are deemed uninformative for the classification task. Furthermore, it is observed that with the increase in the average number of nodes in the graphs (e.g., MUTAG: 17.9, ENZYMES: 32.63, and D&D: 284.3), the performance gap between GrePool and random selection decreases. This observation suggests that the selection strategy is essential for graphs with fewer nodes. It is notable that the average node number in most real-world datasets for graph classification tasks typically ranges from 20 to 30.", "figure_data": "Impact of Node Selection Strategies: In this section, we aim to evaluate the effectiveness of our proposed method by testing the effects of different node selection strategies on various graph datasets. Specifically, we compared our attention-based node selection strategy with random and reverse selection strategies, as shown in Figure 6. Random selection refers to randomly selecting informative tokens, whereas reverse selection involves selecting nodes with the lowest attention scores. Figure 6: Model performance with different node selection strategies (accuracy %; annotated gains of +1.9 on MUTAG, +1.7 on ENZYMES, and +1.0 on D&D). Figure 7: Performance of different strategies at different pooling ratios (accuracy %, Attention (our) / Random / Reverse) -- 0.3: 51.0 / 49.4 / 47.4; 0.5: 51.9 / 50.1 / 47.9; 0.7: 53.0 / 52.2 / 51.9; 0.9: 53.7 / 52.9 / 52.3.", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Results of graph classification task with two baseline methods on five different size datasets. MUTAG PTC-MR NCI109 FRAN. COLLAB SAGPool 67.50 ±7.83 55.43 ±7.47 70.99 ±2.37 59.70 ±2.54 79.64 ±2.19 SAGPool+ 71.00 ±6.15 56.59 ±5.81 70.44 ±2.43 60.20 ±2.48 80.97 ±1.86 GSAPool 63.00 ±10.5 53.59 ±2.69 72.25 ±2.24 60.30 ±2.66 78.64 ±1.78 GSAPool+ 67.50 ±9.69 56.18 ±1.43 73.95 ±1.81 60.48 ±2.48 79.44 ±1.30", "figure_data": "Impro. 5.19% ↑ 2.09% ↑ 0.7% ↓ 0.8% ↑ 1.67% ↑ Impro. 7.14% ↑ 4.83% ↑ 2.35% ↑ 0.3% ↑ 1.0% ↑", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results of node classification across six datasets. The reported results are mean and standard deviation over 10 runs. Cornell Texas Wiscon. Actor Squirrel Chamel. GAT 43.51 ±7.1 58.65 ±4.2 52.35 ±2.6 27.57 ±0.8 27.29 ±1.6 43.55 ±2.2 GAT+ 48.92 ±5.2 62.97 ±5.8 54.51 ±4.0 28.30 ±1.0 27.50 ±1.4 43.86 ±2.3 Impro. 12.4% ↑ 7.36% ↑ 4.1% ↑ 2.6% ↑ 0.7% ↑ 0.7% ↑", "figure_data": "", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Chuang Liu; Wenhang Yu; Kuang Gao; Xueqi Ma; Yibing Zhan; Jia Wu; Bo Du; Wenbin Hu
[ { "authors": "Jinheon Baek; Minki Kang; Sung Ju Hwang", "journal": "", "ref_id": "b0", "title": "Accurate learning of graph representations with graph multiset pooling", "year": "2021" }, { "authors": "Maria Filippo; Daniele Bianchi; Cesare Grattarola; Alippi", "journal": "", "ref_id": "b1", "title": "Spectral clustering with graph neural networks for graph pooling", "year": "2020" }, { "authors": "David Buterez; Jon Paul Janet; Steven J Kiddle; Dino Oglic; Pietro Liò", "journal": "", "ref_id": "b2", "title": "Graph neural networks with adaptive readouts", "year": "2022" }, { "authors": "Yuzhou Chen; Yulia R Gel", "journal": "", "ref_id": "b3", "title": "Topological pooling on graphs", "year": "2023" }, { "authors": "Frederik Diehl", "journal": "", "ref_id": "b4", "title": "Edge contraction pooling for graph neural networks", "year": "2019" }, { "authors": "Alexandre Duval; Fragkiskos Malliaros", "journal": "", "ref_id": "b5", "title": "Higher-order clustering and pooling for graph neural networks", "year": "2022" }, { "authors": "David Duvenaud; Dougal Maclaurin; Jorge Aguilera-Iparraguirre; Rafael Gómez-Bombarelli; Timothy Hirzel; Alán Aspuru-Guzik; Ryan P Adams", "journal": "", "ref_id": "b6", "title": "Convolutional networks on graphs for learning molecular fingerprints", "year": "2015" }, { "authors": "Federico Errica; Marco Podda; Davide Bacciu; Alessio Micheli", "journal": "", "ref_id": "b7", "title": "A fair comparison of graph neural networks for graph classification", "year": "2020" }, { "authors": "H Gao; Y Liu; S Ji", "journal": "IEEE Trans Pattern Anal Mach Intell", "ref_id": "b8", "title": "Topology-aware graph pooling networks", "year": "2021" }, { "authors": "Hongyang Gao; Shuiwang Ji", "journal": "", "ref_id": "b9", "title": "Graph u-nets", "year": "2019" }, { "authors": "Xing Gao; Wenrui Dai; Chenglin Li; Hongkai Xiong; Pascal Frossard", "journal": "IEEE Trans Neural Netw Learn Syst", "ref_id": "b10", "title": "ipool-information-based pooling in hierarchical graph neural networks", "year": "2021" }, { "authors": "Daniele Grattarola; Daniele Zambon; Filippo ; Maria Bianchi; Cesare Alippi", "journal": "IEEE Trans Neural Netw Learn Syst", "ref_id": "b11", "title": "Understanding pooling in graph neural networks", "year": "2022" }, { "authors": "Weihua Hu; Matthias Fey; Marinka Zitnik; Yuxiao Dong; Hongyu Ren; Bowen Liu; Michele Catasta; Jure Leskovec", "journal": "", "ref_id": "b12", "title": "Open graph benchmark: Datasets for machine learning on graphs", "year": "2020" }, { "authors": "Jingjia Huang; Zhangheng Li; Nannan Li; Shan Liu; Ge Li", "journal": "", "ref_id": "b13", "title": "Attpool: Towards hierarchical feature representation in graph convolutional networks via attention mechanism", "year": "2019" }, { "authors": "Taisong Jin; Huaqiang Dai; Liujuan Cao; Baochang Zhang; Feiyue Huang; Yue Gao; Rongrong Ji", "journal": "China Inf Sci", "ref_id": "b14", "title": "Deepwalk-aware graph convolutional networks", "year": "2022" }, { "authors": "Kaveh Amir Hosein Khasahmadi; Parsa Hassani; Leo Moradi; Quaid Lee; Morris", "journal": "", "ref_id": "b15", "title": "Memory-based graph networks", "year": "2020" }, { "authors": "Thomas N Kipf; Max Welling", "journal": "", "ref_id": "b16", "title": "Semi-supervised classification with graph convolutional networks", "year": "2017" }, { "authors": "Boris Knyazev; Graham W Taylor; Mohamed Amer", "journal": "", "ref_id": "b17", "title": "Understanding attention and generalization in graph neural networks", "year": "2019" }, { 
"authors": "Solomon Kullback; Richard A Leibler", "journal": "Ann. Math. Statist", "ref_id": "b18", "title": "On information and sufficiency", "year": "1951" }, { "authors": "Junhyun Lee; Inyeop Lee; Jaewoo Kang", "journal": "", "ref_id": "b19", "title": "Self-attention graph pooling", "year": "2019" }, { "authors": "Maosen Li; Siheng Chen; Ya Zhang; Ivor Tsang", "journal": "", "ref_id": "b20", "title": "Graph cross networks with vertex infomax pooling", "year": "2020" }, { "authors": "Chuang Liu; Yibing Zhan; Xueqi Ma; Dapeng Tao; Bo Du; Wenbin Hu", "journal": "", "ref_id": "b21", "title": "Masked graph auto-encoder constrained graph pooling", "year": "2022" }, { "authors": "Chuang Liu; Yibing Zhan; Jia Wu; Chang Li; Bo Du; Wenbin Hu; Tongliang Liu; Dacheng Tao", "journal": "", "ref_id": "b22", "title": "Graph pooling for graph neural networks: Progress, challenges, and opportunities", "year": "2023" }, { "authors": "Yiqin Lv; Zhiliang Tian; Zheng Xie; Yiping Song", "journal": "", "ref_id": "b23", "title": "Multi-scale graph pooling approach with adaptive key subgraph for graph representations", "year": "2023" }, { "authors": "Xiaojun Ma; Ziyao Li; Guojie Song; Chuan Shi", "journal": "Sci China Inf Sci", "ref_id": "b24", "title": "Learning discrete adaptive receptive fields for graph convolutional networks", "year": "2023" }, { "authors": "Yao Ma; Suhang Wang; Charu C Aggarwal; Jiliang Tang", "journal": "", "ref_id": "b25", "title": "Graph convolutional networks with eigenpooling", "year": "2019" }, { "authors": "Zheng Ma; Junyu Xuan; Yu Guang Wang; Ming Li; Pietro Liò", "journal": "", "ref_id": "b26", "title": "Path integral based convolution and pooling for graph neural networks", "year": "2020" }, { "authors": "Christopher Morris; Nils M Kriege; Franka Bause; Kristian Kersting; Petra Mutzel; Marion Neumann", "journal": "", "ref_id": "b27", "title": "Tudataset: A collection of benchmark datasets for learning with graphs", "year": "2020" }, { "authors": "Nicolò Navarin; Dinh Van Tran; Alessandro Sperduti", "journal": "", "ref_id": "b28", "title": "Universal readout for graph convolutional neural networks", "year": "2019" }, { "authors": "Yunsheng Pang; Yunxiang Zhao; Dongsheng Li", "journal": "", "ref_id": "b29", "title": "Graph pooling via coarsened graph infomax", "year": "2021" }, { "authors": "Ekagra Ranjan; Soumya Sanyal; Partha Talukdar", "journal": "", "ref_id": "b30", "title": "Asap: Adaptive structure aware pooling for learning hierarchical graph representations", "year": "2020" }, { "authors": "Yu Rong; Yatao Bian; Tingyang Xu; Weiyang Xie; Wei Ying; Wenbing Huang; Junzhou Huang", "journal": "", "ref_id": "b31", "title": "Self-supervised graph transformer on large-scale molecular data", "year": "2020" }, { "authors": "Petar Veličković; Guillem Cucurull; Arantxa Casanova; Adriana Romero; Pietro Liò; Yoshua Bengio", "journal": "", "ref_id": "b32", "title": "Graph attention networks", "year": "2018" }, { "authors": "Oriol Vinyals; Samy Bengio; Manjunath Kudlur", "journal": "", "ref_id": "b33", "title": "Order matters: Sequence to sequence for sets", "year": "2016" }, { "authors": "Yu Guang; Wang ; Ming Li; Zheng Ma; Guido Montufar; Xiaosheng Zhuang; Yanan Fan", "journal": "", "ref_id": "b34", "title": "Haar graph pooling", "year": "2020" }, { "authors": "Zhengyang Wang; Shuiwang Ji", "journal": "IEEE Trans Pattern Anal Mach Intell", "ref_id": "b35", "title": "Second-order pooling for graph neural networks", "year": "2020" }, { "authors": "Boris Weisfeiler; Andrei Leman", 
"journal": "NTI, Series", "ref_id": "b36", "title": "The reduction of a graph to canonical form and the algebra which appears therein", "year": "1968" }, { "authors": "Jun Wu; Jingrui He; Jiejun Xu", "journal": "", "ref_id": "b37", "title": "Demo-net: Degree-specific graph neural networks for node and graph classification", "year": "2019" }, { "authors": "Junran Wu; Xueyuan Chen; Ke Xu; Shangzhe Li", "journal": "", "ref_id": "b38", "title": "Structural entropy guided graph hierarchical pooling", "year": "2022" }, { "authors": "Keyulu Xu; Weihua Hu; Jure Leskovec; Stefanie Jegelka", "journal": "", "ref_id": "b39", "title": "How powerful are graph neural networks", "year": "2019" }, { "authors": "Rongji Ye; Lixin Cui; Luca Rossi; Yue Wang; Zhuo Xu; Lu Bai; Edwin R Hancock", "journal": "", "ref_id": "b40", "title": "C2n-abdp: Cluster-to-node attentionbased differentiable pooling", "year": "2023" }, { "authors": "Zhitao Ying; Jiaxuan You; Christopher Morris; Xiang Ren; Will Hamilton; Jure Leskovec", "journal": "", "ref_id": "b41", "title": "Hierarchical graph representation learning with differentiable pooling", "year": "2018" }, { "authors": "Hao Yuan; Shuiwang Ji", "journal": "", "ref_id": "b42", "title": "Structpool: Structured graph pooling via conditional random fields", "year": "2020" }, { "authors": "Liang Zhang; Xudong Wang; Hongsheng Li; Guangming Zhu; Peiyi Shen; Ping Li; Xiaoyuan Lu; Syed Afaq; Ali Shah; Mohammed Bennamoun", "journal": "", "ref_id": "b43", "title": "Structure-feature based graph self-adaptive pooling", "year": "2020" }, { "authors": "Muhan Zhang; Zhicheng Cui; Marion Neumann; Yixin Chen", "journal": "", "ref_id": "b44", "title": "An end-to-end deep learning architecture for graph classification", "year": "2018" }, { "authors": "Zhen Zhang; Jiajun Bu; Martin Ester; Jianfeng Zhang; Zhao Li; Chengwei Yao; Dai Huifen; Zhi Yu; Can Wang", "journal": "IEEE Trans Knowl Data Eng", "ref_id": "b45", "title": "Hierarchical multi-view graph pooling with structure learning", "year": "2021" }, { "authors": "Jiong Zhu; Yujun Yan; Lingxiao Zhao; Mark Heimann; Leman Akoglu; Danai Koutra", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b46", "title": "Beyond homophily in graph neural networks: Current limitations and effective designs", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 275.52, 490.72, 250.03, 8.74 ], "formula_id": "formula_0", "formula_text": "f : G → Y,(1)" }, { "formula_coordinates": [ 3, 263.39, 599.31, 262.15, 10.81 ], "formula_id": "formula_1", "formula_text": "G ′ = POOL(G),(2)" }, { "formula_coordinates": [ 4, 244.77, 258.29, 280.77, 8.77 ], "formula_id": "formula_2", "formula_text": "Attention = p(q(X), X),(3)" }, { "formula_coordinates": [ 4, 208.72, 729.68, 316.83, 12.69 ], "formula_id": "formula_3", "formula_text": "h (l) v = f (l) h (l-1) v , h (l-1) u | u ∈ N (v) ,(4)" }, { "formula_coordinates": [ 5, 102.35, 373.8, 252.66, 11.87 ], "formula_id": "formula_4", "formula_text": "N (v) ⊆ V represents the neighborhood of node v, h(0)" }, { "formula_coordinates": [ 5, 204.55, 581.52, 320.99, 23.7 ], "formula_id": "formula_5", "formula_text": "h global = softmax q global • K √ d V = a • V ,(5)" }, { "formula_coordinates": [ 6, 91.04, 128.64, 434.51, 12.07 ], "formula_id": "formula_6", "formula_text": "idx (l) = TOP k (S (l) ); X (l+1) = X (l) (idx (l) , :) ⊙ S (l) (idx (l) , :); A (l+1) = A (l) (idx (l) , idx (l) ),(6)" }, { "formula_coordinates": [ 6, 186.7, 235.09, 94.67, 11.87 ], "formula_id": "formula_7", "formula_text": "y = softmax W (h(1)" }, { "formula_coordinates": [ 6, 311.32, 235.09, 214.22, 14.3 ], "formula_id": "formula_8", "formula_text": "(2) global + • • • + h (L)) global ) ,(7)" }, { "formula_coordinates": [ 6, 107.82, 261.96, 8.79, 6.12 ], "formula_id": "formula_9", "formula_text": "(l)" }, { "formula_coordinates": [ 6, 232.97, 307.36, 292.57, 26.8 ], "formula_id": "formula_10", "formula_text": "L sup = - 1 |D| G∈D y ⊤ G log ( y G ) ,(8)" }, { "formula_coordinates": [ 6, 230.55, 500.45, 294.99, 26.8 ], "formula_id": "formula_11", "formula_text": "L unif = 1 |D| G∈D KL (y unif , y G ) ,(9)" }, { "formula_coordinates": [ 6, 245.04, 570.35, 280.5, 9.65 ], "formula_id": "formula_12", "formula_text": "L total = L sup + λ * L unif ,(10)" }, { "formula_coordinates": [ 7, 352.11, 332.1, 173.43, 9.68 ], "formula_id": "formula_13", "formula_text": "Q = XW Q , K = XW K , V = XW V ," }, { "formula_coordinates": [ 7, 207.83, 378.31, 317.71, 25.24 ], "formula_id": "formula_14", "formula_text": "Attention(Q, K, V ) = softmax( QK T √ d )V .(11)" }, { "formula_coordinates": [ 7, 302.54, 474.53, 86.65, 12.2 ], "formula_id": "formula_15", "formula_text": "Q 1 K T 1 and Q 2 K T 2 ." } ]
2023-11-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b2", "b3", "b7" ], "table_ref": [], "text": "When mathematicians read mathematical texts, the first thing they do is try to uncover the important concepts-what is the work about? We can think of mathematics as an enormous web of definitions and concepts together with theorems that relate them. While it is hard for humans to learn how to produce proofs of theorems, it should be much easier for anyone to find all the definitions required for the theorems and to organize them in structured ways. Many concepts and theorems are written down where almost anyone can find them, but some of it \"exists nearly as folklore\" [Har20]. That is, the math has been done, but it isn't organized in a way that is accessible to everyone outside the field. Ultimately, we hope to aid the organization of the math that is in this state by starting with the basics: an undergraduate curriculum.\nMuch of math accessible to a typical undergraduate is very well-understood and even well-organized in textbooks and online. However, all is not perfect. When asked to write down the definition of a \"group,\" different mathematicians may write down different things. For example, the authors of the Wikipedia article for \"group\" 1 ease the reader into abstraction, beginning with an explanation of the group structure of the integers and a quote about the mysterious nature of mathematical definitions before writing down the group axioms in careful detail. At a higher level, the authors of the nLab article for \"group\" 2 jump right into \"monoid with inverses\" and offhandedly reference some slightly nontrivial properties in the same sentence. A category theorist probably doesn't need a careful treatment of the group axioms, but someone learning group theory who happened to come across the nLab might be confused by this definition, while the Wikipedia article may have been more accessible. By putting them together, we can ensure common understanding across all levels of mathematics.\nThis principle extends to learning formal mathematics, which involves getting a computer to \"do\" mathematics and verify all of the steps in a proof. No matter how abstract the nLab or Wikipedia entries may get, they were written for humans to read and understand. On the other hand, the definition of \"group\" in Lean, a formal theorem prover, 3 is written for an audience of computers. Yet if we want people to learn to do formal mathematics (and we do, as it shows great promise for modern mathematics [Har21]), they will have to learn to interpret at least some things that were mostly meant for computers. Some people describe writing formal proofs as feeding definitions to a computer, and they note that the computers will often \"whine\" when they don't \"understand\" something they've been given [Rob23]. If we can present formal mathematical concepts alongside the more familiar natural-language concepts, the communication between mathematician and computer will be greatly improved. 
Moreover, highlighting concepts that often appear in undergraduate curricula could help students taking and professors designing \"bilingual\" math courses that teach both natural and formal proof-writing simultaneously.\nThis document serves as a written companion to a presentation given at the EuroProofNet joint meeting between the Workshop on Natural Formal Mathematics and the Workshop on Libraries of Formal Proofs and Natural Mathematical Language in Cambridge on September 7, 2023.4 " }, { "figure_ref": [], "heading": "Math concepts and Wikidata", "publication_ref": [ "b8", "b4" ], "table_ref": [], "text": "Wikidata [VK14] is a knowledge graph that contains the structured data behind Wikipedia. A knowledge graph represents a network of real-world entities-that is, objects, events, situations, or concepts-and the relationships between them. This information is usually stored in a graph database and visualized as a graph structure, hence the term knowledge \"graph.\"\nWikipedia has a large number of articles on mathematical concepts, described in its \"Math Portal.\"5 We want to use WikiData as a knowledge repository for the math concepts we collect. Wikidata assigns a unique identification number of the form Qx...x (where x is a digit) to each concept it describes. Some of the concepts are connected with descriptive links about their relationships to each other. For example, according to Wikidata a \"book\" (Q571) is an \"instance of\" \"written media.\" These links are not present between every pair of items that \"should\" be linked because creating these links is a labor-intensive process.\nWe attempt to map each term in our corpora to its Wikidata identifier (ID), and base our organization on common mappings. That is, we present the terms that get mapped to the same Wikidata ID as the same mathematical concept. Some of the mappings were done using wikimapper,6 a Python package written by Jan-Kristoph Klie, a researcher at the Ubiquitous Knowledge Processing Lab of the Technical University of Darmstadt. Some of the mappings were done manually.\nThe library wikimapper takes in the name of an \"item\" and produces a Wikidata ID whose page has a title matching that name. This often produces undesirable mappings due to the overloading of words in English and especially in mathematical English. For example, the words \"group,\" \"ring,\" and \"field\" represent fundamentally important concepts in mathematics, but they also refer to everyday objects and can are often even used as verbs! When \"group\" is put through wikimapper, the output is Q654302 instead of the hoped-for Q83478. The former is a \"disambiguation page,\" or a central hub page listing different things that go by the same name or similar names.\nWe were able to almost completely resolve the disambiguation issue by appending parenthetical subject names to the ends of the terms. The parentheticals we used were \"mathematics,\" \"linear algebra,\" \"algebraic geometry,\" \"calculus,\" \"category theory,\" \"commutative algebra,\" \"field theory,\" \"game theory,\" \"topology,\" \"differential geometry,\" \"graph theory,\" \"invariant theory,\" \"group theory,\" \"module theory,\" \"order theory,\" \"probability,\" \"statistics,\" \"ring theory,\" \"representation theory,\" \"set theory,\" \"string theory, \"symplectic geometry,\" and \"tensor theory.\" We chose these fields as parentheticals from the Wikipedia page listing the glossaries of mathematics. 
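As a concrete illustration of the mapping step just described, the sketch below tries subject-qualified Wikipedia page titles before the bare term, so that "group" resolves to the page "Group (mathematics)" rather than a disambiguation page. It is a hedged example rather than the project's actual code: the index file name and the abbreviated subject list are assumptions.

```python
from wikimapper import WikiMapper

# The SQLite index is built offline with wikimapper's download/create commands;
# the file name below is an assumption.
mapper = WikiMapper("index_enwiki-latest.db")

def map_term(term, subjects=("mathematics", "group theory", "topology")):
    """Try subject-qualified page titles first, then fall back to the bare term."""
    candidates = [f"{term} ({subject})" for subject in subjects] + [term]
    for title in candidates:
        page = (title[:1].upper() + title[1:]).replace(" ", "_")
        qid = mapper.title_to_id(page)
        if qid is not None:
            return title, qid
    return term, None

print(map_term("group"))  # e.g. ('group (mathematics)', 'Q83478') if the index resolves it
```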
7 The name of the Wikipedia page describing mathematical groups is technically \"Group (mathematics).\" We were able to leverage this fact and the way that wikimapper can take a Wikipedia page name as input to drastically reduce the number of disambiguation pages returned. For the annotation of low resource domains, see [KEdCG20], wikimapper is not perfect but helps considerably with the mapping task." }, { "figure_ref": [], "heading": "Math Resources", "publication_ref": [], "table_ref": [], "text": "There are plenty of good sources of mathematics online. One of the issues beginners face is how to choose between these sources. We describe some of the sources we want to make available through MathGloss and we hope to be able to make other sources available in the near future. Currently, MathGloss consists of terms collected from several resources for mathematical knowledge mapped to Wikidata, either manually or using wikimapper." }, { "figure_ref": [], "heading": "Chicago", "publication_ref": [], "table_ref": [], "text": "The Chicago corpus8 consists of approximately 700 terms related to courses in mathematics taken by the first author at the University of Chicago. With respect to MathGloss, it represents a \"gold standard\" of definitions of mathematical concepts that are well-known enough to appear in an undergraduate mathematics curriculum. That is, each entry in the corpus is annotated with its status as a definition. This corpus is not exhaustive, rather it reflects the first author's interests and the topics covered within these interests by individual professors. Each concept in the corpus (e.g., \"group\") has its own Markdown file containing a definition of the term and links to the Markdown files corresponding to other terms. The links under the Chicago column on the MathGloss website lead to the content of these Markdown files, which contain links to other definitions." }, { "figure_ref": [ "fig_0" ], "heading": "French undergraduate curriculum in Lean 4", "publication_ref": [], "table_ref": [], "text": "This corpus consists of terms (translated from French) that are listed by the Ministére de l' Éducation Nationale et de la Jeunesse as concepts undergraduate mathematics students are expected to know by the end of their degree. The Lean Community has added links from these concepts to their representation in Lean 4,9 and the links on the MathGloss webpage lead to those Lean entries. Some terms do not have Lean counterparts yet, or the link to its counterpart is not included in the corpus. The mappings from terms to WikiData were done using wikimapper. There are 543 terms in total and 369 of them have Wikidata counterparts. Figure 1 below shows such a mapping for the term \"0-1 Law\". " }, { "figure_ref": [], "heading": "Multilingual Mathematics (MuLiMa)", "publication_ref": [], "table_ref": [], "text": "The translation of mathematical terms between (natural) languages should be much easier than it actually is, given that terms in math are supposed to be unambiguous in their definitions. However, mathematicians are really at liberty to choose any name for any concept. Often, this means words that refer to the same mathematical object in two different languages will not \"translate\" to each other. 
For example, the word in French for what we in English call a \"field\" is \"corps,\" which literally means \"body.\" Moreover, much of mathematics that was first written about in English has no real translation into other languages-mathematicians will sometimes just use the English term. Collecting translations of mathematical concepts is therefore an important task. Tim Hosgood, a researcher at the Topos Institute, is working on this problem. He created a cross-language dictionary10 for math with a similar structure to that of MathGloss. It has 305 terms at the moment, which were all manually mapped to Wikidata." }, { "figure_ref": [], "heading": "nLab", "publication_ref": [], "table_ref": [], "text": "The nLab11 is a wiki for higher mathematics, specifically category theory. It is not a resource intended for undergraduates, but we have included it here filtered along those terms which also appear in the other three corpora.\nThe terms are the titles of the pages hosted on the nLab, but some of these are pages about people or books. The filtration by other corpora should ensure that only mathematical concepts make it into the final table. There were more than 18,000 page titles at the time of writing, and we found that 5377 of these had Wikidata items. Fewer than 5377 terms were included in the final table because of the filtering via the other resources." }, { "figure_ref": [], "heading": "More online math", "publication_ref": [ "b0" ], "table_ref": [], "text": "So far we have only included four corpora, which are not comprehensive of all undergraduate mathematics. One place we hope to look next is open-source textbooks for different topics and use natural language processing (NLP) to find important terms and concepts there. Inspired by [CdPFS22], we were successful in extracting terms from the journal Theory and Applications of Categories (TAC)12 using NLP before we decided to focus on undergraduate rather than research mathematics." }, { "figure_ref": [], "heading": "Using MathGloss", "publication_ref": [], "table_ref": [], "text": "The table of terms in MathGloss can be found at https://mathgloss.github.io/MathGloss/database. Figure 2 shows the first few rows of the table of mappings. As an example of how to use MathGloss, let's say we want to find out more about 'abelian groups'. If we haven't already seen it in row three, a simple page search (Ctrl+F) can help find it in the table." }, { "figure_ref": [], "heading": "Figure 2: The main table of MathGloss", "publication_ref": [], "table_ref": [], "text": "Clicking on the link labeled \"Q181296\" takes us to the Wikidata entry for abelian group, which contains much useful information about abelian groups and their relationships to other kinds of algebraic objects. In particular, it has a link to the Wikipedia page for \"Abelian group.\" Clicking on the link labeled \"abelian group\" under the header \"Chicago\" takes us to a definition of abelian group, hosted on the MathGloss website. This page contains links to other relevant definitions, also on MathGloss. There is currently no Lean 4 link for abelian group in the provided list of undergraduate concepts, but if there were, clicking on it would take us to the entry in the Lean 4 documentation defining abelian groups. Since \"abelian group\" appears under the MuLiMa heading, going to the MuLiMa website will allow us to see its translation into several languages. 
Finally, if we click on the link under the nLab heading, we will see the nLab page for \"abelian group,\" which takes on a distinctly categorical point of view.\nThe lack of a link to an instance of \"abelian group\" in Lean 4 highlights the need to collect information from multiple resources. Certainly one can talk about abelian groups in Lean, but the list of undergraduate concepts we used just happens not to link there." }, { "figure_ref": [], "heading": "Tools for NLP", "publication_ref": [], "table_ref": [], "text": "At another stage in the project, we collected terms from the abstracts of articles published in Theory and Applications of Categories (TAC) using natural language processing (TAC). We did not include it in this iteration of MathGloss because as a corpus of research mathematics, it does not fit our goal of organizing undergraduate math. However, we hope to apply this technique to other corpora of undergraduate mathematics in the future. We describe the technique below.\nTo extract terms from TAC, we used the Python library spaCy13 to perform grammatical analysis on the text of the 755 abstracts from the articles in TAC.14 SpaCy performs syntactic parsing of sentences using the Universal Dependencies (UD)[NdMG + 16] framework. UD is an open-source project that works towards standardizing grammatical annotation to make linguistics research more consistent. The output of this analysis is in a UD-developed format called CoNLL-U,15 which we then inspect using a script from UD.\nA CoNLL-U file is a plaintext file that displays sentence analysis in a particular structure. It allows comment lines, which are indicated by \"#\", and in the CoNLL-U files we created, the comments include the text of the sentence, the number of its entry in the corpus, and the length (in \"tokens\") of the sentence. Figure 3 shows an example of a CoNLL-U file containing a sentence analyzed in this way after application of the \"detextor\" pipeline component described below. Each word in the sentence is written on its own line following the text of the sentence along with information about the word, for example its part of speech.\nFigure 3: Analysis of the definition of \"abelian group\" in CoNLL-U format At its simplest, spaCy takes in a section of text (called a Doc, short for \"document\"), and then uses its models to split the text into sentences, split sentences into tokens, and then assigns parts of speech and assigns Universal Dependency relations to each token. A token can be thought of as a generalization of a word: it can be a punctuation mark, or especially in our case, a piece of mathematical notation. The process of splitting a Doc into sentences is called \"sentencization,\" and the process of splitting a sentence into tokens is called \"tokenization.\" We used the smallest model provided by spaCy, called \"en core web sm,\" because of our limitations in computing power. Using larger models made no difference in output, but took more time and resources to generate that output.\nIt is possible to run spaCy on L A T E X code, but without making specific modifications for mathematical notation, the results are very poor due to incorrect sentencization (including both sentences that are too long and sentences that are too short) and the over-tokenization of the sentence. L A T E X code consists of many punctuation marks and commands that often resemble English words, but none of these things should be parsed as normal English. 
Without modifications to the default spaCy pipeline, each piece of the code that represents a mathematical expression is fragmented and treated separately. Our goal is to extract definitions and concepts from texts written in mathematical English, and L A T E X is an integral part of that.\nFirst-year mathematics students are taught that mathematical expressions should always be situated within a grammatically correct sentence. They should never stand on their own, and the statement \"x = y\" should be read \"x equals y,\" and can therefore be considered an independent clause. Within abstracts and definitions, one does not usually make declarative statements like \"x = y,\" so most of the L A T E X we encounter in our corpora should be thought of as nouns or as names of instances of mathematical objects. Because code itself is not \"natural language,\" one way to analyze it is to take each piece of L A T E X code to represent a single \"word.\"\nThis makes up our main approach to solving the tokenization problem. It is as simple as telling spaCy to treat everything in between two dollar signs as one token. The implementation of this pipeline component is done using the \"retokenizer\" context manager and requires that dollar signs (and for best results, hyphens) be padded with spaces. This new pipeline component, called \"detextor,\" is implemented after the \"tagger\" component, which assigns parts of speech to tokens. The detextor pipeline component represents only a partial solution to the problem of annotating L A T E X code as it struggles to capture the actual information conveyed by formulas. However, it greatly improves the accuracy of sentencization, which is crucial for annotating regular English words with their parts of speech.\nThe part-of-speech annotation forms the foundation of our term extraction. We are able to pick out what we expect to be mathematical terms or concepts with a simple heuristic. From the CoNLL-U files, we are able to compile lists of \"lemmas,\" or basic forms of words (e.g. \"be\" for the word \"is,\" or \"group\" for the word \"groups\") according to their part of speech. We suppose that the most frequently occurring nouns, adjective-noun phrases, and \"compounds\" are math terms. \"Compound\" is a UD annotation for certain multi-word phrases that represent a single thing16 . Adjective-noun phrases consist of any number of adjectives followed by a noun or by a compound. The upper ends of the frequency tables for these types of terms are almost exclusively populated by math concepts, but we do not yet know if the lists exclude some concepts.\nThis process is what we hope to use on other bodies of mathematical text when they do not themselves provide ready-made lists of terms. Unfortunately, it does not produce any kind of link to an explanation or definition of a given term. In the future we hope to be able to pinpoint the definitions of these extracted terms and give pointers to their locations in text. In other words, we want to perform entity linking on math text." }, { "figure_ref": [], "heading": "Future Work", "publication_ref": [ "b9", "b1" ], "table_ref": [], "text": "First, we hope to develop a method to more easily collect and map terms from different math resources. In particular, we want to include more theorem provers in our mappings. Presently we are looking into adding links to Agda, Coq, and Isabelle. 
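Returning to the "detextor" component described above, the sketch below shows one way to merge everything between two space-padded dollar signs into a single token after tagging. This is an illustrative reconstruction, not the project's code: the attribute choices, the example sentence, and the exclusion of the parser and NER components (done here only to keep the demo minimal) are our assumptions.

```python
import spacy
from spacy.language import Language

@Language.component("detextor")
def detextor(doc):
    """Merge every ' $ ... $ ' span into a single math token, treated as a noun."""
    spans, start = [], None
    for i, tok in enumerate(doc):
        if tok.text == "$":
            if start is None:
                start = i
            else:
                spans.append(doc[start:i + 1])
                start = None
    with doc.retokenize() as retokenizer:
        for span in spans:
            # Formulas in abstracts and definitions mostly behave like nouns.
            retokenizer.merge(span, attrs={"TAG": "NN", "POS": "NOUN"})
    return doc

nlp = spacy.load("en_core_web_sm", exclude=["parser", "ner"])
nlp.add_pipe("detextor", after="tagger")  # the paper places it right after the tagger
doc = nlp("A homomorphism maps $ G $ to $ H $ .")
print([t.text for t in doc])  # ['A', 'homomorphism', 'maps', '$ G $', 'to', '$ H $', '.']
```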
At the Dagstuhl seminar on automated mathematics,17 there was discussion on how best to find and compile instances of undergraduate math concepts from these provers. Hopefully they will follow the example of Lean and produce such resources.\nOn the natural language side of collecting more terms from more resources, we want to use machine learning techniques to extract definitions of the terms we already know how to find. Some work has been done in this direction, even specifically tailored to mathematical text [VLSR20], but results do not look impressive [CdPS23].\nAnother future project is to come up with a way to verify that the mappings to Wikidata are indeed correct. The addition of subject parentheticals definitely reduces the number of disambiguation pages output by Wikimapper, but some still slip through the cracks. We could potentially do this by verifying that the terms we map to a Wikidata item have the same relations to other Wikidata items-this relies on the linking that already exists within corpora like Chicago Notes.\nCurrently, the process for performing mappings is labor-intensive even though it is automated to some degree. We hope to further automate the process to take advantage of the fluid nature of Wikidata, which is always adding more terms. Relatedly, there are some basic improvements that need to be made to the GitHub Pages website. As we add new resources, we want users to be able to select which ones they see on the webpage at any time and for there to be a better search function than pressing Ctrl+F. Moreover, the website needs some support for L A T E X in order to properly display the content in Chicago Notes that is also hosted there." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "MathGloss aims to help people from different backgrounds make sense of the diverse resources for mathematics available online through organization by individual concept. Should one encyclopedia's article for a particular construction be inscrutable, we would like it to be easy to find another article that is more closely aligned with the reader's background and therefore easier to understand. Moreover, MathGloss represents a step in the direction of bridging the gap between natural math as done by humans and formal math. By creating a knowledge graph of undergraduate mathematics, we hope to empower students, mathematicians, and those who use mathematics in their work to both better navigate the intricate web of definitions and theorems and to embrace the use of formal systems." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "://ncatlab.org/nlab/show/group" } ]
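Before moving on, here is a small, simplified sketch of the frequency heuristic from the Tools for NLP section: counting noun lemmas, "compound" phrases, and adjective+noun phrases as candidate terms. It assumes Docs produced by a spaCy pipeline that includes the dependency parser; the function name and the single-adjective simplification are ours.

```python
from collections import Counter

def candidate_terms(docs):
    """Count noun lemmas, compound phrases, and adjective+noun phrases."""
    counts = Counter()
    for doc in docs:
        for tok in doc:
            if tok.pos_ == "NOUN":
                counts[tok.lemma_] += 1
            if tok.dep_ == "compound" and tok.head.pos_ == "NOUN":
                counts[f"{tok.lemma_} {tok.head.lemma_}"] += 1
            if tok.pos_ == "ADJ" and tok.head.pos_ == "NOUN":
                counts[f"{tok.lemma_} {tok.head.lemma_}"] += 1
    return counts

# counts.most_common(50) gives the high-frequency candidates; per the paper, the
# upper end of these frequency tables is almost exclusively mathematical concepts.
```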
MathGloss is a project to create a knowledge graph (KG) for undergraduate mathematics from text, automatically, using modern natural language processing (NLP) tools and resources already available on the web. MathGloss is a linked database of undergraduate concepts in mathematics. So far, it combines five resources: (i) Wikidata, a collaboratively edited, multilingual knowledge graph hosted by the Wikimedia Foundation, (ii) terms covered in mathematics courses at the University of Chicago, (iii) the syllabus of the French undergraduate mathematics curriculum which includes hyperlinks to the automated theorem prover Lean 4, (iv) MuLiMa, a multilingual dictionary of mathematics curated by mathematicians, and (v) the nLab, a wiki for category theory also curated by mathematicians. MathGloss's goal is to bring together resources for learning mathematics and to allow every mathematician to tailor their learning to their own preferences. Moreover, by organizing different resources for learning undergraduate mathematics alongside those for learning formal mathematics, we hope to make it easier for mathematicians and formal tools (theorem provers, computer algebra systems, etc) experts to "understand" each other and break down some of the barriers to formal math.
MathGloss: Building mathematical glossaries from text
[ { "figure_caption": "Figure 1 :1Figure 1: Clicking through to Lean 4", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" } ]
Lucy Horowitz; Valeria De Paiva
[ { "authors": "Jacob Collard; Valeria De Paiva; Brendan Fong; Eswaran Subrahmanian", "journal": "", "ref_id": "b0", "title": "Extracting mathematical concepts from text", "year": "2022" }, { "authors": "Jacob Collard; Valeria De Paiva; Eswaran Subrahmanian", "journal": "", "ref_id": "b1", "title": "Parmesan: mathematical concept extraction for education", "year": "2023" }, { "authors": "Kevin Hartnett", "journal": "", "ref_id": "b2", "title": "Building the mathematical library of the future", "year": "2020-10" }, { "authors": "Kevin Hartnett", "journal": "", "ref_id": "b3", "title": "Proof assistant makes jump to big league math", "year": "2021-07" }, { "authors": "Jan-Christoph Klie; Richard Eckart De Castilho; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "From Zero to Hero: Human-In-The-Loop Entity Linking in Low Resource Domains", "year": "2020-07" }, { "authors": " Ndmg", "journal": "", "ref_id": "b5", "title": "", "year": "" }, { "authors": "Joakim Nivre; Marie-Catherine De Marneffe; Filip Ginter; Yoav Goldberg; Jan Hajič; Christopher D Manning; Ryan Mcdonald; Slav Petrov; Sampo Pyysalo; Natalia Silveira; Reut Tsarfaty; Daniel Zeman", "journal": "European Language Resources Association (ELRA", "ref_id": "b6", "title": "Universal Dependencies v1: A multilingual treebank collection", "year": "2016-05" }, { "authors": "Siobhan A I Roberts", "journal": "", "ref_id": "b7", "title": "is coming for mathematics, too", "year": "2023-07" }, { "authors": "Denny Vrandečić; Markus Krötzsch", "journal": "Commun. ACM", "ref_id": "b8", "title": "Wikidata: A free collaborative knowledgebase", "year": "2014-09" }, { "authors": "Natalia Vanetik; Marina Litvak; Sergey Shevchuk; Lior Reznik", "journal": "European Language Resources Association", "ref_id": "b9", "title": "Automated discovery of mathematical definitions in text", "year": "2020-05" } ]
[]
[ { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Pixel-wise or feature-based comparisons have always been the foundation of traditional template matching. It was usual practice to use methods like structural similarity indices and normalized cross-correlation. These techniques demonstrated susceptibility to noise, rotation, and size fluctuation, despite being effective in some situations. Their inability to adjust to different document layouts was a result of their reliance on pre-established norms and patterns. In order to overcome the shortcomings of conventional methodologies, feature-based approaches became more popular. But shifts in perspective and attitude brought difficulties. Template matching algorithms, which are divided into supervised and unsupervised learning approaches, underwent a dramatic change with the introduction of machine learning.\nUnder supervised learning, labelled data was used to learn template patterns through the use of algorithms such as decision trees and support vector machines. This improved flexibility to a variety of document structures. Unsupervised methods like topic modelling and clustering have become popular for classifying related documents without labelled data. Deep learning techniques, particularly Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), have become more popular in recent years. Managing layout differences in documents, scaling for big datasets, and adapting to real-world circumstances where documents may have dynamic structures are additional challenges. Other factors to take into account are the interpretability of deep learning models and the requirement for labelled training data. Applications for similar document template recognition can be found in a variety of fields, including law, finance, and healthcare. Effective template identification simplifies document management in legal environments, supporting legal research and tasks like contract analysis. The effectiveness of information extraction and document management procedures is increased thanks to these algorithms. Additionally, they are essential in identifying fake document templates, protecting a variety of sectors from large financial losses. The suggested methodology uses a comprehensive strategy that includes template extraction, template comparison, structural similarity and optical character recognition (OCR) fraud detection to meet the issues of fraudulent document identification. The process starts with sophisticated ROI (region-of-interest) techniques for extracting templates.\nThe software uses image processing methods like edge identification and contour analysis to identify key areas in medical documents that have patient information, provider information, and billing amounts. These areas have been identified and are intentionally divided as possible templates. Before extraction, documents go through a number of preprocessing steps to improve the accuracy of template identification. Gaussian blurring lowers noise, adaptive thresholding increases contrast, and morphological operations are used to smooth and fine-tune image structures. Together, these pre-processing techniques yield crisp, well-defined images that provide a solid basis for the other processing stages. Key point and descriptor-based advanced feature matching techniques are employed by the template comparison algorithm. 
Recognized in the template and sample photos are important details like corners and distinct sections.\nThe first step in the multi-step process of detecting fraud is calculating the Structural Similarity Index (SSIM) between the sample images and the template's grayscale representations. A numerical measure of structural information similarity is offered by the SSIM. In later stages of fraud detection, the SSIM value serves as a standard that helps establish the legitimacy of the reviewed medical record. Optical character recognition (OCR) is integrated into the methodology to extract text from medical records. OCR technology converts machine-readable text from images of written or printed text. Text localization is a step in the OCR process where template matching information identifies regions of interest that are most likely to have textual content. Text extraction extracts textual material, such as patient names, addresses, and other relevant information, by using OCR techniques.\nAfter being extracted, the textual data is compared to a reference dataset, which provides reliable and accurate data as a basis. The dataset includes accurate patient data, provider data, and hospital-sourced billing records. A methodical comparison is carried out to look for discrepancies or inconsistencies between the information that was retrieved and the reference dataset. Any discrepancies are reported as possible indicators of false claims. A confidence thresholding strategy is used to increase the accuracy of fraud detection. A confidence score is assigned to each stage of the attribute comparison and OCR process. After template matching and OCR results are combined, the confidence ratings are compared to a preset threshold. A document is only considered possibly fraudulent if the cumulative confidence is higher than this level. This thresholding system strikes a balance in the approach by ensuring sensitivity to potential fraud while minimizing false positives. The methodology integrates flexibility by means of modifiable parameters. Depending on the characteristics of the input documents, parameters like the matching criteria and feature extraction parameters are dynamically altered. This adaptability takes into account the inherent variability in document structures and guarantees consistent performance across a broad range of medical document layouts.\nThe recognition of document templates has become an essential component in many fields, tackling problems related to data mining, document analysis, and information retrieval. The possibility of identical document templates with minute modifications, which present opportunities for fraudulent claims that can result in significant financial losses and erode trust in systems, highlights the importance of this task. In addition to helping to increase the effectiveness of information extraction and document management procedures, these algorithms are essential in identifying fraudulent document templates and averting significant financial losses across a range of industries. The detailed methodology and implementation that follows provides an organized way to deal with the difficulties associated with fraudulent document detection. The introductory section establishes the foundation for a thorough comprehension of the dynamic field of template recognition and its pragmatic uses." }, { "figure_ref": [], "heading": "II. 
LITERATURE SURVEY:", "publication_ref": [], "table_ref": [], "text": "Document template recognition is an important task with applications in various domains, such as information retrieval, data mining, and document analysis. There is a chance of identical document templates with minute modifications. These kinds of fraudulent claims can lead to significant financial losses and erode the trust in their systems. Recognizing similar document templates involves identifying patterns and structures within documents to categorise them based on their underlying templates. This literature survey explores the advancements in similar document template recognition, focusing on key methodologies, challenges, and applications." }, { "figure_ref": [], "heading": "Traditional Template Matching Techniques:", "publication_ref": [], "table_ref": [], "text": "Traditional template matching often involved methods based on pixel-wise or feature-based comparisons. Techniques such as normalised cross-correlation and structural similarity indices were commonly used. These methods are effective in certain scenarios but they are sensitive to size variation, rotation and noise. These approaches focused on predefined rules and patterns, making them limited in adaptability to diverse document layouts." }, { "figure_ref": [], "heading": "Feature-based Approaches:", "publication_ref": [], "table_ref": [], "text": "Feature-based methods have gained popularity for their ability to capture distinctive elements within documents. Keypoint detectors, such as SIFT (Scale-Invariant Feature Transform) and SURF (Speeded-Up Robust Features), have been used to extract discriminative features for template matching. These techniques enable the extraction of meaningful features that contribute to a document's template categorization. However, these methods may struggle with changes in orientation and perspective." }, { "figure_ref": [], "heading": "Machine Learning-based Matching:", "publication_ref": [], "table_ref": [], "text": "With the increased usage of machine learning, template matching algorithms have been divided into supervised and unsupervised learning techniques. The rise of machine learning techniques has marked a significant advancement in document template recognition. Supervised learning algorithms/techniques, including Support Vector Machines (SVM) and decision trees, have been applied to learn template patterns from labelled data, enhancing the adaptability of matching algorithms to diverse document structures. Unsupervised techniques such as clustering and topic modelling have also gained popularity for grouping similar documents without labelled information." }, { "figure_ref": [], "heading": "Deep Learning Approaches:", "publication_ref": [], "table_ref": [], "text": "Recent years have seen the rise of deep learning approaches, particularly Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). CNNs excel in extracting special features from images, making them suitable for tasks involving document layout analysis. RNNs, on the other hand, are adept at modelling sequential dependencies, which is crucial for recognizing templates in text-heavy documents, leading to improved template recognition accuracy." 
}, { "figure_ref": [], "heading": "Shape Matching and Graph-based Representations:", "publication_ref": [ "b2", "b3", "b4", "b5", "b6", "b7" ], "table_ref": [], "text": "Recent advancements in shape matching and graph-based representations have contributed to more powerful and advanced template matching. These techniques model the document structures as graphs and leverage graph matching algorithms have shown its advancement in capturing complex relationships between document elements, improving matching accuracy in scenarios where traditional methods may fall short. Despite the advancements, there are many challenges that persist in this field. Variability in document layouts, handling dynamic templates, and managing noise in data are ongoing concerns. Creating comprehensive labelled datasets for training remains a challenge, impacting the performance of supervised learning models. Challenges in similar document template matching include handling variations in document layouts, addressing scalability issues for large datasets, and adapting to real-world scenarios where documents may exhibit dynamic structures. The interpretability of deep learning models and the need for labelled training data are also noteworthy considerations.\nSimilar document template recognition finds applications across various industries, including legal, finance, and healthcare. In legal settings, for instance, efficient template recognition streamlines document management, aiding in tasks such as contract analysis and legal research. These algorithms contribute to improved efficiency in document management and information extraction processes. These can detect the fraud document templates and can protect major financial losses in various industries.\nYang et al. proposed a hybrid matching method for document image template recognition, combining local features and global features to improve matching accuracy [3]. The proposed algorithm demonstrates superior performance compared to traditional matching methods. However, they may be computationally expensive due to the combination of multiple feature extraction methods.\nLiu et al. introduced a robust document template matching method based on SIFT (Scale-Invariant Feature Transform) features, which are invariant to scale and rotation [4]. The matching method utilises SIFT features, which are invariant to scale and rotation, for robust matching. This achieves high matching accuracy and robustness against noise and distortions. However, Scale-Invariant Feature Transform (SIFT) feature extraction can be computationally expensive for large document images and cannot be used for large scaling.\nDeng et al. proposed a template matching approach for document image classification, utilising a combination of correlation matching and structural matching [5]. The proposed method demonstrates promising performance in classifying various types of document images. It uses a combination of correlation matching and structural matching for improved classification, such that different types of document images can be classified. However, it may struggle with complex document layouts with significant variations in structure.\nWu et al. proposed a context-aware document template matching method that incorporates contextual information into the matching process to improve accuracy [6]. It has shown superior performance which can handle variations in document layouts and enhances the matching performance. 
However, extracting contextual information can be challenging for documents with complex layouts or noisy content.\nLu et al. introduce a hierarchical document template matching approach based on graph matching [7]. The proposed method utilises graph structures to represent document templates and their relationships, enabling efficient matching and recognition. However, graph matching can be computationally expensive for large and complex graph structures.\nLi et al. proposed a deep template matching method for document image classification, employing deep learning techniques to extract and match features from document images [8]. It achieves high classification accuracy and outperforms traditional template matching methods. However, it requires training deep learning models, which can be time-consuming and computationally expensive." }, { "figure_ref": [], "heading": "III.METHODOLOGY AND IMPLEMENTATION", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Template Extraction", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Mechanism for Template Extraction", "publication_ref": [], "table_ref": [], "text": "Advanced region-of-interest (ROI) approaches are used in the implementation of the template extraction procedure. The programme recognises important areas inside medical documents by combining a number of image processing techniques, such as contour analysis and edge identification. These sections, which contain vital data including patient specifics, provider details, and bill amounts, are purposefully separated as possible templates." }, { "figure_ref": [], "heading": "Pre-processing Steps", "publication_ref": [], "table_ref": [], "text": "To guarantee the best possible template identification accuracy, documents go through a rigorous sequence of pre-processing stages before they are extracted as templates. These procedures include morphological operations to smooth and fine-tune picture structures, Gaussian blurring for noise reduction, and adaptive thresholding to boost contrast. Together, these pre-processing procedures help produce clear, well-defined pictures that serve as a strong basis for further processing phases." }, { "figure_ref": [], "heading": "Template Comparison", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Algorithm for Template Comparison", "publication_ref": [], "table_ref": [], "text": "Advanced feature matching approaches based on key points and descriptors are used by the template comparison algorithm. In both the template and sample images, key points such as corners and distinguishing sections are recognised. Descriptors, which describe local characteristics around key points, are then compared using advanced algorithms like the Scale-Invariant Feature Transform (SIFT) or Speeded-Up Robust Features (SURF). This method considerably improves the robustness of template matching, especially when coping with rotation, scaling, and lighting fluctuations." }, { "figure_ref": [], "heading": "Techniques for Accounting Variations", "publication_ref": [], "table_ref": [], "text": "The template comparison technique uses histogram-based analysis to adjust for variances in design components and content.
The method discovers similarities in colour patterns and textures by comparing histograms of colour distribution within the template and sample images. This strategy is especially useful when there are small changes in document design components." }, { "figure_ref": [], "heading": "Fraud Detection", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Structural Similarity Index (SSIM) Computation", "publication_ref": [], "table_ref": [], "text": "Fraud detection is a multi-step process that ensures reliable identification of potentially fraudulent claims. The Structural Similarity Index (SSIM) between the grayscale representations of the template and sample images is computed first. SSIM is a statistic that measures how similar two images are: it assesses image structure, brightness, and contrast, offering a comprehensive assessment of similarity. The SSIM value ranges from -1 to 1, with 1 denoting identical images. The SSIM formula is given below, emphasising its function in assessing the structural similarity between grayscale representations of the template and sample images. This index acts as a quantitative metric, allowing the system to locate prospective matches based on structural similarity. A greater SSIM implies a stronger likeness, whereas a lower SSIM may indicate conflicts or changes in the document's content.\nThe SSIM value is used as a criterion in the succeeding phases of fraud detection to determine whether a probable match has been found. Furthermore, the SSIM contributes to the overall trust evaluation by assisting in determining the validity of the examined medical document. Secondly, optical character recognition (OCR) is used to extract textual information, with a specific emphasis on consumer details. This step aims to scrutinize the contents of medical documents, focusing on patient information, provider details, and billing amounts." }, { "figure_ref": [], "heading": "SSIM Formula", "publication_ref": [], "table_ref": [], "text": "$SSIM(x, y) = \frac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$, where $\mu_x$ and $\mu_y$ are the means of the two image windows, $\sigma_x^2$ and $\sigma_y^2$ are their variances, $\sigma_{xy}$ is their covariance, and $c_1$, $c_2$ are small constants that stabilise the division." }, { "figure_ref": [], "heading": "OCR for Textual Information Extraction", "publication_ref": [], "table_ref": [], "text": "Optical character recognition (OCR) is a critical technology for extracting textual information from medical records. OCR technology turns written or printed text from pictures into machine-readable text. OCR is used in this approach to identify areas of interest (ROI) during the template extraction step.\nThe OCR process includes the following steps: a. Text Localization: The algorithm employs template matching information to pinpoint locations of interest within the sample picture. These areas are likely to include textual content.\nb. Text Extraction: In this approach, OCR techniques, such as those supplied by the 'easyocr' library, are applied to the indicated regions to extract textual material. This contains patient names, addresses, and other pertinent information. c. Confidence Scores: During OCR, each recognised text segment is awarded a confidence score that reflects the algorithm's confidence in the recognition's correctness. This score is determined by criteria such as the text's clarity and the algorithm's internal confidence measures."
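As a concrete illustration of the SSIM check described above, the following minimal sketch compares a grayscale template and sample with scikit-image and applies the 0.8 structural threshold used later in the results section; the file paths are placeholders, and the snippet is a hedged sketch rather than the authors' implementation.

```python
# Minimal sketch of the SSIM-based structural check (placeholder file paths).
import cv2
from skimage.metrics import structural_similarity as ssim

template = cv2.imread("best_matched_template.png", cv2.IMREAD_GRAYSCALE)
sample = cv2.imread("submitted_claim.png", cv2.IMREAD_GRAYSCALE)
sample = cv2.resize(sample, (template.shape[1], template.shape[0]))  # align sizes

score, _ = ssim(template, sample, full=True)   # score lies in [-1, 1]
if score < 0.8:                                # structural threshold from Sec. IV
    print(f"Potential fraud: SSIM = {score:.3f}")
else:
    print(f"Structure matches: SSIM = {score:.3f}")
```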
}, { "figure_ref": [], "heading": "Comparison with Reference Dataset", "publication_ref": [], "table_ref": [], "text": "After extracting the textual information, the programme compares the resulting characteristics to a reference dataset. This dataset provides as a foundation of authentic and correct data. A thorough collection of valid patient details, provider information, and billing records acquired from hospitals may be included in the reference dataset. The retrieved attributes, such as patient names, addresses, and billing amounts, are compared to the reference dataset in a methodical manner. Inconsistencies or deviations between the retrieved information and the reference dataset are reported as potential signs of fraudulent claims." }, { "figure_ref": [], "heading": "Confidence Thresholding", "publication_ref": [], "table_ref": [], "text": "A confidence thresholding approach is used to improve the reliability of fraud detection. A confidence score is applied to each phase of the OCR process and attribute comparison. The combined confidence ratings from template matching and OCR results are compared against a threshold. A document is identified as possibly fake only when the cumulative confidence reaches this threshold. This thresholding system maintains sensitivity to possible fraud while minimising false positives, ensuring a balanced approach.\nThe system improves its capacity to identify potentially fraudulent claims by assessing both the structural and textual components of medical papers by incorporating OCR technology and reference dataset comparison." }, { "figure_ref": [], "heading": "Flexibility", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Adaptive Parameters", "publication_ref": [], "table_ref": [], "text": "The technique includes adjustable settings to increase flexibility. These parameters, which include matching criteria and feature extraction parameters, are dynamically changed based on the input documents' properties. This versatility provides dependable performance across a wide range of medical document layouts." }, { "figure_ref": [], "heading": "IV. RESULTS WITH DISCUSSION", "publication_ref": [], "table_ref": [], "text": "The algorithm performs template matching to identify a document type based on predefined templates, checks for potential fraud by comparing the structural similarity using SSIM between the sample document and the best-matched template, and then uses OCR (Optical Character Recognition) to extract text from the sample document and check if certain attributes are present in the dataset given. The algorithm helps in finding the highest matching score of the sample with the template. If the highest matching score is above a threshold (0.6), it considers the document a potential match with a specific template.\nAfter identifying the best-matched template, the code calculates the Structural Similarity Index (SSI) between the template and the sample image using the SSIM function from the skimage. metrics module. The SSI measures the structural similarity between two images. If the SSI is below a certain threshold (0.8), it indicates a significant difference between the template and the sample image. If the SSI is below the threshold, the code prints \"Fraud,\" suggesting that the sample document significantly differs from the expected structure.\nIn Fig. 1, we can observe that as the score is below the threshold of 0.8, it is identified as Potential fraud. 
This could be due to minor structural changes or misalignment in the document provided, warning the institution that it could potentially be fraudulent. In the last part of this paper, genuine documents are detected based on structural similarity against the threshold values and on checking the extracted data against the dataset; if both checks pass, \"REAL DOCUMENT\" is printed, indicating that the document is genuine and is not subjected to any form of fraud.\nFig. 7 shows an example of a real document, with the corresponding database snippet shown in Fig. 6. This architecture overview summarizes the key components and steps of the script, emphasizing its fraud detection capabilities through a combination of template matching, image analysis, and OCR as shown in Fig. 9. " }, { "figure_ref": [], "heading": "CONCLUSION:", "publication_ref": [], "table_ref": [], "text": "In conclusion, the literature survey underscores the dynamic evolution of similar document template recognition, encompassing a spectrum of techniques from traditional methods to state-of-the-art deep learning approaches. The surveyed literature highlights the significance of this field in various industries, showcasing its role in streamlining document management in legal settings, aiding in financial data analysis, and contributing to fraud detection mechanisms. While advancements in shape matching, graph-based representations, and deep learning have propelled the accuracy of template matching, challenges such as handling variations in document layouts persist. The need for comprehensive labelled datasets, scalability for large datasets, and adaptability to real-world scenarios emerge as ongoing concerns. The references provided in the survey offer a comprehensive overview of the field, drawing from seminal works that have shaped the trajectory of similar document template recognition research." } ]
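To make the attribute-verification and confidence-thresholding steps described in the results section more concrete, the sketch below runs EasyOCR over a claim image and checks the extracted text against a reference record; the CSV columns, file names, and the 0.4 OCR-confidence cut-off are hypothetical placeholders, not the authors' exact configuration.

```python
# Illustrative sketch of the OCR attribute check (placeholder names and thresholds).
import easyocr
import pandas as pd

reader = easyocr.Reader(["en"], gpu=False)
results = reader.readtext("submitted_claim.png")          # [(bbox, text, confidence), ...]
detected = " ".join(text.lower() for _, text, conf in results if conf > 0.4)

reference = pd.read_csv("hospital_reference.csv")          # reference dataset of valid claims
record = reference.iloc[0]                                  # matched by claim ID in practice

required = ["patient_name", "provider", "bill_amount"]      # hypothetical column names
attributes_ok = all(str(record[col]).lower() in detected for col in required)

print("REAL DOCUMENT" if attributes_ok else "Error in data: Potential Fraud")
```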
This research presents a thorough approach to medical document verification that incorporates cutting-edge methods for fraud detection, template extraction, and comparison. The process starts with the extraction of the template using advanced regionof-interest (ROI) techniques that include edge identification and contour analysis. By using adaptive thresholding and morphological operations, preprocessing procedures guarantee template clarity. By using advanced feature matching with key points and descriptors, the template comparison algorithm improves robustness. The SSIM computation and OCR for textual information extraction are used in fraud detection. By quantifying structural similarity, the SSIM facilitates the identification of possible matches. Critical areas such as patient details, provider information, and billing amounts are the focus of OCR. Reliable fraud detection is ensured by confidence thresholding and comparing the extracted data with a reference dataset. Flexible parameters allow the system to adapt dynamically to different document layouts. This methodology addresses complexity in template extraction, comparison, fraud detection, and adaptability to different document structures, offering a strong approach to medical document verification.
[ { "figure_caption": "Fig. 1 PotentialFig. 2 Fig. 3 Fig. 41234Fig.1 Potential Fraud document case 4.1.2 Potential Fraud Cases: Error in data The code utilizes EasyOCR to extract text from the sample image after the template matching and SSI analysis. It converts the extracted text to lowercase and checks if certain attributes (text patterns) are present in a dataset. If all the required attributes are found in the detected text, it prints \"REAL DOCUMENT,\" indicating that the document is valid. Otherwise, it prints \"Error in data: Potential Fraud.\"", "figure_data": "", "figure_id": "fig_2", "figure_label": "1234", "figure_type": "figure" }, { "figure_caption": "Fig. 55Fig.5 Fraudulent Case 4.1.4 Valid or Real Documents", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 Fig. 767Fig.6 Snippet of Database", "figure_data": "", "figure_id": "fig_4", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Fig 8 .8Fig 8. Architectural diagram", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" } ]
Bommareddy Revanth; Srinivasa Reddy; Batta Venkata Rahul; Hemanth Raju
[ { "authors": "N Chen; D Blostein", "journal": "International Journal of Document Analysis and Recognition (IJDAR)", "ref_id": "b0", "title": "A survey of document image classification: problem statement, classifier architecture and performance evaluation", "year": "2007" }, { "authors": "", "journal": "", "ref_id": "b1", "title": "Document Image Classification: Progress over two decades", "year": "2018" }, { "authors": "", "journal": "", "ref_id": "b2", "title": "A Hybrid Matching Method for Document Image Template Recognition", "year": "2014" }, { "authors": "", "journal": "", "ref_id": "b3", "title": "Robust Document Template Matching Based on SIFT Features", "year": "2016" }, { "authors": "", "journal": "", "ref_id": "b4", "title": "Template Matching for Document Image Classification", "year": "2017" }, { "authors": "", "journal": "", "ref_id": "b5", "title": "Context-Aware Document Template Matching for Efficient Document Processing", "year": "2018" }, { "authors": "W Lu; X Zhang; H Lu; F Li", "journal": "Journal of Visual Communication and Image Representation", "ref_id": "b6", "title": "Deep hierarchical encoding model for sentence semantic matching", "year": "2020" }, { "authors": "J Li; Z Mei; T Zhang", "journal": "", "ref_id": "b7", "title": "A method for document image enhancement to improve template-based classification", "year": "2020-07" }, { "authors": "Z Guo; K Guo; B Nan; Y Tian; R G Iyer; Y Ma; . . Chawla; N V ", "journal": "", "ref_id": "b8", "title": "Graph-based molecular representation learning", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b9", "title": "Structural similarity -Wikipedia", "year": "2019-07-04" } ]
[]
10.18653/v1/2023.acl-long.572
2023-11-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2" ], "table_ref": [], "text": "The Transformer architecture, since it launched in 2017 [Vaswani et al., 2017], has been widely adopted in natural language processing, computer vision, and many other areas. Nowadays, various large language models (LLMs) are trained using the Transformer architecture.\nHowever, the lack of deep understanding and comprehensive interpretation hinders further improvements on the Transformer architecture. Moreover, the lack of interpretability also limits applications in real-world [Yang et al., 2023]. As a result, the vanilla Transformer architecture is still widely used today, over six years after its debut, even in the age of AI (artificial intelligence) rush. One reason is that interpreting the Transformer is a low-level, complex, demanding, time-consuming, and non-profitable task that fewer people would like to take.\nIn Chen [2023], a family of Extractors is proposed to replace the multi-head self-attention in the Transformer in a drop-in fashion. Specifically, a type of the Extractor called the higher-performance Extractor (HE) is capable of outperforming the multi-head self-attention with fewer arithmetic operations and the same number of trainable parameters. And a more powerful type of the Extractor called the super high-performance Extractor (SHE) achieves a much better performance than the multi-head self-attention.\nIn this paper, we first interpret the Transformer architecture, as well as what the self-attention and the Extractor actually do, based on our understanding and experiences. These interpretations are further proved and verified. Then, we propose an improvement on the SHE. Experimental results demonstrate that with the improvement the SHE can achieve a better performance. To our best knowledge, this is the first time that the Transformer architecture is comprehensively interpreted in plain words.\nOur contributions are summarized as follows.\n• We comprehensively interpret the Transformer architecture (as well as what the self-attention and the Extractor actually do) in plain words. • We prove and further verify the interpretations.\n• We propose an improvement on the SHE, which is a high-performance replacement of the multi-head selfattention in the Transformer.\n• We evaluate the performance of the Transformers equipped with the improved SHE in text generation." }, { "figure_ref": [ "fig_0" ], "heading": "Interpretation of the Transformer", "publication_ref": [ "b2", "b2" ], "table_ref": [], "text": "In this paper, we use text generation as an example task for the Transformer. As discussed in Chen [2023], the Transformer architecture is employed to build models to predict or infer the probabilities of the next token given a variable-length sequence of tokens.\nAlternatively, we can view those variable-length sequences as fixed-length sequences if we introduce a padding token whose embedding is always a zero vector. That is to say, we can virtually add padding tokens at the beginning of a variable-length sequence to make its length equal to a given number l, where l is the maximum sequence length that a model supports and the length of the context window in text generation. Since the embeddings of padding tokens are zero vectors, padding tokens have no effect on the output of the Transformer. 
In this way, we convert the \"variable-length\" problem into a \"fixed-length\" problem, meaning that we can regard all the input sequences to the Transformer as fixed-length sequences.\nIn text generation, the outputs of the Transformer are probabilities for choosing the next token. Since the last component of the Transformer is virtually a softmax regression [Chen, 2023], these probabilities come from the input vector of the softmax regression, whereas the input vector of the softmax regression comes from the output of the last Transformer layer. The number of elements in these vectors is d, where d is the internal \"dimension\" of the Transformer. To facilitate discussions, we introduce the term \"Transformer core\" to refer to the stack of m Transformer layers, as illustrated in Fig. 1. In the following discussions, we will focus on the Transformer core, for the interpretations of both softmax regression and embedding are pretty clear." }, { "figure_ref": [], "heading": "Lemma 1", "publication_ref": [], "table_ref": [], "text": "The Transformer core implements a matrix function f : R t×d → R t×d , where t is the length of the input sequence.\nProof. The Transformer inputs a sequence of indices of t tokens. The embedding layer maps the input sequence into a t × d matrix, which is the input of the Transformer core. Thus, the domain of the function is R t×d . On the other hand, the Transformer core actually outputs a t × d matrix, although in the phase of inference only the last row of this matrix is used by the succedent softmax regression. Hence, the codomain of the function is also R t×d . □" }, { "figure_ref": [], "heading": "Lemma 2", "publication_ref": [], "table_ref": [], "text": "Each layer (e.g., the k-th layer) in the Transformer core implements a matrix function f [k] : R t×d → R t×d , where k = 1, 2, • • • , m and m is the number of layers in the Transformer core.\nProof. There are m serially-connected layers in the Transformer core. These layers are identical in terms of architecture, although the values of their parameters may be different. They all input t × d matrices and output t × d matrices, meaning that both their domains and their codomains are R t×d . As the values of their parameters may be different, they may implement different functions. □" }, { "figure_ref": [ "fig_0" ], "heading": "Proposition 1", "publication_ref": [ "b3" ], "table_ref": [], "text": "The Transformer core implements a composite function\nf = f [m] • f [m-1] • • • • • f [1] .\nProof. All the layers in the Transformer core are serially connected, meaning that the output of the k-th layer is the input of the (k + 1)-th layer, where k = 1, 2, • • • , m -1. And all the domains and codomains of f\n[1] , f [2] , • • • , f [m] are the same. □\nAs shown in Fig. 1, a standard Transformer layer consists of two serially-connected sublayers: the self-attention sublayer or the Extractor sublayer as the first sublayer (sublayer 1) and the feed-forward network (FFN) sublayer as the second sublayer (sublayer 2). Please note that the order of the two sublayers may alter and a Transformer layer may consist of only one sublayer. Besides, it is also possible for the two sublayers to be connected in parallel, as proposed in Chowdhery et al. [2023] and Zhong et al. [2022]. In order to keep it consistent with the vanilla Transformer, in this paper we refer the self-attention sublayer or the Extractor sublayer to sublayer 1 and refer the FFN sublayer to sublayer 2." 
}, { "figure_ref": [], "heading": "Proposition 2", "publication_ref": [], "table_ref": [], "text": "Sublayer 1 performs dimensionality reduction.\nProof. The input of sublayer 1 is the input of a Transformer layer, which is a t × d matrix. The output of sublayer 1 is also a t × d matrix. The i-th row of the output matrix are computed using the 1-st, 2-nd , • • • , and i-th rows of the input matrix, where i = 1, 2, • • • , t. Those rows of the input matrix can be regarded as an id -vector. And the i-th row of the output matrix can be regarded as a d-vector. In this sense, sublayer 1 reduces id-vectors to d-vectors. This is what the self-attention sublayer does. As we discussed earlier, with padding tokens we can virtually extend an input i × d matrix to an l × d matrix, where i = 1, 2, • • • , t and t ≤ l. In this sense, sublayer 1 reduces ld-vectors to d-vectors. This is what the Extractor does. In either case, sublayer 1 reduces dimensionalities. □\nAlternately, from the aspect of encoding, the aforementioned dimensionality reduction can be viewed as an encoding process, for it converts id-vectors or ld-vectors into d-vectors. In this sense, an output d-vector is a \"code\" that ideally corresponds to an input id-vector or ld-vector. Both the self-attention and the Extractor can be regarded as encoders." }, { "figure_ref": [ "fig_0" ], "heading": "Proposition 3", "publication_ref": [], "table_ref": [], "text": "Sublayer 2 performs transformation.\nProof. The input and output of sublayer 2 are both t × d matrices. The i-th row of its output matrix is computed only using the i-th row of its input matrix, where i = 1, 2, • • • , t. And each row of the input and output matrices is a d-vector.\nIn this sense, sublayer 2 transforms d-vectors to d-vectors. □\nAlternately, from the aspect of mapping, sublayer 2 maps each row of its input matrix, or \"code\", into a corresponding output row vector.\nAs shown in Fig. 1, there is a residual connection in each sublayer. In practice, both pre-layer normalizations and dropouts are commonly applied within the residual connections. Since pre-layer normalization is a feature scaling technique in nature and dropout is a regularization technique and they are not quite relevant to the interpretation of the Transformer architecture, they will not be considered in this section, in order to keep the discussions simpler." }, { "figure_ref": [], "heading": "Proposition 4", "publication_ref": [], "table_ref": [], "text": "The residual connections in sublayers contribute to expediting the training of Transformer models.\nProof. In the phase of training, we usually use random numbers whose absolute values are close to zeros to initialize the weights of Transformer models to avoid divergence. And for the same reason, the initial learning rate used in training is small, too. Without residual connections, this may cause the absolute values of the elements in the output matrices of the sublayers to be small in the early phase of training, resulting in smaller update steps for weights in each training iteration, especially when the number of layers is large. Whereas with residual connections, the absolute values of the elements in the output matrices of the sublayers are larger in general in the early phase of training, resulting in larger update steps for weights, which expedites the training of Transformer models. 
□" }, { "figure_ref": [], "heading": "Proposition 5", "publication_ref": [], "table_ref": [], "text": "With the residual connection, sublayer 1 adjusts the row vectors of its input matrix based on the same and prior row vectors of its input matrix.\nProof. The i-th row vector in the output matrix of the self-attention or the Extractor is computed using the 1-st, 2-nd, • • • , and i-th row vectors of its input matrix, where i = 1, 2, • • • , t. With the residual connection, the i-th row vector in the output matrix of sublayer 1 is the resultant vector of the i-th row vector in the input matrix of sublayer 1 and the i-th\nZhe Chen row vector in the output matrix of the self-attention or the Extractor, meaning that the i-th row vector in the input matrix of sublayer 1 is adjusted by the i-th row vector in the output matrix of the self-attention or the Extractor. □" }, { "figure_ref": [ "fig_0" ], "heading": "Proposition 6", "publication_ref": [ "b2" ], "table_ref": [], "text": "With the residual connection, sublayer 2 adjusts the row vectors of its input matrix based on the same row vectors of its input matrix.\nProof. The i-th row vector in the output matrix of the FFN is computed just using the i-th row vector of its input matrix, where i = 1, 2, • • • , t. With the residual connection, the i-th row vector in the output matrix of sublayer 2 is the resultant vector of the i-th row vector in the input matrix of sublayer 2 and the i-th row vector in the output matrix of the FFN, meaning that the i-th row vector in the input matrix of sublayer 2 is adjusted by the i-th row vector in the output matrix of the FFN. □ Proposition 7\nThe Transformer core maps its input matrix into output matrix by driving the row vectors in its input matrix towards the row vectors in its output matrix layer by layer.\nProof. According to Lemma 1, the transformer core maps a t × d matrix into a t × d matrix. Moreover, according to Proposition 5 and Proposition 6, both sublayers in a Transformer layer adjust the row vectors in its input matrix. Therefore, each layer in the Transformer core drives the row vectors in its input matrix towards the row vectors in its output matrix. Hence, the Transformer core drives the row vectors in its input matrix towards the row vectors in its output matrix, layer by layer. □\nIn summary, the Transformer first converts a sequence of token indices (as well as their positions) into a matrix via embedding. Then, the Transformer core maps this matrix into another matrix by driving the row vectors in the input matrix towards the row vectors in its output matrix layer by layer, as illustrated in Fig. 1. The row vectors in the output matrix represent the predictions that the Transformer makes. Finally, the softmax regression in the Transformer maps each row vector in the output matrix into probabilities for choosing the next token in a vocabulary. Please note that in the phase of inference only the last row vector in the output matrix is used whereas in the phase of training all the row vectors may be used.\nThe self-attention sublayer in the Transformer provides a way to reduce multiple d-vectors (i.e., multiple row vectors in the input t × d matrix) to one d-vector. It generates dynamic weights (by the self-attention mechanism) to weight the multiple d-vectors, resulting in much diversified outputs or \"codes\", which is its advantage. 
The Extractor sublayer, on the other hand, uses static weights to weight the multiple d-vectors (i.e., the row vectors in the virtual input l × d matrix) and employs dynamic element-wise multiplications (as what the \"adjustment\" part does) to diversify its outputs or \"codes\". Thus, the Extractor is able to reduce the computational complexity of sublayer 1 while maintaining the performance of the Transformer. Moreover, another advantage of the Extractor sublayer is that it does not require positional embeddings, since it uses position-relevant static weights to weight the multiple d-vectors, or equivalently, the multiple d-vectors are computed using position-relevant weights. As an example, a type of the Extractor called the HE (as proposed in Chen [2023]) is able to outperform the multi-head self-attention with fewer arithmetic operations and the same number of trainable parameters." }, { "figure_ref": [ "fig_1" ], "heading": "Improvement of the Extractor", "publication_ref": [ "b2", "b2" ], "table_ref": [], "text": "In Chen [2023], a type of the Extractor called the SHE is proposed to replace the multi-head self-attention in a drop-in fashion. Although it outperforms the self-attention, we find that its performance can be further improved. In this section, we propose an improvement to the SHE.\nFig. 3 in Chen [2023] illustrates the SHE. An imperfection of the SHE is that it does not scale the output of its \"extraction\" part (i.e., the resultant vector of a number of vectors) in accordance with the length of the sequence (i.e., the number of vectors). As a result, the variances of the elements in the output matrix of the \"extraction\" part vary with the length of the sequence, causing the variances of the elements in the output matrix of sublayer 1 to vary as well, which deteriorates the performance of the Transformer. To address this issue, we propose to standardize the variances of the elements in the output matrix of the \"extraction\" part by multiplying by $\frac{1}{\sqrt{i}}$, as shown in Fig. 2 and Eq. (1):\n$x^{[\mathrm{out\_ext}]}_i = \frac{1}{\sqrt{i}} \sum_{j=1}^{i} x^{[\mathrm{in\_sub1}]}_j W^{[\mathrm{ext}]}_{i-j+1}$, (1)\nwhere $x^{[\mathrm{out\_ext}]}_i$ is the i-th row vector in the output matrix $X^{[\mathrm{out\_ext}]} \in \mathbb{R}^{t \times d}$, $x^{[\mathrm{out\_ext}]}_i \in \mathbb{R}^{1 \times d}$, $x^{[\mathrm{in\_sub1}]}_j$ is the j-th row vector in the input matrix $X^{[\mathrm{in\_sub1}]} \in \mathbb{R}^{t \times d}$, $x^{[\mathrm{in\_sub1}]}_j \in \mathbb{R}^{1 \times d}$, $W^{[\mathrm{ext}]}_1, W^{[\mathrm{ext}]}_2, \ldots, W^{[\mathrm{ext}]}_l \in \mathbb{R}^{d \times d}$ are weight matrices, and $i = 1, 2, \ldots, t$.\nWe call this improved version of the SHE the iSHE in this paper." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we first experimentally validate our interpretation of the Transformer. Then, we evaluate the performance of the proposed iSHE." }, { "figure_ref": [], "heading": "Validation of the Interpretation of the Transformer", "publication_ref": [ "b5" ], "table_ref": [], "text": "Since Proposition 7 interprets the core idea of the Transformer, we focus on the validation of Proposition 7 in the following experiment. In order to faithfully plot d-vectors in the Euclidean plane, we let d = 2. However, d is one of the major hyperparameters that decide the capacity of a Transformer model. Thus, it is reasonable to employ a small vocabulary when d is small. On the other hand, in order to train a Transformer model even with few layers and a very small vocabulary, lots of training examples are required, meaning that the training dataset should be large enough.
So we generate a dataset with over 220,000 training examples and only three tokens in the vocabulary. To be exact, the three tokens are \"0\", \"1\", and \";\", respectively. This dataset simply contains $2^{14}$ binary numbers (ranging from 0 to $2^{14}-1$) separated by semicolons.\nWith this dataset, two Transformer models are trained. One model employs the 1-head self-attention sublayer, whereas the other employs the SHE sublayer. The hyperparameters and settings for training these Transformer models are listed in Table 1. All the biases in these models are disabled, since in general they do not contribute to improving the performance of the Transformer. The parameter-free version of pre-layer normalization is used. And dropout is not used. Fig. 3 and Fig. 4 show the output row vectors of all the sublayers in the Transformer core when the Transformer core is fed an input row vector. It can be seen that, as the starting point, the input row vector is driven towards the end point (i.e., the corresponding output row vector) layer by layer.\nThe most related work we are aware of is Molina [2023]. This work connects the ideas proposed in previous works and interprets the Transformer based on the geometric interpretation of layer normalization. However, layer normalization is not an indispensable ingredient to the Transformer, as the Transformer can work without layer normalizations. In contrast, our interpretations do not rely on layer normalization." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, the Transformer architecture is comprehensively interpreted in plain words. Although we focus on the decoder of the Transformer, the interpretations can also be applied to the encoder.\nMoreover, a type of the Extractor, namely the SHE, is improved without introducing additional trainable parameters. The improved SHE, or the iSHE, achieves a better performance by simply introducing a multiplier factor to the output of the \"extraction\" part. Therefore, we strongly recommend replacing the SHE with the proposed iSHE." }, { "figure_ref": [], "heading": "Evaluation of the Performance of the proposed iSHE", "publication_ref": [ "b2" ], "table_ref": [], "text": "In order to evaluate the performance of the proposed iSHE, nine Transformer models with either the self-attention sublayer (with 1, 2, 4, 8, 16, 32, and 64 heads, respectively) or the Extractor sublayer (the SHE or the iSHE) are trained using exactly the same dataset that is used in Chen [2023] for text generation. This dataset is composed of the top 100 books on English children's literature available at gutenberg.org, a library of free ebooks. The raw text of the books is tokenized using the Hugging Face BPE (byte-pair encoding) tokenizer with a vocabulary size of 5000, resulting in a total of 8.4M tokens.\nThe models are implemented and trained using the PyTorch 2.1 framework on an NVIDIA GeForce RTX 4050 GPU (graphics processing unit). The hyperparameters and settings for this experiment are listed in Table 2. As in the previous experiment, all the biases in these models are disabled and dropout is not used. And the parameter-free version of pre-layer normalization is used. The weights are initialized randomly following a normal distribution with a mean of zero and a standard deviation of 0.01.
All the models are initialized and trained with the same random seed.\nWe use training cost (i.e., the average training loss over a batch) as the evaluation metric since training cost equals perplexity in this task. Perplexity measures how well a probability model predicts. The lower the perplexity, the better the model predicts. Fig. 5 shows the median, the first quartile, and the third quartile of the training costs for every non-overlapping 1000 batches. This figure evidently indicates that the model with the proposed iSHE sublayer outperforms all the other models. Please note that such a performance gain does not cost a single extra trainable parameter. Furthermore, both the iSHE and the SHE do not require the positional embedding that is required by the self-attention, saving l • d trainable parameters compared with the self-attention in this case." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "There are quite a few works related to interpreting the Transformer. Most of them focus on interpreting the attention matrix, specific network components, model parameters, or hidden representations." } ]
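For readers who prefer code to notation, the following minimal PyTorch sketch implements the iSHE "extraction" step of Eq. (1) from the section above, i.e., a causal sum of the input row vectors weighted by position-relevant matrices and scaled by 1/sqrt(i). It is written for clarity rather than efficiency, the hyperparameters are placeholders, and it is an assumed illustration rather than the authors' released implementation.

```python
# Minimal sketch of the iSHE "extraction" step in Eq. (1):
# x_i = (1 / sqrt(i)) * sum_{j<=i} x_j @ W_{i-j+1}
import math
import torch
import torch.nn as nn

class ISHEExtraction(nn.Module):
    def __init__(self, d_model: int, max_len: int):
        super().__init__()
        # One d x d static weight matrix per relative offset (W_1 ... W_l).
        self.weights = nn.Parameter(torch.randn(max_len, d_model, d_model) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, t, d)
        _, t, _ = x.shape
        rows = []
        for i in range(1, t + 1):
            # Weighted causal sum over positions j = 1..i with offset-specific weights.
            acc = sum(x[:, j - 1] @ self.weights[i - j] for j in range(1, i + 1))
            rows.append(acc / math.sqrt(i))                # the proposed 1/sqrt(i) scaling
        return torch.stack(rows, dim=1)

x = torch.randn(2, 8, 16)
print(ISHEExtraction(d_model=16, max_len=32)(x).shape)     # torch.Size([2, 8, 16])
```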
It has been over six years since the Transformer architecture was put forward. Surprisingly, the vanilla Transformer architecture is still widely used today. One reason is that the lack of deep understanding and comprehensive interpretation of the Transformer architecture makes it more challenging to improve the Transformer architecture. In this paper, we first interpret the Transformer architecture comprehensively in plain words based on our understanding and experiences. The interpretations are further proved and verified. These interpretations also cover the Extractor, a family of drop-in replacements for the multi-head self-attention in the Transformer architecture. Then, we propose an improvement on a type of the Extractor that outperforms the self-attention, without introducing additional trainable parameters. Experimental results demonstrate that the improved Extractor performs even better, showing a way to improve the Transformer architecture.
Interpretation of the Transformer and Improvement of the Extractor
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of the Transformer.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The proposed iSHE.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Median, first quartile, and third quartile of the training costs of the models with different types of sublayer 1.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" } ]
Zhe Chen
[ { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b0", "title": "Attention is all you need", "year": "2017" }, { "authors": "Sen Yang; Shujian Huang; Wei Zou; Jianbing Zhang; Xinyu Dai; Jiajun Chen", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Local interpretation of transformer based on linear decomposition", "year": "2023" }, { "authors": "Zhe Chen", "journal": "", "ref_id": "b2", "title": "Attention is not all you need anymore", "year": "2023" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann; Parker Schuh; Kensen Shi; Sasha Tsvyashchenko; Joshua Maynez; Abhishek Rao; Parker Barnes; Yi Tay; Noam Shazeer; Emily Vinodkumar Prabhakaran; Nan Reif; Ben Du; Reiner Hutchinson; James Pope; Jacob Bradbury; Michael Austin; Guy Isard; Pengcheng Gur-Ari; Toju Yin; Anselm Duke; Sanjay Levskaya; Sunipa Ghemawat; Henryk Dev; Xavier Michalewski; Vedant Garcia; Kevin Misra; Liam Robinson; Denny Fedus; Daphne Zhou; David Ippolito; Hyeontaek Luan; Barret Lim; Alexander Zoph; Ryan Spiridonov; David Sepassi; Shivani Dohan; Mark Agrawal; Andrew M Omernick; Thanumalayan Dai; Marie Sankaranarayana Pillai; Aitor Pellat; Erica Lewkowycz; Rewon Moreira; Oleksandr Child; Katherine Polozov; Zongwei Lee; Xuezhi Zhou; Brennan Wang; Mark Saeta; Orhan Diaz; Michele Firat; Jason Catasta; Kathy Wei; Douglas Meier-Hellstern; Jeff Eck; Slav Dean; Noah Petrov; Fiedel", "journal": "Journal of Machine Learning Research", "ref_id": "b3", "title": "Palm: Scaling language modeling with pathways", "year": "2023" }, { "authors": "Yaofeng Desmond Zhong; Tongtao Zhang; Amit Chakraborty; Biswadip Dey", "journal": "", "ref_id": "b4", "title": "A neural ODE interpretation of transformer layers", "year": "2022" }, { "authors": "Raul Molina", "journal": "", "ref_id": "b5", "title": "Traveling words: A geometric interpretation of transformers", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 313.37, 584.6, 111.58, 10.81 ], "formula_id": "formula_0", "formula_text": "f = f [m] • f [m-1] • • • • • f [1] ." }, { "formula_coordinates": [ 3, 72, 630.72, 468, 21.72 ], "formula_id": "formula_1", "formula_text": "[1] , f [2] , • • • , f [m] are the same. □" }, { "formula_coordinates": [ 6, 244.85, 366.84, 295.81, 29.69 ], "formula_id": "formula_2", "formula_text": "[out_ext] i = 1 √ i i j=1 x [in_sub1] j W [ext] i-j+1(1)" }, { "formula_coordinates": [ 6, 333.5, 406.58, 165.2, 14.02 ], "formula_id": "formula_3", "formula_text": "X [out_ext] ∈ R t×d , x [out_ext] i ∈ R 1×d , x [in_sub1]" }, { "formula_coordinates": [ 6, 71.75, 421.65, 469.49, 29.19 ], "formula_id": "formula_4", "formula_text": "[in_sub1] , X [in_sub1] ∈ R t×d , x [in_sub1] j ∈ R 1×d , W [ext] 1 , W [ext] 2 , • • • , W [ext] l are weight matrices, W [ext] 1 , W [ext] 2 , • • • , W [ext] l ∈ R d×d , and i = 1, 2, • • • , t." } ]
2023-11-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b2", "b3", "b4", "b0", "b5", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b18", "b19", "b0", "b20", "b21", "b22", "b23", "b25", "b0", "b1" ], "table_ref": [], "text": "Semantic segmentation refers to the task of assigning pixel-level category labels in an image, which has achieved significant progress in the last few years [2][3][4][5]. It is worth noting that prevailing models usually require large-scale Figure 1. (a) Considering the driving scenario, we observe that the object location is relatively stable according to the distance from the camera. Therefore, we propose a Depth-guided Contextual Filter (DCF) which is aware of the semantic categories distribution in terms of Near, Middle, and Far view to facilitate cross-domain mixing. (b) Since we explicitly take the semantic layout into consideration, our method achieves better segmentation edges and yields significant improvement on small-scale categories such as Traffic Sign, Pole, and Rider, compared to the competitive MIC (second row) with the vanilla mixing strategy [1].\ntraining datasets with high-quality annotations, such as ADE20K [6], to achieve good performance and but such pixel-level annotations in real-world are usually unaffordable and time-consuming [7]. One straightforward idea is to train networks with synthetic data so that the pixel-level annotations are easier to obtain [8,9]. However, the network trained with synthetic data usually results in poor scalability when being deployed to a real-world environment due to mul-tiple factors, such as weather, illumination, and road design. Therefore, researchers resort to unsupervised domain adaptation (UDA) to further tackle the variance between domains. One branch of UDA methods attempts to mitigate the domain shift by aligning the domain distributions [10][11][12][13][14]. Another potential paradigm to heal the domain shift is self-training [15][16][17][18][19], which recursively refine the target pseudo-labels. Taking one step further, recent DACS [20] and follow-up works [1,[21][22][23][24][25][26] combine self-training and ClassMix [27] to mix images from both source and target domain. In this way, these works could craft highly perturbed samples to assist training by facilitating learning shared knowledge between two domains. Specifically, cross-domain mixing aims to copy the corresponding regions of certain categories from a source domain image and paste them onto an unlabelled target domain image. We note that such a vanilla strategy leads to pasting a large amount of objects to the unrealistic depth position. It is because that every category has its own position distribution. For instance, the background classes such as \"sky\" and \"vegetation\" usually appear farther away, while the classes that occupy a small number of pixels such as \"traffic signs\" and \"pole\", usually appear closer as shown in Figure 1 (a). Such crafted training data compromise contextual learning, leading to sub-optimal location prediction performance, especially for small objects.\nTo address these limitations, we observe the real-world depth distribution and find that semantic categories are easily separated (disentangled) in the depth map since they follow a similar distribution under certain scenarios, e.g., urban. Therefore, we propose a new depth-aware framework, which contains Depth Contextual Filter (DCF) and a cross-task encoder. 
In particular, DCF removes unrealistic classes mixed with the real-world target training samples based on the depth information. On the other hand, multimodal data could improve the performance of deep representations and the effective use of the deep multi-task features to facilitate the final predictions is crucial. The proposed cross-task encoder contains two specific heads to generate intermediate features for each task and an Adaptive Feature Optimization module (AFO). AFO encourages the network to optimize the fused multi-task features in an end-to-end manner. Specifically, the proposed AFO adopts a series of transformer blocks to capture the information that is crucial to distinguish different categories and assigns high weights to discriminative features and vice versa.\nThe main contributions are as follows: (1) We propose a simple Depth-Guided Contextual Filter (DCF) to accurately explicitly leverage the key semantic categories distribution hidden in the depth map, enhancing the realism of cross-domain information mixing and refining the crossdomain layout mixing. (2) We propose an Adaptive Feature Optimization module (AFO) that enables the cross-task encoder to exploit the discriminative depth information and embed it with the visual feature which jointly facilitates semantic segmentation and pseudo depth estimation. (3) Albeit simple, the effectiveness of our proposed methods has been verified by extensive ablation studies. Despite the pseudo depth, our method still achieves competitive accuracy on two commonly used scene adaptation benchmarks, namely 77.7 mIoU on GTA→Cityscapes and 69.3 mIoU on Synthia→Cityscapes." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Unsupervised Domain Adaptation", "publication_ref": [ "b11", "b27", "b12", "b9", "b28", "b10", "b29", "b30", "b14", "b34", "b15", "b35", "b36", "b19", "b37", "b38", "b39", "b40", "b23", "b27", "b42", "b43", "b45", "b46", "b47", "b48", "b49", "b51" ], "table_ref": [], "text": "Unsupervised domain adaptation (UDA) aims to train a model on a label-rich source domain and adapt the model to a label-scarce target domain. Some methods propose learning the domain-invariant knowledge by aligning the source and target distribution at different levels. For instance, AdaptSeg-Net [12], ADVENT [28], and CLAN [13] adversarially align the distributions in the feature space. CyCADA [10] diminishes the domain shift at both pixel-level and feature-level representation. DALN [29] proposes a discriminator-free adversarial learning network and leverages the predicted discriminative information for feature alignment. Both Wu et al. [11] and Yue et al. [30] learn domain-invariant features by transferring the input images into different styles, such as rainy and foggy, while Zhao et al. [31] and Zhang et al. [32] diversify the feature distribution via normalization and adding noise respectively. Another line of work refines pseudo-labels gradually under the iterative self-training framework, yielding competitive results. Following the motivation of generating highly reliable pseudo labels for further model optimization, CBST [15] adopts class-specific thresholds on top of self-training to improve the generated labels. Feng et al.\n[33] acquire pseudo labels with high precision by leveraging the group information. PyCDA [34] constructs pseudo-labels in various scales to further improve the training. Zheng et al. 
[35] introduce memory regularization to generate consistent pseudo labels. Other works propose either confidence regularization [16,17] or category-aware rectification [36,37] to improve the quality of pseudo labels. DACS [20] proposes a domain-mixed self-training pipeline to mix cross-domain images during training, avoiding training instabilities. Kim et al. [38], Li et al. [39] and Wang et al. [40] combine adversarial learning and self-training for further improvement. Chen et al. [41] establish a deliberated domain bridging (DDB) that aligns and interacts with the source and target domain in the intermediate space. SePiCo [24] and PiPa [25] adopt contrastive learning to align the domains. Liu et al. [42] address the label shift problem by adopting class-level feature alignment for conditional distribution alignment. Researchers have also attempted entropy minimization [28,43], image translation [44,45], and consistency regularization [46][47][48][49]. Recent multi-target domain adaptation (MTDA) methods enable a single model to adapt a labeled source domain to multiple unlabeled target domains [50][51][52]. However, the above methods usually ignore the rich multi-modality information, which can be easily obtained from depth sensors and other sensors." }, { "figure_ref": [], "heading": "Depth Estimation and Multi-task Learning in Semantic Segmentation", "publication_ref": [ "b52", "b53", "b54", "b55", "b56", "b57", "b58", "b59", "b60", "b61", "b62", "b20", "b63" ], "table_ref": [], "text": "Semantic segmentation and geometric information are shown to be highly correlated [53][54][55][56][57][58][59]. Recently, depth estimation has been increasingly used to improve the learning of semantics within the context of multi-task learning, but the depth information should be exploited more precisely to help the domain adaptation. SPIGAN [60] pioneered the use of geometric information as additional supervision by regularizing the generator with an auxiliary depth regression task. DADA [61] introduces an adversarial training framework based on the fusion of semantic and depth predictions to facilitate the adaptation. GIO-Ada [62] leverages the geometric information on both the input level and output level to reduce domain shift. CTRL [63] encodes task dependencies between the semantic and depth predictions to capture the cross-task relationships. CorDA [21] bridges the domain gap by utilizing self-supervised depth estimation on both domains. Wu et al. [64] propose to further support semantic segmentation by depth distribution density. Our work follows a similar spirit to leverage depth knowledge as auxiliary supervision. It is worth noting that our work is primarily different from existing works in the following two aspects: (1) from the data perspective, we explicitly delineate the depth distribution to refine data augmentation and construct realistic training samples to enhance contextual learning; (2) from the network perspective, our proposed multi-task learning network not only adopts auxiliary supervision for learning more robust deep representations but also facilitates the multi-task feature fusion by iteratively deploying transformer blocks to jointly learn the rich multi-task information for improving the final predictions."
}, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "In the common UDA setting, label-rich synthetic data is used as source S and label-scarce real-world data is treated as target T . For example, we have n number of labeled training samples in the source domain,\nx S 1 , y S 1 , z S 1 , . . . , x S n , y S n , z S n\nsampled from source domain data X S , Y S , Z S , where x S i and y S i are the i-th sample and corresponding ground truth for semantic segmentation. z S i is the label for the depth estimation task. Similarly, we have m number of unlabeled target images sampled from target domain data X T , Z T , which is denoted by\nx T 1 , z T 1 , . . . , x T m , z T m\n, where x T i is the i-th" }, { "figure_ref": [], "heading": "Algorithm 1 Depth-guided Contextual Filter Algorithm with Cross-Image Mixing and Self Training", "publication_ref": [], "table_ref": [], "text": "Input: Source domain: (x S , y S , z S ∼ X S , Y S , Z S ), Target domain:\n(x T , z T ∼ X T , Z T ). Semantic network F θ . 1: Initialize network parameters θ randomly. 2: for iteration = 1 to n do 3: ŷT ← F θ x T , Generate pseudo label 4:\nPre-calculate the density value p for each class i at each depth interval from the target depth map z T , 5:\nŷM ← M ⊙ y S + (1 -M) ⊙ ŷT , Randomly select 50% categories and copy the category ground truth label from the source image to target pseudo label Re-calculate the density value p after the mixing, 7:\nx M ← M ⊙ x S + (1 -M) ⊙ x T ,\nCalculate the depth density distribution difference before and after mixing, 8:\nFilter the category once the difference exceeds the threshold," }, { "figure_ref": [], "heading": "9:", "publication_ref": [], "table_ref": [], "text": "Re-generate the depth-aware binary mask M DCF , 10:\nŷF ← M DCF ⊙ y S + 1 -M DCF ⊙ ŷT , Generate the filtered training samples with new DCF mask x F ← M DCF ⊙ x S + 1 -M DCF ⊙ x T , 11: Compute predictions ȳS ← argmax F θ x S , ȳF ← argmax F θ x F , 12:\nCompute loss for the batch: ℓ ← L ȳS , y S , ȳF , ŷF ." }, { "figure_ref": [], "heading": "13:", "publication_ref": [], "table_ref": [], "text": "Compute ∇ θ ℓ by backpropagation." }, { "figure_ref": [], "heading": "14:", "publication_ref": [ "b64" ], "table_ref": [], "text": "Perform stochastic gradient descent. 15: end for 16: return F θ unlabeled sample in the target domain and z T i is the label for the depth estimation task. Since depth annotation is not supported by common public datasets, we adopt pseudo depth that can be easily generated by the off-the-shelf model [65]." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Depth-guided Contextual Filter", "publication_ref": [], "table_ref": [], "text": "In UDA, recent works [1, 21-23, 25, 27] apply the strategy to generate cross-domain augmented samples by mixing pixels. The typical mixing is to copy one set of pixels from a source domain image and paste such pixels to one set of pixels from a target domain image. Due to the different layouts between source and target domain data, it is challenging for such a vanilla method to craft high-quality cross-domain mixing samples for training. To decrease noisy signals and simulate augmented training samples with real-world layouts, we propose Depth-guided Contextual Filter to reduce the noisy pixels that are naively mixed across domains. 
Based on the hypothesis that most semantic categories usually fall under a finite depth range, we introduce DCF, which divides the target depth map $z^T$ into a few discrete depth intervals $(I_{z_1}, \ldots, I_{z_n})$. The implementation of DCF is represented as pseudo-code in Algorithm 1, where the image $x^S$ and the corresponding semantic labels $y^S$ are sampled from source domain data. The image $x^T$ and the depth label $z^T$ are from target domain data. The pseudo label $\hat{y}^T$ is then generated:\n$\hat{y}^T = F_\theta(x^T)$. (1)\nFor a given real-world target input image $x^T$ combined with the pseudo label $\hat{y}^T$ and target depth map $z^T$, the density value at each depth interval $(I_{z_1}, \ldots, I_{z_n})$ for each class $i \in (1, \ldots, C)$ can be pre-calculated. For example, the density value for class i at the depth interval $I_{z_1}$ is calculated as $p(i, I_{z_1})$. All the density values make up the depth distribution in the target domain image. Then we randomly select half of the categories on the source images. In practice, we apply a binary mask $M$ to denote the corresponding pixels. The naive cross-domain mixed image $x^{Mix}$ and the mixed label $\hat{y}^{Mix}$ can then be formulated as:\n$x^{Mix} = M \odot x^S + (1 - M) \odot x^T$, (2)\n$\hat{y}^{Mix} = M \odot y^S + (1 - M) \odot \hat{y}^T$, (3)\nwhere $\odot$ denotes the element-wise multiplication between the mask and the image. The naively mixed images are visualized in Figure 2. It can be observed that, due to the depth distribution difference between the two domains, pixels of the \"Building\" category are mixed from the source domain into the target domain, creating unrealistic images. Training with such samples will compromise contextual learning. Therefore, we propose to filter the pixels that do not match the depth density distribution in the mixed image. After the naive mixing, we re-calculate the density value for each class at each depth interval. For example, the new density value for class i at the depth interval $I_{z_1}$ is denoted as $p'(i, I_{z_1})$.\nThen we calculate the depth density distribution difference for each pasted category and denote the difference for class i at the depth interval $I_{z_1}$ as $\mathrm{diff}^{z_1}_i$. Once $\mathrm{diff}^{z_1}_i$ exceeds the threshold of that category i, these pasted pixels are removed. After performing DCF, we confirm the final realistic pixels to be mixed and construct a depth-aware binary mask $M^{DCF}$, which changes dynamically based on the depth layout of the current target image.\nThe filtered mixing samples are then generated. In practice, we directly apply the updated depth-aware mask to replace the original mask. Therefore, the new mixed sample and the label are as follows:\n$x^F = M^{DCF} \odot x^S + (1 - M^{DCF}) \odot x^T$, (4)\n$\hat{y}^F = M^{DCF} \odot y^S + (1 - M^{DCF}) \odot \hat{y}^T$. (5)\nThe filtered samples are visualized in Figure 2. Because large objects such as \"sky\" and \"terrain\" usually aggregate and occupy a large number of pixels while small objects only occupy a small number of pixels in a certain depth range, we set different filtering thresholds for each category. DCF uses pseudo semantic labels for the target domain as there is no ground truth available. Since the label prediction is not stable in the early stage, we apply a warmup strategy and perform DCF after 10000 iterations. Examples of the input images, naively mixed samples and filtered samples are presented in Figure 2. The sample after the process of the DCF module has the pixels from the source domain that match the depth distribution of the target domain, helping the network to better deal with the domain gap.
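To make the filtering procedure above concrete, the following NumPy sketch computes per-class depth densities before and after ClassMix-style pasting and removes pasted classes whose density shift exceeds a per-class threshold; the number of depth intervals and the threshold values are illustrative assumptions, not the values used in the paper.

```python
# Minimal sketch of the Depth-guided Contextual Filter (illustrative thresholds).
import numpy as np

def class_depth_density(label, depth, cls, bins):
    """Depth histogram (density) of the pixels belonging to one class."""
    mask = (label == cls)
    if mask.sum() == 0:
        return np.zeros(len(bins) - 1)
    hist, _ = np.histogram(depth[mask], bins=bins, density=True)
    return hist

def depth_guided_filter(src_label, tgt_pseudo, tgt_depth, mix_mask,
                        selected_classes, thresholds, n_intervals=8):
    bins = np.linspace(tgt_depth.min(), tgt_depth.max(), n_intervals + 1)
    mixed_label = np.where(mix_mask, src_label, tgt_pseudo)
    keep_mask = mix_mask.copy()
    for cls in selected_classes:
        before = class_depth_density(tgt_pseudo, tgt_depth, cls, bins)
        after = class_depth_density(mixed_label, tgt_depth, cls, bins)
        diff = np.abs(after - before).max()           # largest per-interval shift
        if diff > thresholds.get(cls, 0.3):           # per-class threshold (assumed)
            keep_mask &= ~(mix_mask & (src_label == cls))  # drop the unrealistic paste
    return keep_mask                                   # depth-aware mask M^DCF
```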
}, { "figure_ref": [ "fig_2" ], "heading": "Multi-task Scene Adaptation Framework", "publication_ref": [], "table_ref": [], "text": "In order to exploit the relation between segmentation and depth learning, we introduce a multi-task scene adaptation framework including a high resolution semantic encoder, and a cross-task shared encoder with a feature optimization module, which is depicted in Figure 3. The proposed framework incorporates and optimizes the fusion of depth information for improving the final semantic predictions. image that is half of the full resolution. To reduce the domain gap between scene adaptation and supervised learning while maintaining the GPU memory consumption, we adopt a highresolution encoder to encode HR image crops into deep HR features. Then a semantic decoder is used to generate the HR semantic predictions ȳhr . We adopt the cross entropy loss for semantic segmentation:" }, { "figure_ref": [ "fig_2" ], "heading": "High Resolution Semantic Prediction. Most supervised methods use high resolution images for training, but common scene adaptation methods usually use random crops of the", "publication_ref": [ "b60", "b62", "b70" ], "table_ref": [], "text": "L S hr x S , y S = E -y S log ȳS hr ,(6)\nL F hr x F , y F = E -ŷ F log ȳF hr ,(7)\nwhere ȳS hr and ȳT hr are high resolution semantic predictions. y S is the one-hot semantic label for the source domain and ŷF is the one-hot pseudo label for the depth-aware fused domain.\nAdaptive Feature Optimization. In addition to the high resolution encoder, We use another cross-task encoder to encode input images which are shared for both tasks. Depth maps are rich in spatial depth information, but a naive concatenation of depth information directly to visual information causes some interference, e.g. categories at similar depth positions are already well distinguished by visual information, and attention mechanisms can help the network to select the crucial part of the multitask information. In the proposed multi-task learning framework, the visual semantic feature and depth feature is generated by a visual head and a depth head, respectively. As shown in Figure 3, after applying batch normalization, an Adaptive Feature Optimization module then concatenates the normalized input visual feature and the input depth feature to create a fused multi-task feature:\nf in f use = CONCAT f in vis , f in depth ,(8)\nwhere CONCAT (, ) denotes the concatenation operation. The fused feature is then fed into a series of transformer blocks to capture the key information between the two tasks.\nThe attention mechanism adaptively adjusts the extent to which depth features are embedded in visual features.\nf out f use = W T rans f in f use ,(9)\nwhere W T rans is the transformer parameter. The learned output of the transformer blocks is a weight map γ which is multiplied back to the input visual feature and depth feature, resulting in an optimized feature for each task.\nγ = σ W Conv ⊗ f out f use ,(10)\nwhere W Conv denotes the convolution parameter, ⊗ denotes the convolution operation and σ represents the sigmoid function. The weight matrix γ performs adaptive optimization of the muti-task features. And then the fused feature f out f use is fed into different decoders for predicting different final tasks, i.e., the visual and the depth task. 
The output features are essentially multimodal features containing crucial depth information.
f out vis = f out vis ⊙ γ,(11)
f out depth = f out depth ⊙ γ,(12)
where ⊙ represents element-wise multiplication. The optimized visual and depth features are then fed into the multimodal communication module for further processing. The multimodal communication module refines the learning of key information between the two tasks by iterative use of transformer blocks. The inference is merely based on the visual input once the feature optimization is finished. The final semantic prediction ȳS vis and depth prediction zS are generated from the final visual feature f final vis and depth feature f final depth by the visual head and the depth head. Similar to the high-resolution predictions, we use the cross-entropy loss for the semantic loss calculation:
L S vis x S , y S = E -y S log ȳS vis ,(13)
L F vis x F , y F = E -ŷ F log ȳF vis .(14)
We also employ the berHu loss for depth regression in the source domain:
L S depth z S = E berHu zS -z S ,(15)
where zS and z S are the predicted and ground-truth depth maps. Following [61,63], we deploy the reversed Huber criterion [71], which is defined as:
berHu (e z ) = |e z | if |e z | ≤ H, and ((e z ) 2 + H 2 ) / (2H) if |e z | > H, with H = 0.2 max (|e z |),(16)
where H is a positive threshold that we set to 0.2 of the maximum depth residual. Finally, the overall loss function is:
L = L S hr + L S vis + λ depth L S depth + L F hr + L F vis ,(17)
where the hyperparameter λ depth is the loss weight. Considering that our main task is semantic segmentation and depth estimation is the auxiliary task, we empirically set λ depth = 0.1 × 10 -2 . We also design ablation studies that change the weight of the depth task λ depth to the level of 10 -1 or 10 -3 ." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b0", "b19", "b20", "b21", "b22", "b45", "b20", "b64", "b8", "b20", "b71", "b21", "b4", "b0", "b19", "b21", "b22", "b19", "b22", "b19" ], "table_ref": [], "text": "Datasets. We evaluate the proposed framework on two scene adaptation settings, i.e., GTA → Cityscapes and SYNTHIA → Cityscapes, following common protocols [1,[20][21][22][23]46]. In particular, the GTA5 dataset [8] is a synthetic dataset collected from a video game, which contains 24,966 images annotated with 19 classes. Following [21], we adopt depth information generated by the Monodepth2 [65] model, which is trained merely on GTA image sequences. SYNTHIA [9] is a synthetic urban scene dataset with 9,400 training images and 16 classes. The simulated depth information provided by SYNTHIA is used. GTA and SYNTHIA serve as the source domain datasets. The target domain dataset is Cityscapes, which is collected from real-world street-view images. Cityscapes contains 2,975 unlabeled training images and 500 validation images. The resolution of Cityscapes is 2048 × 1024, and the common protocol downscales it to 1024 × 512 to save memory. Following [21], the stereo depth estimation from [72] is used. We report the Intersection over Union (IoU) for per-class performance and the mean Intersection over Union (mIoU) over all classes. The code is based on PyTorch [73]. We will make our code open-source for reproducing all results.
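For reference, the per-class IoU and mIoU used for evaluation can be computed from a confusion matrix; the sketch below is a generic PyTorch illustration, not the released evaluation code, and the ignore index of 255 follows a common Cityscapes convention.
import torch

def per_class_iou(pred, gt, num_classes, ignore_index=255):
    # Confusion-matrix based IoU: intersection / union per class, then the mean (mIoU).
    valid = gt != ignore_index
    hist = torch.bincount(num_classes * gt[valid] + pred[valid], minlength=num_classes ** 2)
    hist = hist.reshape(num_classes, num_classes).float()
    inter = hist.diag()
    union = hist.sum(0) + hist.sum(1) - inter
    iou = inter / union.clamp(min=1)
    present = union > 0                  # average only over classes that appear
    return iou, iou[present].mean()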
Experimental Setup. We adopt the DAFormer [22] network with the MiT-B5 backbone [5] as the high-resolution encoder and the DeepLabV2 network with a ResNet-101 backbone as the cross-task encoder to reduce memory consumption. All backbones are initialized with ImageNet pretraining.
Our training procedure is based on self-training methods with cross-domain mixing [1,20,22,23] and enhanced by our proposed Depth-guided Contextual Filter. Following [20,23], the input image resolution is half of the full resolution for the cross-task encoder and the full resolution for the high-resolution encoder. We utilize the same data augmentations, e.g., color jitter and Gaussian blur, and empirically set the pseudo-label threshold to 0.968 following [20]. We train the network with batch size 2 for 40k iterations on a Tesla V100 GPU." }, { "figure_ref": [ "fig_3" ], "heading": "Comparison with SOTA", "publication_ref": [ "b0", "b22", "b0", "b0" ], "table_ref": [ "tab_0", "tab_0" ], "text": "Results on GTA→Cityscapes. We show our results on GTA → Cityscapes in Table 1 and highlight the best results in bold. It can be observed that our method yields a significant performance improvement over the state-of-the-art method MIC [1], from 75.9 mIoU to 77.7 mIoU. Usually, classes that occupy a small number of pixels are difficult to adapt and have comparably low IoU performance. However, our method demonstrates competitive IoU improvements on most categories, especially on small objects, such as +5.7 on \"Rider\", +5.4 on \"Fence\", +5.2 on \"Wall\", +4.4 on \"Traffic Sign\" and +3.4 on \"Pole\". The result shows the effectiveness of the proposed contextual filter and cross-task learning framework in contextual learning. Our method also increases the mIoU performance of classes that aggregate and occupy a large number of pixels in an image by a smaller margin, such as +1.8 on \"Pedestrian\" and +1.1 on \"Bike\", probably because the rich texture and color information contained in the visual feature is already sufficient to recognize these relatively easier classes. The above observations are also qualitatively reflected in Figure 4, where we visualize the segmentation results of the proposed method and the comparison with the previous strong transformer-based methods HRDA [23] and MIC [1]. The qualitative results highlighted by white dashed boxes show that the proposed method largely improves the prediction quality of the challenging small object \"Traffic Sign\" and the large category \"Terrain\". Results on SYNTHIA→Cityscapes. We show our results on SYNTHIA → Cityscapes in Table 2; the results show a consistent performance improvement of our method, increasing from 67.3 to 69.3 mIoU (+2.0) compared to the state-of-the-art method MIC [1]. In particular, our method significantly increases the IoU performance of the challenging class \"SideWalk\" from 50.5 to 63.1 (+12.6 mIoU). It is also noticeable that our method remains competitive in segmenting most individual classes and yields a significant increase of +6.8 on \"Road\", +6.6 on \"Bus\", +3.9 on \"Pole\", +3.7 on \"Road\", +3.2 on \"Wall\" and +2.9 on \"Truck\"." }, { "figure_ref": [], "heading": "Ablation Study on Different Scene Adaptation Frameworks", "publication_ref": [ "b21", "b22", "b1", "b76" ], "table_ref": [ "tab_3" ], "text": "We combine our method with different scene adaptation architectures on GTA→Cityscapes. Table 4 shows that our method achieves consistent and significant improvements across different methods with different network architectures.
Firstly, our method improves the state-of-the-art performance by +1.8 mIoU, and we further evaluate the proposed method on two strong methods based on a transformer backbone, yielding +3.2 mIoU and +2.3 mIoU performance increases on DAFormer [22] and HRDA [23], respectively. Secondly, we evaluate our method on the DeepLabV2 [2] architecture with a ResNet-101 [77] backbone. We show that we improve the performance of the CNN-based cross-domain mixing method, i.e., DACS, by +4.1 mIoU. The ablation study verifies the effectiveness of our method in leveraging depth information to enhance cross-domain mixing not only on transformer-based networks but also on CNN-based architectures." }, { "figure_ref": [], "heading": "Ablation Study on Different Components of the Proposed Method", "publication_ref": [ "b22", "b21", "b0" ], "table_ref": [ "tab_2" ], "text": "In order to verify the effectiveness of our proposed components, we train five different models, M1 to M5, and show the results in Table 3. \"ST Base\" means the self-training baseline with a semantic segmentation branch and a depth regression branch. \"Naive Mix\" denotes the cross-domain mixing strategy. \"DCF\" represents the proposed depth-aware mixing (Depth-guided Contextual Filter). \"AFO\" denotes the proposed Adaptive Feature Optimization module, and we use two different methods to perform AFO. Firstly, we leverage channel attention (CA), which can select useful information along the channel dimension, to perform the feature optimization. In this method, the fused feature is adaptively optimized by SENet [78]; the output is a weighted vector that is multiplied back onto the visual and depth features. We use \"AFO (CA)\" to denote this method. Secondly, we leverage the iterative use of transformer blocks to adaptively optimize the multi-task feature. In this case, the output of the transformer blocks is a weight map. The Multimodal Communication (MMC) module is then used to incorporate rich knowledge from the depth prediction. We denote this method as \"AFO (Trans + MMC)\". M1 is the self-training baseline with depth regression based on the DAFormer architecture. M2 adds the cross-domain mixing strategy and shows a competitive result of 76.0 mIoU. M3 is the model with the Depth-guided Contextual Filter, increasing the performance from 76.0 to 77.1 mIoU (+1.1 mIoU), which demonstrates the effectiveness of transferring the mixed training images to real-world layouts with the help of the depth information. M4 adds the multi-task framework that leverages the Channel Attention (CA) mechanism to fuse the discriminative depth feature into the visual feature. The segmentation result is increased by a small margin (+0.2 mIoU), which means CA can help the network adaptively learn to focus on or ignore information from the auxiliary task to some extent. M5 is our proposed depth-aware multi-task model with both the Depth-guided Contextual Filter and the Adaptive Feature Optimization (AFO) module. Compared to M3, M5 has an mIoU increase of +0.6, from 77.1 to 77.7, which shows the effectiveness of multi-modal feature optimization using transformers to facilitate contextual learning." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we introduce a new depth-aware scene adaptation framework that effectively leverages the guidance of depth to enhance data augmentation and contextual learning.
The proposed framework not only explicitly refines the cross-domain mixing by simulating real-world layouts with the guidance of the depth distributions of objects, but also introduces a cross-task encoder that adaptively optimizes the multi-task feature and focuses on the discriminative depth feature to help contextual learning. By integrating our depth-aware framework into existing self-training methods based on either transformers or CNNs, we achieve state-of-the-art performance on two widely used benchmarks and a significant improvement on small-scale categories. Extensive experimental results verify our motivation to transfer the training images to real-world layouts and demonstrate the effectiveness of our multi-task framework in improving scene adaptation performance." } ]
Scene segmentation via unsupervised domain adaptation (UDA) enables the transfer of knowledge acquired from source synthetic data to real-world target data, which largely reduces the need for manual pixel-level annotations in the target domain. To facilitate domain-invariant feature learning, existing methods typically mix data from both the source domain and target domain by simply copying and pasting the pixels. Such vanilla methods are usually sub-optimal since they do not take into account how well the mixed layouts correspond to real-world scenarios, which come with an inherent layout. We observe that semantic categories, such as sidewalks, buildings, and sky, display relatively consistent depth distributions and can be clearly distinguished in a depth map. Based on this observation, we propose a depth-aware framework to explicitly leverage depth estimation to mix the categories and to facilitate the two complementary tasks, i.e., segmentation and depth learning, in an end-to-end manner. In particular, the framework contains a Depth-guided Contextual Filter (DCF) for data augmentation and a cross-task encoder for contextual learning. DCF simulates the real-world layouts, while the cross-task encoder further adaptively fuses the complementary features of the two tasks. Moreover, since several public datasets do not provide depth annotation, we leverage an off-the-shelf depth estimation network to generate pseudo depth. Extensive experiments show that our proposed method, even with pseudo depth, achieves competitive performance on two widely-used benchmarks, i.e., 77.7 mIoU on GTA→Cityscapes and 69.3 mIoU on Synthia→Cityscapes.
Transferring to Real-World Layouts: A Depth-aware Framework for Scene Adaptation
[ { "figure_caption": "Copy the corresponding category region from the source image to the target image 6:", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure2. Source domain images x S and x T are mixed together, using the ground truth label y S . The mixed images are de-noised by our proposed Depth-guided Contextual Filter (DCF) and then trained by the network. We illustrate DCF with a set of practical sample. As illustrated, the unrealistic \"Building\" pixels from the source image are mixed pasted to the target image, leading to a noisy mixed sample. The proposed DCF removes these pixels and maintain mixed pixels of \"Traffic Sign\" and \"Pole\" shown in the white dotted boxes, enhancing the realism of cross-domain mixing. (Best viewed when zooming in.)", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure3. The proposed multi-task learning framework. The input images x F are mixed from the source image x S and target domain x T according to the depth (Please refer to Figure2). Then we are fed x S and x F into the high resolution encoder to generate high resolution predictions. To enhance multi-modal learning, the visual and depth feature created by the cross-task encoder are fused and fed into the proposed Adaptive Feature Optimization module (AFO) for multimodal communication. Finally, the multimodal communication via several transformer blocks incorporates and optimizes the fusion of depth information, improving the final visual predictions.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Qualitative results on GTA → Cityscapes. From left to right: Target Image, Ground Truth, the visual results predicted by HRDA, MIC and Ours. We deploy the white dash boxes to highlight different prediction parts. The proposed method could predict clear edges. More examples are shown in the supplementary materials.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Quantitative comparison with previous UDA methods on GTA → Cityscapes. We present pre-class IoU and mIoU. The best accuracy in every column is in bold. Our results are averaged over 3 random seeds. Further methods are shown in the supplementary materials.", "figure_data": "MethodRoad SW Build Wall Fence Pole TLTS Veg. 
Terrain SkyPR Rider Car Truck Bus Train Motor Bike mIoUAdvEnt [28]89.4 33.1 81.0 26.626.827.2 33.5 24.7 83.936.778.8 58.7 30.5 84.838.544.51.731.632.445.5MRNet [35]89.1 23.9 82.2 19.520.133.5 42.2 39.1 85.333.776.4 60.2 33.7 86.036.143.35.922.830.845.5APODA [66]85.6 32.8 79.0 29.525.526.8 34.6 19.9 83.740.677.9 59.2 28.3 84.634.649.28.032.639.645.9CBST [15]91.8 53.5 80.5 32.721.034.0 28.9 20.4 83.934.280.9 53.1 24.0 82.730.335.9 16.025.942.845.9PatchAlign [67]92.3 51.9 82.1 29.225.124.5 33.8 33.0 82.432.882.2 58.6 27.2 84.333.446.32.229.532.346.5MRKLD [16]91.0 55.4 80.0 33.721.437.3 32.9 24.5 85.034.180.8 57.7 24.6 84.127.830.1 26.926.042.347.1BL [39]91.0 44.7 84.2 34.627.630.2 36.0 36.0 85.043.683.0 58.6 31.6 83.335.349.73.328.835.648.5DT [68]90.6 44.7 84.8 34.328.731.6 35.0 37.6 84.743.385.3 57.0 31.5 83.842.648.51.930.439.049.2Uncertainty [17] 90.4 31.2 85.1 36.925.637.5 48.8 48.5 85.334.881.1 64.4 36.8 86.334.952.21.729.044.650.3DACS [20]89.9 39.7 87.9 30.739.538.5 46.4 52.8 88.044.088.8 67.2 35.8 84.545.750.20.027.334.052.1BAPA [69]94.4 61.0 88.0 26.839.938.3 46.1 55.3 87.846.189.4 68.8 40.0 90.260.459.00.045.154.257.4ProDA [36]87.8 56.0 79.7 46.344.845.6 53.5 53.5 88.645.282.1 70.7 39.2 88.845.559.41.048.956.457.5DAFormer [22]95.7 70.2 89.4 53.548.149.6 55.8 59.4 89.947.992.5 72.2 44.7 92.374.578.2 65.155.961.868.3CAMix [49]96.0 73.1 89.5 53.950.851.7 58.7 64.9 90.051.292.2 71.8 44.0 92.878.782.3 70.954.164.370.0HRDA [23]96.4 74.4 91.0 61.651.557.1 63.9 69.3 91.348.494.2 79.0 52.9 93.984.185.7 75.963.967.573.8MIC [1]97.4 80.1 91.7 61.256.959.7 66.0 71.3 91.751.494.3 79.8 56.1 94.685.490.3 80.464.568.575.9CorDA † [21]94.7 63.1 87.6 30.740.640.2 47.8 51.6 87.647.089.7 66.7 35.9 90.248.957.50.039.856.056.6FAFS † [70]93.4 60.7 88.0 43.532.140.3 54.3 53.0 88.244.590.0 69.5 35.8 88.734.153.9 41.351.754.758.8DBST † [70]94.3 60.0 87.9 50.543.042.6 50.8 51.3 88.045.989.7 68.9 41.8 88.045.863.80.050.055.858.8Ours †97.5 80.7 92.1 66.462.363.1 67.7 75.7 91.852.493.9 81.6 61.8 94.788.390.0 81.265.869.677.7", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Quantitative comparison with previous UDA methods on SYNTHIA → Cityscapes. We present pre-class IoU and mIoU. mIoU are averaged over 16 categories, respectively. The best accuracy in every column is in bold. Our results are averaged over 3 random seeds. Further methods are shown in the supplementary materials.", "figure_data": "MethodRoad SW Build Wall* Fence* Pole* TLTS Veg. 
SkyPR Rider Car Bus Motor Bike mIoUMaxSquare [43]77.4 34.0 78.75.60.227.75.89.8 80.7 83.2 58.5 20.5 74.1 32.111.029.939.3AdaptSegNet [12] 84.3 42.7 77.5---4.77.0 77.9 82.5 54.3 21.0 72.3 32.218.932.346.7AdvEnt [28]85.6 42.2 79.78.70.425.95.48.1 80.4 84.1 57.9 23.8 73.3 36.414.233.041.2ASA [74]91.2 48.5 80.43.70.321.75.55.2 79.5 83.6 56.4 21.0 80.3 36.220.032.941.7CBST [15]68.0 29.9 76.310.81.433.9 22.8 29.5 77.6 78.3 60.6 28.3 81.6 23.518.839.842.6CLAN [13]81.3 37.0 80.1---16.1 13.7 78.2 81.5 53.4 21.2 73.0 32.922.630.747.8SP-Adv [75]84.8 35.8 78.6---6.2 15.6 80.5 82.0 66.5 22.7 74.3 34.119.227.348.3Uncertainty [17]87.6 41.9 83.114.71.736.2 31.3 19.9 81.6 80.6 63.0 21.8 86.2 40.723.653.147.9APODA [66]86.4 41.3 79.3---22.6 17.3 80.3 81.6 56.9 21.0 84.1 49.124.645.753.1IAST [76]81.9 41.5 83.317.74.632.3 30.9 28.8 83.4 85.0 65.5 30.8 86.5 38.233.152.749.8DAFormer [22]84.5 40.7 88.441.56.550.0 55.0 54.6 86.0 89.8 73.2 48.2 87.2 53.253.961.760.9HRDA [23]85.2 47.7 88.849.54.857.2 65.7 60.9 85.3 92.9 79.4 52.8 89.0 64.763.964.965.8MIC [1]86.6 50.5 89.347.97.859.4 66.7 63.4 87.1 94.6 81.0 58.9 90.1 61.967.164.367.3DADA † [61]89.2 44.8 81.46.80.326.28.6 11.1 81.8 84.0 54.7 19.3 79.7 40.714.038.842.6CorDA † [21]93.3 61.6 85.319.65.137.8 36.6 42.8 84.9 90.4 69.7 41.8 85.6 38.432.653.955.0Ours †93.4 63.1 89.851.19.161.4 66.9 64.0 88.0 94.5 80.9 56.6 90.9 68.563.766.669.3", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study of different components of our proposed framework on GTA→Cityscapes. The results are averaged over 3 random seeds.", "figure_data": "Method ST Base. Naive Mix. DCF. AFO. (CA) AFO. (Trans + MMC) mIoU↑M1✓73.1M2✓✓76.0M3✓✓✓77.1M4✓✓✓✓77.3M5✓✓✓✓77.7", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Compatibility of the proposed method on different UDA methods and backbones on GTA→Cityscapes. Our results are averaged over 3 random seeds.", "figure_data": "BackboneUDA Methodw/ow/Diff.DeepLabV2 [2] DACS [20]52.1 56.2 +4.1DAFormer [22] DAFormer [22] 68.3 71.5 +3.2DAFormer [22] HRDA", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" } ]
Mu Chen; Zhedong Zheng; Yi Yang; ReLER; AAII
[ { "authors": "Lukas Hoyer; Dengxin Dai; Haoran Wang; Luc Van Gool", "journal": "CVPR", "ref_id": "b0", "title": "MIC: Masked image consistency for context-enhanced domain adaptation", "year": "2023" }, { "authors": "Liang-Chieh Chen; George Papandreou; Iasonas Kokkinos; Kevin Murphy; Alan L Yuille", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b1", "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "year": "2017" }, { "authors": "Jonathan Long; Evan Shelhamer; Trevor Darrell", "journal": "", "ref_id": "b2", "title": "Fully convolutional networks for semantic segmentation", "year": "2015" }, { "authors": "Vijay Badrinarayanan; Alex Kendall; Roberto Cipolla", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b3", "title": "Segnet: A deep convolutional encoder-decoder architecture for image segmentation", "year": "2017" }, { "authors": "Enze Xie; Wenhai Wang; Zhiding Yu; Anima Anandkumar; Jose M Alvarez; Ping Luo", "journal": "NeurIPS", "ref_id": "b4", "title": "Segformer: Simple and efficient design for semantic segmentation with transformers", "year": "2021" }, { "authors": "Bolei Zhou; Hang Zhao; Xavier Puig; Tete Xiao; Sanja Fidler; Adela Barriuso; Antonio Torralba", "journal": "International Journal of Computer Vision", "ref_id": "b5", "title": "Semantic understanding of scenes through the ade20k dataset", "year": "2019" }, { "authors": "Marius Cordts; Mohamed Omran; Sebastian Ramos; Timo Rehfeld; Markus Enzweiler; Rodrigo Benenson; Uwe Franke; Stefan Roth; Bernt Schiele", "journal": "", "ref_id": "b6", "title": "The cityscapes dataset for semantic urban scene understanding", "year": "2016" }, { "authors": "Vibhav Stephan R Richter; Stefan Vineet; Vladlen Roth; Koltun", "journal": "", "ref_id": "b7", "title": "Playing for data: Ground truth from computer games", "year": "2016" }, { "authors": "German Ros; Laura Sellart; Joanna Materzynska; David Vazquez; Antonio M Lopez", "journal": "", "ref_id": "b8", "title": "The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes", "year": "2016" }, { "authors": "Judy Hoffman; Eric Tzeng; Taesung Park; Jun-Yan Zhu; Phillip Isola; Kate Saenko; Alexei Efros; Trevor Darrell", "journal": "", "ref_id": "b9", "title": "Cycada: Cycle-consistent adversarial domain adaptation", "year": "2018" }, { "authors": "Zuxuan Wu; Xin Wang; Joseph E Gonzalez; Tom Goldstein; Larry S Davis", "journal": "", "ref_id": "b10", "title": "Ace: Adapting to changing environments for semantic segmentation", "year": "2019" }, { "authors": "Yi-Hsuan Tsai; Wei-Chih Hung; Samuel Schulter; Kihyuk Sohn; Ming-Hsuan Yang; Manmohan Chandraker", "journal": "", "ref_id": "b11", "title": "Learning to adapt structured output space for semantic segmentation", "year": "2018" }, { "authors": "Yawei Luo; Liang Zheng; Tao Guan; Junqing Yu; Yi Yang", "journal": "", "ref_id": "b12", "title": "Taking a closer look at domain shift: Category-level adversaries for semantics consistent domain adaptation", "year": "2019" }, { "authors": "Harsh Rangwani; K Sumukh; Mayank Aithal; Arihant Mishra; Venkatesh Jain; Babu Radhakrishnan", "journal": "", "ref_id": "b13", "title": "A closer look at smoothness in domain adversarial training", "year": "2022" }, { "authors": "Yang Zou; Zhiding Yu; Jinsong Kumar; Wang", "journal": "", "ref_id": "b14", "title": "Unsupervised domain adaptation for semantic segmentation via 
class-balanced self-training", "year": "2018" }, { "authors": "Yang Zou; Zhiding Yu; Xiaofeng Liu; Jinsong Kumar; Wang", "journal": "", "ref_id": "b15", "title": "Confidence regularized self-training", "year": "2019" }, { "authors": "Zhedong Zheng; Yi Yang", "journal": "International Journal of Computer Vision", "ref_id": "b16", "title": "Rectifying pseudo label learning via uncertainty estimation for domain adaptive semantic segmentation", "year": "2021" }, { "authors": "Yanchao Yang; Stefano Soatto", "journal": "", "ref_id": "b17", "title": "Fda: Fourier domain adaptation for semantic segmentation", "year": "2020" }, { "authors": "Ruihuang Li; Shuai Li; Chenhang He; Yabin Zhang; Xu Jia; Lei Zhang", "journal": "", "ref_id": "b18", "title": "Class-balanced pixel-level self-labeling for domain adaptive semantic segmentation", "year": "2022" }, { "authors": "Viktor Wilhelm Tranheden; Juliano Olsson; Lennart Pinto; Svensson", "journal": "", "ref_id": "b19", "title": "Dacs: Domain adaptation via crossdomain mixed sampling", "year": "2021" }, { "authors": "Qin Wang; Dengxin Dai; Lukas Hoyer; Luc Van Gool; Olga Fink", "journal": "", "ref_id": "b20", "title": "Domain adaptive semantic segmentation with self-supervised depth estimation", "year": "2021" }, { "authors": "Lukas Hoyer; Dengxin Dai; Luc Van Gool", "journal": "", "ref_id": "b21", "title": "Daformer: Improving network architectures and training strategies for domain-adaptive semantic segmentation", "year": "2022" }, { "authors": "Lukas Hoyer; Dengxin Dai; Luc Van Gool", "journal": "", "ref_id": "b22", "title": "Hrda: Contextaware high-resolution domain-adaptive semantic segmentation", "year": "2022" }, { "authors": "Binhui Xie; Shuang Li; Mingjia Li; Chi Harold Liu; Gao Huang; Guoren Wang", "journal": "", "ref_id": "b23", "title": "Sepico: Semantic-guided pixel contrast for domain adaptive semantic segmentation", "year": "2022" }, { "authors": "Mu Chen; Zhedong Zheng; Yi Yang; Tat-Seng Chua", "journal": "", "ref_id": "b24", "title": "Pipa: Pixel-and patch-wise self-supervised learning for domain adaptative semantic segmentation", "year": "2022" }, { "authors": "Zhengkai Jiang; Yuxi Li; Ceyuan Yang; Peng Gao; Yabiao Wang; Ying Tai; Chengjie Wang", "journal": "", "ref_id": "b25", "title": "Prototypical contrast adaptation for domain adaptive segmentation", "year": "2022" }, { "authors": "Wilhelm Viktor Olsson; Juliano Tranheden; Lennart Pinto; Svensson", "journal": "", "ref_id": "b26", "title": "Classmix: Segmentation-based data augmentation for semi-supervised learning", "year": "2021" }, { "authors": "Tuan-Hung Vu; Himalaya Jain; Maxime Bucher; Matthieu Cord; Patrick Pérez", "journal": "", "ref_id": "b27", "title": "Advent: Adversarial entropy minimization for domain adaptation in semantic segmentation", "year": "2019" }, { "authors": "Lin Chen; Huaian Chen; Zhixiang Wei; Xin Jin; Xiao Tan; Yi Jin; Enhong Chen", "journal": "", "ref_id": "b28", "title": "Reusing the task-specific classifier as a discriminator: Discriminator-free adversarial domain adaptation", "year": "2022" }, { "authors": "Xiangyu Yue; Yang Zhang; Sicheng Zhao; Alberto Sangiovanni-Vincentelli; Kurt Keutzer; Boqing Gong", "journal": "", "ref_id": "b29", "title": "Domain randomization and pyramid consistency: Simulationto-real generalization without accessing target domain data", "year": "2019" }, { "authors": "Yuyang Zhao; Zhun Zhong; Zhiming Luo; Gim ; Hee Lee; Nicu Sebe", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b30", 
"title": "Source-free open compound domain adaptation in semantic segmentation", "year": "2022" }, { "authors": "Xinyu Zhang; Dongdong Li; Zhigang Wang; Jian Wang; Errui Ding; Javen Qinfeng Shi; Zhaoxiang Zhang; Jingdong Wang", "journal": "", "ref_id": "b31", "title": "Implicit sample extension for unsupervised person re-identification", "year": "2022" }, { "authors": "Minghao Hao Feng; Jinming Chen; Dong Hu; Haifeng Shen; Deng Liu; Cai", "journal": "IEEE Transactions on Image Processing", "ref_id": "b32", "title": "Complementary pseudo labels for unsupervised domain adaptation on person re-identification", "year": "2021" }, { "authors": "Qing Lian; Fengmao Lv; Lixin Duan; Boqing Gong", "journal": "", "ref_id": "b33", "title": "Constructing self-motivated pyramid curriculums for crossdomain semantic segmentation: A non-adversarial approach", "year": "2019" }, { "authors": "Zhedong Zheng; Yi Yang", "journal": "", "ref_id": "b34", "title": "Unsupervised scene adaptation with memory regularization in vivo", "year": "2020" }, { "authors": "Pan Zhang; Bo Zhang; Ting Zhang; Dong Chen; Yong Wang; Fang Wen", "journal": "", "ref_id": "b35", "title": "Prototypical pseudo label denoising and target structure learning for domain adaptive semantic segmentation", "year": "2021" }, { "authors": "Qiming Zhang; Jing Zhang; Wei Liu; Dacheng Tao", "journal": "NeurIPS", "ref_id": "b36", "title": "Category anchor-guided unsupervised domain adaptation for semantic segmentation", "year": "2019" }, { "authors": "Myeongjin Kim; Hyeran Byun", "journal": "", "ref_id": "b37", "title": "Learning texture invariant representation for domain adaptation of semantic segmentation", "year": "2020" }, { "authors": "Yunsheng Li; Lu Yuan; Nuno Vasconcelos", "journal": "", "ref_id": "b38", "title": "Bidirectional learning for domain adaptation of semantic segmentation", "year": "2019" }, { "authors": "Haoran Wang; Tong Shen; Wei Zhang; Ling-Yu Duan; Tao Mei", "journal": "", "ref_id": "b39", "title": "Classes matter: A fine-grained adversarial approach to cross-domain semantic segmentation", "year": "2020" }, { "authors": "Lin Chen; Zhixiang Wei; Xin Jin; Huaian Chen; Miao Zheng; Kai Chen; Yi Jin", "journal": "", "ref_id": "b40", "title": "Deliberated domain bridging for domain adaptive semantic segmentation", "year": "2022" }, { "authors": "Yahao Liu; Jinhong Deng; Jiale Tao; Tong Chu; Lixin Duan; Wen Li", "journal": "", "ref_id": "b41", "title": "Undoing the damage of label shift for crossdomain semantic segmentation", "year": "2022" }, { "authors": "Minghao Chen; Hongyang Xue; Deng Cai", "journal": "", "ref_id": "b42", "title": "Domain adaptation for semantic segmentation with maximum squares loss", "year": "2019" }, { "authors": "Shaohua Guo; Qianyu Zhou; Ye Zhou; Qiqi Gu; Junshu Tang; Zhengyang Feng; Lizhuang Ma", "journal": "", "ref_id": "b43", "title": "Label-free regional consistency for image-to-image translation", "year": "2021" }, { "authors": "Jinyu Yang; Weizhi An; Sheng Wang; Xinliang Zhu; Chaochao Yan; Junzhou Huang", "journal": "", "ref_id": "b44", "title": "Label-driven reconstruction for domain adaptation in semantic segmentation", "year": "2020" }, { "authors": "Nikita Araslanov; Stefan Roth", "journal": "", "ref_id": "b45", "title": "Self-supervised augmentation consistency for adapting semantic segmentation", "year": "2021" }, { "authors": "Jaehoon Choi; Taekyung Kim; Changick Kim", "journal": "", "ref_id": "b46", "title": "Selfensembling with gan-based data augmentation for domain adaptation in semantic 
segmentation", "year": "2019" }, { "authors": "Luke Melas; -Kyriazi ; Arjun K Manrai", "journal": "", "ref_id": "b47", "title": "Pixmatch: Unsupervised domain adaptation via pixelwise consistency training", "year": "2021" }, { "authors": "Qianyu Zhou; Zhengyang Feng; Qiqi Gu; Jiangmiao Pang; Guangliang Cheng; Xuequan Lu; Jianping Shi; Lizhuang Ma", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b48", "title": "Context-aware mixup for domain adaptive semantic segmentation", "year": "2022" }, { "authors": "Pritish Behnam Gholami; Ognjen Sahu; Konstantinos Rudovic; Vladimir Bousmalis; Pavlovic", "journal": "IEEE Transactions on Image Processing", "ref_id": "b49", "title": "Unsupervised multitarget domain adaptation: An information theoretic approach", "year": "2020" }, { "authors": "Antoine Saporta; Tuan-Hung Vu; Matthieu Cord; Patrick Pérez", "journal": "", "ref_id": "b50", "title": "Multi-target adversarial frameworks for domain adaptation in semantic segmentation", "year": "2021" }, { "authors": "Seunghun Lee; Wonhyeok Choi; Changjae Kim; Minwoo Choi; Sunghoon Im", "journal": "", "ref_id": "b51", "title": "Adas: A direct adaptation strategy for multi-target domain adaptive semantic segmentation", "year": "2022" }, { "authors": "Zhenyu Zhang; Zhen Cui; Chunyan Xu; Zequn Jie; Xiang Li; Jian Yang", "journal": "", "ref_id": "b52", "title": "Joint task-recursive learning for semantic segmentation and depth estimation", "year": "2018" }, { "authors": "Dan Xu; Wanli Ouyang; Xiaogang Wang; Nicu Sebe", "journal": "", "ref_id": "b53", "title": "Padnet: Multi-tasks guided prediction-and-distillation network for simultaneous depth estimation and scene parsing", "year": "2018" }, { "authors": "Menelaos Kanakis; David Bruggemann; Suman Saha; Stamatios Georgoulis; Anton Obukhov; Luc Van Gool", "journal": "", "ref_id": "b54", "title": "Reparameterizing convolutions for incremental multi-task learning without task interference", "year": "2020" }, { "authors": "Simon Vandenhende; Stamatios Georgoulis; Luc Van Gool", "journal": "", "ref_id": "b55", "title": "Mti-net: Multi-scale task interaction networks for multi-task learning", "year": "2020" }, { "authors": "Trevor Standley; Amir Zamir; Dawn Chen; Leonidas Guibas; Jitendra Malik; Silvio Savarese", "journal": "", "ref_id": "b56", "title": "Which tasks should be learned together in multi-task learning?", "year": "2020" }, { "authors": "Zhenyu Zhang; Zhen Cui; Chunyan Xu; Yan Yan; Nicu Sebe; Jian Yang", "journal": "", "ref_id": "b57", "title": "Pattern-affinitive propagation across depth, surface normal and semantic segmentation", "year": "2019" }, { "authors": "Yaxiong Wang; Yunchao Wei; Xueming Qian; Li Zhu; Yi Yang", "journal": "IEEE", "ref_id": "b58", "title": "Ainet: Association implantation for superpixel segmentation", "year": "2021" }, { "authors": "Kuan-Hui Lee; German Ros; Jie Li; Adrien Gaidon", "journal": "", "ref_id": "b59", "title": "Spigan: Privileged adversarial learning from simulation", "year": "2018" }, { "authors": "Tuan-Hung Vu; Himalaya Jain; Maxime Bucher; Matthieu Cord; Patrick Pérez", "journal": "", "ref_id": "b60", "title": "Dada: Depth-aware domain adaptation in semantic segmentation", "year": "2019" }, { "authors": "Yuhua Chen; Wen Li; Xiaoran Chen; Luc Van Gool", "journal": "", "ref_id": "b61", "title": "Learning semantic segmentation from synthetic data: A geometrically guided input-output adaptation approach", "year": "2019" }, { "authors": "Suman Saha; Anton Obukhov; Pani Danda; 
Menelaos Paudel; Yuhua Kanakis; Stamatios Chen; Luc Georgoulis; Van Gool", "journal": "", "ref_id": "b62", "title": "Learning to relate depth and semantics for unsupervised domain adaptation", "year": "2021" }, { "authors": "Quanliang Wu; Huajun Liu", "journal": "NeurIPS", "ref_id": "b63", "title": "Unsupervised domain adaptation for semantic segmentation using depth distribution", "year": "2022" }, { "authors": "Clément Godard; Oisin Mac Aodha; Michael Firman; Gabriel J Brostow", "journal": "", "ref_id": "b64", "title": "Digging into self-supervised monocular depth estimation", "year": "2019" }, { "authors": "Jihan Yang; Ruijia Xu; Ruiyu Li; Xiaojuan Qi; Xiaoyong Shen; Guanbin Li; Liang Lin", "journal": "", "ref_id": "b65", "title": "An adversarial perturbation oriented domain adaptation approach for semantic segmentation", "year": "2020" }, { "authors": "Yi-Hsuan Tsai; Kihyuk Sohn; Samuel Schulter; Manmohan Chandraker", "journal": "", "ref_id": "b66", "title": "Domain adaptation for structured output via discriminative patch representations", "year": "2019" }, { "authors": "Zhonghao Wang; Mo Yu; Yunchao Wei; Rogerio Feris; Jinjun Xiong; Wen-Mei Hwu; Thomas S Huang; Honghui Shi", "journal": "", "ref_id": "b67", "title": "Differential treatment for stuff and things: A simple unsupervised domain adaptation method for semantic segmentation", "year": "2020" }, { "authors": "Yahao Liu; Jinhong Deng; Xinchen Gao; Wen Li; Lixin Duan", "journal": "", "ref_id": "b68", "title": "Bapa-net: Boundary adaptation and prototype alignment for cross-domain semantic segmentation", "year": "2021" }, { "authors": "Adriano Cardace; Luca De Luigi; Pierluigi Zama Ramirez; Samuele Salti; Luigi Di; Stefano ", "journal": "", "ref_id": "b69", "title": "Plugging self-supervised monocular depth into unsupervised domain adaptation for semantic segmentation", "year": "2022" }, { "authors": "Iro Laina; Christian Rupprecht; Vasileios Belagiannis; Federico Tombari; Nassir Navab", "journal": "", "ref_id": "b70", "title": "Deeper depth prediction with fully convolutional residual networks", "year": "2016" }, { "authors": "Christos Sakaridis; Dengxin Dai; Simon Hecker; Luc Van Gool", "journal": "", "ref_id": "b71", "title": "Model adaptation with synthetic and real data for semantic dense foggy scene understanding", "year": "2018" }, { "authors": "Adam Paszke; Sam Gross; Soumith Chintala; Gregory Chanan; Edward Yang; Zachary Devito; Zeming Lin; Alban Desmaison; Luca Antiga; Adam Lerer", "journal": "", "ref_id": "b72", "title": "Automatic differentiation in pytorch", "year": "2017" }, { "authors": "Wei Zhou; Yukang Wang; Jiajia Chu; Jiehua Yang; Xiang Bai; Yongchao Xu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b73", "title": "Affinity space adaptation for semantic segmentation across domains", "year": "2020" }, { "authors": "Yuhu Shan; Chee Meng Chew; Wen Feng; Lu ", "journal": "Neurocomputing", "ref_id": "b74", "title": "Semanticaware short path adversarial training for cross-domain semantic segmentation", "year": "2020" }, { "authors": "Ke Mei; Chuang Zhu; Jiaqi Zou; Shanghang Zhang", "journal": "", "ref_id": "b75", "title": "Instance adaptive self-training for unsupervised domain adaptation", "year": "2020" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b76", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Jie Hu; Li Shen; Gang Sun", "journal": "", "ref_id": "b77", "title": "Squeeze-and-excitation networks", 
"year": "2018" } ]
[ { "formula_coordinates": [ 3, 60.49, 628.9, 119.11, 12.2 ], "formula_id": "formula_0", "formula_text": "x S 1 , y S 1 , z S 1 , . . . , x S n , y S n , z S n" }, { "formula_coordinates": [ 3, 98.37, 702.62, 91.25, 12.2 ], "formula_id": "formula_1", "formula_text": "x T 1 , z T 1 , . . . , x T m , z T m" }, { "formula_coordinates": [ 3, 314.62, 114.1, 230.74, 69.88 ], "formula_id": "formula_2", "formula_text": "(x T , z T ∼ X T , Z T ). Semantic network F θ . 1: Initialize network parameters θ randomly. 2: for iteration = 1 to n do 3: ŷT ← F θ x T , Generate pseudo label 4:" }, { "formula_coordinates": [ 3, 335.46, 233.65, 144.32, 10.31 ], "formula_id": "formula_3", "formula_text": "x M ← M ⊙ x S + (1 -M) ⊙ x T ," }, { "formula_coordinates": [ 3, 310.63, 341.25, 234.48, 82.83 ], "formula_id": "formula_4", "formula_text": "ŷF ← M DCF ⊙ y S + 1 -M DCF ⊙ ŷT , Generate the filtered training samples with new DCF mask x F ← M DCF ⊙ x S + 1 -M DCF ⊙ x T , 11: Compute predictions ȳS ← argmax F θ x S , ȳF ← argmax F θ x F , 12:" }, { "formula_coordinates": [ 4, 136.36, 500.29, 150.67, 11.72 ], "formula_id": "formula_5", "formula_text": "ŷT = F θ x T .(1)" }, { "formula_coordinates": [ 4, 94.58, 654.84, 192.45, 11.03 ], "formula_id": "formula_6", "formula_text": "x M ix = M ⊙ x S + (1 -M) ⊙ x T ,(2)" }, { "formula_coordinates": [ 4, 94.72, 671.29, 192.31, 11.03 ], "formula_id": "formula_7", "formula_text": "ŷMix = M ⊙ y S + (1 -M) ⊙ ŷT ,(3)" }, { "formula_coordinates": [ 4, 339.42, 346.53, 206.36, 11.03 ], "formula_id": "formula_8", "formula_text": "x F = M DCF ⊙ x S + 1 -M DCF ⊙ x T ,(4)" }, { "formula_coordinates": [ 4, 339.56, 362.98, 206.22, 11.03 ], "formula_id": "formula_9", "formula_text": "ŷF = M DCF ⊙ y S + 1 -M DCF ⊙ ŷT .(5)" }, { "formula_coordinates": [ 5, 98.5, 416.74, 188.53, 12.69 ], "formula_id": "formula_10", "formula_text": "L S hr x S , y S = E -y S log ȳS hr ,(6)" }, { "formula_coordinates": [ 5, 96.8, 433.19, 190.23, 12.69 ], "formula_id": "formula_11", "formula_text": "L F hr x F , y F = E -ŷ F log ȳF hr ,(7)" }, { "formula_coordinates": [ 5, 98.26, 702.12, 188.77, 12.69 ], "formula_id": "formula_12", "formula_text": "f in f use = CONCAT f in vis , f in depth ,(8)" }, { "formula_coordinates": [ 5, 373.92, 395.62, 171.86, 12.69 ], "formula_id": "formula_13", "formula_text": "f out f use = W T rans f in f use ,(9)" }, { "formula_coordinates": [ 5, 373.83, 475.69, 171.95, 12.69 ], "formula_id": "formula_14", "formula_text": "γ = σ W Conv ⊗ f out f use ,(10)" }, { "formula_coordinates": [ 5, 391.52, 593.7, 154.26, 12.69 ], "formula_id": "formula_15", "formula_text": "f out vis = f out vis ⊙ γ,(11)" }, { "formula_coordinates": [ 5, 384.53, 612.18, 161.25, 12.69 ], "formula_id": "formula_16", "formula_text": "f out depth = f out depth ⊙ γ,(12)" }, { "formula_coordinates": [ 6, 95.84, 378.4, 191.19, 12.69 ], "formula_id": "formula_17", "formula_text": "L S vis x S , y S = E -y S log ȳS vis ,(13)" }, { "formula_coordinates": [ 6, 94.56, 404.5, 188.32, 12.69 ], "formula_id": "formula_18", "formula_text": "L F vis x F , y F = E -ŷ F log ȳF vis . 
(14" }, { "formula_coordinates": [ 6, 282.88, 406.89, 4.15, 8.64 ], "formula_id": "formula_19", "formula_text": ")" }, { "formula_coordinates": [ 6, 90.79, 459.08, 196.24, 12.69 ], "formula_id": "formula_20", "formula_text": "L S depth z S = E berHu zS -z S ,(15)" }, { "formula_coordinates": [ 6, 99.52, 531.72, 187.51, 41.88 ], "formula_id": "formula_21", "formula_text": "Hu (e z ) = |e z | , |e z | ≤ H (ez) 2 +H 2 2H , |e z | > H H = 0.2 max (|e z |) ,(16)" }, { "formula_coordinates": [ 6, 63.94, 631.28, 223.09, 12.69 ], "formula_id": "formula_22", "formula_text": "L = L S hr + L S vis + λ depth L S depth + L F hr + L F vis ,(17)" } ]
2023-11-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b4", "b11", "b20", "b26", "b15", "b6", "b3", "b37", "b0", "b28", "b34" ], "table_ref": [], "text": "Machine learning models that are trained on personal data may discriminate against groups with sensitive attributes. Broadly speaking, there are three major paradigms to address this problem. The first paradigm assumes fairness can be measured. Then, the minimization of unfairness metrics is integrated in the empirical risk minimization as a multiobjective optimization problem (Aghaei, Azizi, and Vayanos 2019;Berk et al. 2017). The second paradigm assumes that discrimination arises from the use of protected (i.e., sensitive) attributes and those correlated to them. Therefore, removing sensitive information from the input data can support learning fair models (Creager et al. 2019). The third paradigm builds on the assumption that discrimination arises from biased labeling processes (e.g., through biased domain knowledge or biased human feedback). Corresponding approaches aim at identifying and correcting label bias (Jiang and Nachum 2019), such as the adaptive sensitive reweighting of instances (Krasanakis et al. 2018).\nThese paradigms do not deal directly with the issue that, by definition, minority groups are smaller than the majority. The effects of under-represented data samples in the learn-ing process are 'overridden' by the prevalence of data samples from the majority group. The under-representation negatively affects the sensitivity of the fairness metrics and can hide undesirable correlations between attributes in the minority group. That is, leading to a representation bias.\nWe propose a reweighting scheme to mitigate predictive quality issues arising from the imbalance between sensitive groups. We do so by mapping the data into a latent space where the data distribution becomes non-discriminatory with respect to the sensitive attribute. Simultaneously, the empirical risk for the classification task at hand is minimized. Our method addresses representation bias by weighting the samples from the majority group. It aims to maintain the class-wise discriminatory information of the data samples from the majority group that are further away from the minority group, but downplay their importance. Hence, the majority and minority groups become similar in distribution and almost non-discriminatory in classification. We use the critic of a Wasserstein Generative Adversarial Network (WGAN) with gradient penalty (Gulrajani et al. 2017) to approximate distances between samples from the minority and reweighted majority groups in the latent space.\nThe rationale for our method is that if subgroups are sufficiently represented in a non-discriminatory way, bias in prediction would be substantially reduced, if not eliminated (Chai and Wang 2022). Reweighting instances has been adopted in methods for learning from imbalanced datasets (Bao et al. 2020;Zhang et al. 2019), which focus on optimizing the performance under a class imbalance, without considering representation bias. Our method is different from existing adversarial methods (Adel et al. 2019;Madras et al. 
2018;Wadsworth, Vera, and Piech 2018) in exploiting the competition between the reweighing component and the discriminator of the GAN framework, as an additional discriminator has been generally used to decorrelate feature embeddings from sensitive information.\nWe perform experiments on different datasets and compare with four state-of-the-art fairness-aware methods. Our method outperforms its competitors in mitigating bias while maintaining high prediction quality, as demonstrated by the experimental evaluation of image and tabular benchmark datasets. Hence, our method inherently addresses fairness as well as prediction quality issues that might arise from learning on imbalanced datasets with respect to sensitive groups.\nWe summarize our contribution as follows: (1) We formulate a novel data transformation and sample-based reweighting method for mitigating representation bias related to sensitive groups in classification tasks. (2) We show theoretically that by closing the Wasserstein distance gap between sensitive groups in the latent space during training, our reweighting approach leads to predictions that adhere to demographic parity. (3) We provide a thorough evaluation of the proposed technique on image and tabular benchmark datasets and show the viability of our approach with respect to robustness to fairness, accuracy and label noise. Code is available at https://anonymous.4open.science/r/wasserstein reweight-46E6/." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b0", "b28", "b34", "b9", "b23", "b6", "b26", "b15", "b3", "b19" ], "table_ref": [], "text": "Adversarial methods Existing adversarial fairness methods (Adel et al. 2019;Madras et al. 2018;Wadsworth, Vera, and Piech 2018) use in an in-processing fashion a discriminator to decorrelate the embeddings and the sensitive attribute. The authors of (Choi et al. 2019) and (Kim et al. 2019) propose to minimize mutual information between the biased labels and the embedding through adversarial training. These works disentangle the sensitive attribute in the latent space, yet they do not consider the under-representation of sensitive groups. Our work considers representation bias in the decorrelation process by reweighting to align the distributions of the sensitive groups instead of only adjusting the encoder.\nReweighting methods Fairness with Adaptive Weights (Chai and Wang 2022) also constrains the sum of weights among sensitive groups to be equal, assigning weights to a sample based on its misclassification likelihood. Adaptive sensitive reweighting to mitigate bias (Krasanakis et al. 2018) assigns weights to samples based on their alignment with the unobserved true labeling. As highlighted example, Adversarial reweighting for domain adaptation (Gu et al. 2021) aims to align the distributions of the source and target domains, yet it deals with the domain adaptation problem. In our work, we extend the concept of reweighting based on the Wasserstein distance to the fairness domain.\nImbalanced classification There are two main imbalanced classification methods: resampling and cost-sensitive learning. Resampling methods achieve balance between class groups by oversampling the group with a small size (the minority group in fairness settings) or undersampling the group with a large size (the majority group in fairness settings) or both. For instance, (Bao et al. 
2020) carries out classification using clustering centers in latent space to balance among the groups, which is equivalent to undersampling all groups. Cost-sensitive learning assigns higher weights to samples from groups with small sizes during training such that the costs of misclassifying these samples are higher than that from groups with large size. There are various methods on such weighting schemes. For example, in (Huang et al. 2019), the authors balance the representations of groups by constraining the embedding to keep intercluster margins both within and between classes. Note that these works deal with class imbalance while our work focuses on imbalance regarding sensitive attributes." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "Throughout this work, we consider binary classifiers that produce estimations ŷ ∈ {0, 1} for a given a dataset D\n= {(x 1 , y 1 ), . . . , (x n , y n )|x i ∈ X ⊆ R d , y i ∈ Y = {0, 1}}\n, where x i represents vectors of attributes, and y i is the target label of data instance i. Let the first component of x i describe the sensitive attribute s i = x i,1 ∈ {0, 1}. The values of the sensitive attributes s i distinguish between the majority group having n p many samples and the minority (i.e., sensitive or under-represented) group having n u many samples. Without loss of generality, we assume that ∀i :\n1 ⩽ i ⩽ n p ⇒ s i = 1.∀i : n p + 1 ⩽ i ⩽ n ⇒ s i = 0." }, { "figure_ref": [], "heading": "Fairness notions", "publication_ref": [ "b35", "b22", "b17", "b10" ], "table_ref": [], "text": "Disparate treatment (Zafar et al. 2017) occurs when the classifier makes different predictions on individuals from different sensitive groups when the input features are identical. To mitigate it, the classifier should achieve calibration across the sensitive groups: P (ŷ|x, s) = P (ŷ|x). Disparate impact (Kamiran and Calders 2016) evaluates the difference in positive outcome rate between groups and is eliminated when the predictive outcome ŷ is independent of s: P (ŷ|s = 0) = P (ŷ|s = 1). Nevertheless, eliminating disparate impact does not ensure a fair classifier. Since the sample distribution among sensitive groups is not naturally even, the classifier might focus on the majority group while ignoring decisions on the minority group. Furthermore, even if zero disparate impact is achieved, we might sacrifice the classifier's performance since statistical features of different sensitive groups usually vary. Disparate mistreatment (Hardt, Price, and Srebro 2016) occurs when the misclassification rates (false positives and false negatives) of different sensitive groups are different. In this case, the measurement of disparate mistreatment requires labeled data. Earlier works, including (Chouldechova 2016), state that there is usually tension among the disparate mistreatment criteria. Disparate FPR (false positive rate) and Disparate FNR (false negative rate) are commonly used to reduce disparate mistreatment: P (ŷ ̸ = y|y = 1, s) = P (ŷ ̸ = y|y = 1) and P (ŷ ̸ = y|y = 0, s) = P (ŷ ̸ = y|y = 0)." }, { "figure_ref": [], "heading": "Wasserstein distance methods", "publication_ref": [ "b36", "b15" ], "table_ref": [], "text": "Definition The Wasserstein distance between two distributions µ and ν is defined by\nW (µ, ν) = min π∈Π E (x,x ′ )∼π [∥x -x ′ ∥ p ],\nwhere Π is the set of couplings of µ and ν, i.e., Π = {π| π(x, x ′ )dx ′ = µ(x), π(x, x ′ )dx = ν(x ′ )}, and p ⩾ 1. In the later sections of this work, p = 2. 
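As a concrete illustration (not part of the cited methods), the discrete case with uniform weights can be computed exactly as a small linear program over couplings; the sketch below uses numpy/scipy and is purely illustrative.
import numpy as np
from scipy.optimize import linprog

def wasserstein_p(x, y, p=2):
    # Exact p-Wasserstein distance between two empirical 1-D samples with uniform weights,
    # obtained by solving the optimal-transport linear program over couplings pi.
    n, m = len(x), len(y)
    cost = np.abs(x[:, None] - y[None, :]) ** p
    a_eq, b_eq = [], []
    for i in range(n):                                # row marginals: sum_j pi_ij = 1/n
        row = np.zeros((n, m)); row[i, :] = 1.0
        a_eq.append(row.ravel()); b_eq.append(1.0 / n)
    for j in range(m):                                # column marginals: sum_i pi_ij = 1/m
        col = np.zeros((n, m)); col[:, j] = 1.0
        a_eq.append(col.ravel()); b_eq.append(1.0 / m)
    res = linprog(cost.ravel(), A_eq=np.array(a_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method='highs')
    return res.fun ** (1.0 / p)

print(wasserstein_p(np.array([0.0, 1.0]), np.array([2.0, 3.0])))   # -> 2.0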
Following the Kantorovich-Rubinstein duality, we have the dual form of the Wasserstein distance,
W (µ, ν) = max ∥f ∥ L ⩽1 E x∼µ [f (x)] -E x ′ ∼ν [f (x ′ )],
where the maximization is over all 1-Lipschitz functions f : R d → R.
Fairness-aware classification The Wasserstein distance, also known as the Optimal Transport (OT) distance, is a metric in the space of measures with finite moments that can be used to evaluate how different two distributions are from one another. A valuable application of the properties of this metric is the computation of the barycenter of two distributions. Such a technique has been leveraged in fairness-aware classification methods (Zehlike, Hacker, and Wiedemann 2020;Jiang et al. 2019) to enforce statistical parity. Notably, Wasserstein Fair Classification (WFC) (Jiang et al. 2019) quantile-matches the predictions of the sensitive group to the predictions of the barycenter of all groups. Fairness with Continuous Optimal Transport (Chiappa and Pacchiano 2021) introduces a stochastic-gradient fairness method based on a dual formulation of continuous OT instead of discrete OT to improve performance.
Generative methods Wasserstein GANs (WGANs) (Arjovsky, Chintala, and Bottou 2017) are based on minimizing the Wasserstein distance between a real and a generated distribution, using weight clipping to enforce a Lipschitz constraint on the critic and improving the performance of plain GANs. WGAN with Gradient Penalty (WGAN-GP) (Gulrajani et al. 2017) relaxes the Lipschitz constraint, following the fact that functions are 1-Lipschitz if their gradients are of norm at most 1 everywhere.
Figure 1: Architecture of our approach. The arrows show the computational flow for the minority (resp. majority) group in the classification task (e.g., predicting whether a person in the image is wearing a hat). Representation bias is indicated by blue and red triangles. Both minority and majority groups are mapped onto a latent space by the feature extractor. Then, majority group instances are reweighted to match the minority group distribution, aiming to decrease the distance with respect to the sensitive attribute." }, { "figure_ref": [], "heading": "Our Adversarial Reweighting Approach Problem formulation", "publication_ref": [], "table_ref": [], "text": "Consider a feature extractor F ϕ : X → Z ⊆ R k that maps raw data from the dataset D into a latent feature space. The transformation function can be viewed as an embedding component for a more effective comparison of instances in latent space. A binary classifier C θ : Z → Y with parameters θ maps the results of the transformation F ϕ (x) to a binary label ŷ ∈ Y = {0, 1}. For simplicity, we demonstrate our method in the scenario with a binary sensitive attribute. However, it is straightforward to extend our method to handle a multi-categorical sensitive attribute or multiple sensitive attributes (see Appendices).
As part of our problem definition, we indicate the training objective that minimizes the weighted empirical risk as:
min θ n i=1 w i L(y i , (C θ • F ϕ )(x i )), with w i ⩾ 0 (1)
with L representing the cross-entropy loss. The overall pipeline we seek to develop is illustrated in Figure 1. The feature extractor is optional and may not be needed given a low dimensionality of the data.
If all training samples receive the same weights, the classifier will tend to focus more on the majority group, leading to representation bias.
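For concreteness, Equation (1) amounts to a per-sample weighted cross-entropy; a minimal PyTorch-style sketch with illustrative names (not the released implementation) is:
import torch
import torch.nn.functional as F

def weighted_risk(logits, labels, weights):
    # Eq. (1): weighted empirical risk with a per-sample cross-entropy loss.
    per_sample = F.cross_entropy(logits, labels, reduction='none')
    return (weights * per_sample).sum()
In this setting the weights of minority samples are fixed to 1 while the majority weights are learned, as described next.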
We seek to maintain the weights for samples from the minority group (i.e., ∀i : s i = 0 ⇒ w i = 1) while lowering the weights for samples from the majority group, such that the sum of weights is the same for both groups:\nnp i=1 w i = n u (2)\nTo avoid information loss by assigning zero weights to some samples from the majority group, we introduce a regularization constraint to our risk minimization term:\nnp i=1 (w i - n u n p ) 2 ⩽ T n u(3)\nThe sum is minimal (namely zero) if ∀i : w i = nu np . Thus, by adjusting the value of T we can balance between similarity and dissimilarity of the weights of samples from the majority group.\nTogether, Equations ( 1), (2), and (3) constrain the problem space. On their own, however, they do not fully account for within and between sample group differences, thus not always improving group-based fairness metrics. The problem is now defined to seek a weighting scheme that fulfills the Equations ( 1), (2), and (3) while mitigating the representation bias in a robust way." }, { "figure_ref": [], "heading": "Adversarial reweighting", "publication_ref": [], "table_ref": [], "text": "The goal of our weighting scheme is to determine weights such that the majority and minority group weighted distributions become similar and, hence, the classifier is less prone to biased and unfair predictions. Our adversarial learning of data weights in the majority group targets to pay more attention to samples in the majority group that are closer to the minority group during the training, without completely loosing the information contained in other samples of the majority group. We measure the similarity of weighted distributions by the Wasserstein distance in the latent space for the reason that approximating the Wasserstein distance in the latent space is computationally less demanding in a lowdimensional space.\nIn the following, we first show that enforcing a small Wasserstein distance in the latent space ensures small distance in the prediction space. Then, we discuss the detailed adversarial reweighting model.\nTheoretical proof of enforcing demographic parity We show that enforcing the Wasserstein distance, indicated as W (•, •), being small in the latent space enforces it to be small in the prediction space as well. Proposition 1. Given two measures µ and ν over a metric space (Z, d Z ) and a K-Lipschitz function\nC : (Z, d Z ) → (Y, d Y ), we have that W (C # µ, C # ν) ≤ K • W (µ, ν)\nWhere C # µ is the push-forward measure along the function C. For details of proof, please refer to Appendices.\nNote that, since we deal with classifiers over a finite dataset, the K-Lipschitz condition for a (binary) classifier C: Z → {0, 1} amounts to asking that for every z and\nz ′ such that C(z) ̸ = C(z ′ ) we have that 1 K ≤ d Z (z, z ′\n) because the set {0, 1} is endowed with the discrete metric. Given that we consider only finite datasets, we can always find such a K. Note that this result is only valid for a given dataset and it does not generalize unless we assume that the condition\n1 K ≤ d Z (z, z ′ ) for every z and z ′ such that C(z) ̸ = C(z ′ )\nholds true for the new data as well.\nThus, we have that if the Wasserstein distance is close to 0 in the latent space, it will be close to 0 in the prediction space. Note that W (C # µ, C # ν) = 0 means that C # µ = C # ν and this implies demographic parity. Indeed, defining the distribution ζ := 1 2 µ + 1 2 ν describes the probability of being sampled from either the majority or the minority groups. 
Then, since µ and ν are discrete, we have that \nC # ζ = 1 2 C # µ + 1 2 C # ν. Therefore, C # µ = C # ν implies that the probability of C(z) = 1 is irrespective of\nW (µ, ν) ≈ max θ D (E z∼µ [D(z; θ D )] -E z ′ ∼ν [D(z ′ ; θ D )]) (4)\nWe define the (weighted) empirical distributions of the minority group P U and the majority group P P (w) using the Dirac delta function δ(•) as:\nP U = 1 n u n i=np+1 δ(F (x i )),(5)\nP P (w) = 1 n u np i=1 w i δ(F (x i )), with np i=1 w i = n u (6)\nThen, we optimize the weights by minimizing the Wasserstein distance between the minority and reweighted majority distributions, whereby Equations ( 2) and (3) define the solution space for the weights W = {w : w = (w 1 , w 2 , ..., w np ) T , w i ⩾ 0,\nnp i=1 w i = n u , np i=1 (w i - nu np ) 2 ⩽ T n u }: min w∈W W (P U , P P (w))(7)\nBecause of Proposition 1, we know that such minimization contributes to reducing the disparity between majority and minority groups.\nIf f is a measurable function and µ = α i δ(x i ) a discrete distribution, we have that f # µ = α i δ(f (x i )). Hence, combining Equations ( 4) and ( 7) results into a minmax problem, yields:\nmin w∈W max θ D np i=1 w i D(z p i ; θ D ) - nu i=1 D(z u i ; θ D )(8)\nIn Equation ( 8), the discriminator is trained to maximize the average of its outputs on the minority and majority group; adversarially, the weights for samples from the majority group are learned to minimize the (reweighted) average of the outputs of the discriminator. As a result, the samples from the majority group with smaller discriminator outputs (closer to the minority group) will be assigned higher weights. Therefore, defining the reweighted crossentropy loss on the (reweighted) data distribution in Equation (1) mitigates the representation bias regarding the minority groups." }, { "figure_ref": [], "heading": "Training algorithm", "publication_ref": [ "b15" ], "table_ref": [], "text": "To train the feature extractor F ϕ and the classifier network C θ , the network parameters (ϕ, θ) and learn the weights w with D are updated by fixing others. We alternately train the following two steps.\nUpdating ϕ and θ while fixing w and θ D . ϕ and θ are updated to minimize the loss in Equation ( 1) for S steps batch-wise while w and θ D are fixed.\nUpdating w and θ D while fixing ϕ and θ. Embeddings of training data on both majority and minority groups are acquired through the feature extractor F while ϕ and θ are fixed. w in Equation ( 8) is learned: Equation ( 8) is a minmax optimization problem, the weights w and the parameters θ D of the discriminator could be optimized alternatively. We could first fix w i = nu np for all i and optimize θ D to maximize the objective function in Equation (8) using the gradient penalty technique, as in WGAN-GP (Gulrajani et al. 2017). Then, fixing the discriminator, we optimize w. We denote\nd i = D(F θ (x i ); θ D ) and d = (d 1 , d 2 , ...d np ) T .\nThe optimization problem for w becomes a constrained least squares problem:\nmin w d T w, s.t.w i ⩾ 0, np i=1 w i = n u , np i=1 (w i - n u n p ) 2 ⩽ T n u (9)" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b6", "b0", "b30" ], "table_ref": [], "text": "We evaluate the performance of our reweighting approach on three benchmark datasets comparing it against eight methods using four different metrics: Accuracy, Disparate Impact, Disparate FPR and Disparate FPR. Four fairness-agnostic methods help us to better understand issues with unfairness. 
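As a concrete illustration of the critic side of Equation (8), the sketch below (ours, not the authors' release) shows the critic objective together with a WGAN-GP-style gradient penalty on the latent codes; it assumes detached 2-D latent batches of equal size and a critic D that returns one score per sample.

```python
import torch

def critic_objective(D, z_major, z_minor, w):
    """Equation (8): sum_i w_i D(z^p_i) - sum_j D(z^u_j).

    The critic D is trained to maximise this quantity, while the weights w are
    later chosen to minimise it, so majority samples scored as far from the
    minority distribution end up with lower weight.
    """
    return (w * D(z_major).squeeze(-1)).sum() - D(z_minor).squeeze(-1).sum()

def gradient_penalty(D, z_major, z_minor, lambda_gp=10.0):
    """WGAN-GP-style penalty keeping D approximately 1-Lipschitz.

    Assumes the two latent batches are detached and have the same size so that
    they can be interpolated sample-wise.
    """
    eps = torch.rand(z_major.size(0), 1, device=z_major.device)
    z_hat = (eps * z_major + (1 - eps) * z_minor).requires_grad_(True)
    grad = torch.autograd.grad(D(z_hat).sum(), z_hat, create_graph=True)[0]
    return lambda_gp * ((grad.norm(2, dim=1) - 1) ** 2).mean()
```

In practice the critic maximises the objective minus the penalty (equivalently, minimises its negation plus the penalty), after which the weights are updated as in Equation (9).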
Inspired by (Chai and Wang 2022) we compare against (1) Baseline (Neural Network (NN) based classification without fairness constraints); (2) Simple Reweighing: NN classification with assigning same balancing weights to samples the majority group; (3) Undersampling forms the training dataset by balancing group sizes via undersampling from the majority group; (4) Oversampling balances group sizes by repeating sampling from the minority group.\nWe choose four further competing methods mentioned in earlier sections: (5) Adaptive sensitive reweighting (ASR) reweights samples to balance target class occurrences. ( 6) Wasserstein fair classification (WFC) matches quantiles of the predictive distribution of the sensitive group to the allgroup Wasserstein barycenter. ( 7) The Fair Adversarial Discriminative (FAD) model (Adel et al. 2019) decorrelates the sensitive information from the embeddings by adjusting the encoding/feature extraction process using adversarial training. ( 8) Fairness with Adaptive Weights (FAW) constrains the sum of weights. They (i) are designed to address bias, (ii) follow conceptually similar strategies, and (iii) can also be flexibly applied to different modalities (tabular and images).\nOur networks are trained on an Intel(r) Core(TM) i7-8700 CPU. The networks in our experiments are built based on Pytorch (Paszke et al. 2019) and the optimization in Equation ( 9) is performed with the python package CVXPY (Diamond and Boyd 2016)." }, { "figure_ref": [], "heading": "Data and training details", "publication_ref": [ "b27", "b18", "b12", "b24", "b15", "b25", "b5" ], "table_ref": [], "text": "Image dataset We test three datasets based on CelebA (Liu et al. 2015) which contain 70% male images vs. 30% female images, 80% male images vs. 20% female images and 90% male images vs. 10% female images, respectively. We use three different distributions for the sensitive attribute to analyze in which imbalance situation our method is more suitable. We maintain the class imbalance in the three datasets constant, namely 70% not wearing a hat and 30% wearing a hat. For more details on CelebA, please refer to Appendices. The classification task is to identify whether the person in the picture is wearing a hat.\nFor the feature extractor F , we apply ResNet-18 (He et al. 2016) architecture, pre-trained on ImageNet (Deng et al. 2009), without the last fully-connected layer for simplicity. For the feature extractor F ϕ and classifier C θ , we use the stocastic gradient descent (SGD) algorithm (Shamir and Zhang 2012) with a momentum of 0.9 to update ϕ and θ. For the discriminator D, we use a similar architecture as the one in (Gulrajani et al. 2017) with three fully connected layers of 512, 256 and 128, 64, and 1 node, respectively; and without the last sigmoid function. We apply the Adam algorithm (Kingma and Ba 2014) to update θ D with a learning rate of 0.0001. Following (Gulrajani et al. 2017), we adjust the learning rate η by η = 0.01 (1+10p) -0.75 , where p is the training progress linearly changing from 0 to 1. We set the batch size to n p 700, 800, 900, and n u 300, 200, and 100, respectively. We update ϕ and θ for 4 steps and then update θ D for 1 step. 
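Since the weight update of Equation (9) is a small constrained least-squares problem solved with CVXPY, the following minimal sketch shows that step; the toy critic scores and the values n_u = 300 and T = 5 mirror settings reported in this paper, while the variable names are ours.

```python
import cvxpy as cp
import numpy as np

def solve_weights(d, n_u, T=5.0):
    """Equation (9): min_w d^T w  s.t.  w >= 0, sum(w) = n_u,
    sum((w - n_u/n_p)^2) <= T * n_u, where n_p = len(d) and d_i = D(F(x_i))."""
    n_p = len(d)
    w = cp.Variable(n_p)
    constraints = [w >= 0,
                   cp.sum(w) == n_u,
                   cp.sum_squares(w - n_u / n_p) <= T * n_u]
    problem = cp.Problem(cp.Minimize(d @ w), constraints)
    problem.solve()
    return w.value

d = np.random.randn(700)          # critic outputs for one majority-group batch
weights = solve_weights(d, n_u=300)
```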
Note that we choose a relatively high batch-size because estimating the true Wasserstein distance between distributions (a) male samples assigned with the lowest weights (b) male samples assigned with the highest weights Tabular dataset For experiments on tabular data, we use the Adult dataset (Kohavi 1996) and the UCI German Credit Risk dataset (Dua and Graff 2017) (For more details of the datasets, please refer to Appendices). Note that tabular datasets generally need more preprocessing than image datasets (Borisov et al. 2022). Note we are aware that gradient boosting would be more adequate for tabular data, but could not find any related approach for mitigating representation bias based on boosting. We normalize the continuous features and use one-hot encoding to deal with the categorical features. We train the model for 50 epochs with batchsizes of 1000 and 500 for the male and female samples. For more details of the experiment, please refer to Appendices." }, { "figure_ref": [ "fig_1" ], "heading": "Analysis results", "publication_ref": [], "table_ref": [ "tab_0", "tab_2", "tab_3", "tab_4" ], "text": "Performance comparison From Table 1 to Table 3, we can see that there is no sacrifice of accuracy with our approach. At the same time, the disparate impact concerning the sensitive attribute is mitigated, which is a crucial advantage of our optimization over related approaches. To better understand the performance of our method, we break down the accuracy concerning the male and female groups and show it in Tables 4 and5 in Appendices.\nWFC did not perform so well accuracy-wise. We think it is because it adjusts the prediction results by aligning the Wasserstein distance between the predictions over the sensitive groups, which could reduce the accuracy rate. While ASR is a strong competitor, it requires multiple times of training until a convergence of the neural networks is reached, making it more expensive than the other approaches. Similarly, this holds true for FAW. FAD fails to deal with the imbalance problem during the decorrelation, but it maintains a high accuracy rate. Figure 2 shows the samples from the male groups, which are assigned the lowest and highest weights, respectively. We see that male samples more distant from the female distribution are down-weighted, balancing and harmonizing the male and female distributions. Male samples closer to the female group are assigned relatively high weights, which provides further information for the classification task." }, { "figure_ref": [ "fig_2", "fig_6" ], "heading": "Embeddings and reweighting visualization", "publication_ref": [], "table_ref": [], "text": "We visualize the learned weights of the majority group vs. the minority group for the 70% male vs. 30% female dataset of CelebA. So we show the t-SNE embeddings of the original and reweighted embeddings in Figure 3. On the left, in Figure 3a, we can see that the male and female groups are not aligning well, leading to discrimination against the female group, as described in earlier sections. Our proposed reweighting method aligns the extracted embeddings of the female group to that of the male group, as shown in Figure 3b, before the classification step. These visualizations, of course, only partially explain our approach's success in dealing with the problem of representation bias concerning a sensitive attribute. In addition, note that the original Wasserstein distance between the two distributions before reweighting is 15.87, and after reweighting, the distance is 0.23. 
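As a pointer to how a plot in the spirit of Figure 3 can be produced from the extracted embeddings, the sketch below runs t-SNE on the latent features and colours points by the sensitive attribute, optionally scaling marker sizes by the learned weights; the array names and the size-based rendering of the reweighting are our assumptions.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_group_embeddings(z, s, w=None, title=""):
    """z: (n, k) latent features from the feature extractor;
    s: (n,) binary sensitive attribute; w: optional per-sample weights,
    drawn here as marker sizes to hint at the reweighted distribution."""
    z2 = TSNE(n_components=2, init="pca", random_state=0).fit_transform(z)
    sizes = 20 * w if w is not None else 5
    for value, label in [(1, "majority"), (0, "minority")]:
        m = s == value
        plt.scatter(z2[m, 0], z2[m, 1],
                    s=sizes[m] if w is not None else sizes,
                    alpha=0.5, label=label)
    plt.legend()
    plt.title(title)
    plt.show()
```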
For more details, please refer to Appendices.\nClassification with noisy label Since our method ensures the demographic parity of the predictions, it should not be sensitive to noise labeling (possible biased labeling). We apply half the noise corruption to the majority group and half to the minority group. We show the performance of our method and baseline (NN-based classification without fairness constraints) on accuracy and disparate impact under different ratios of noise corruption in Figure 4. The disparate impact (a) t-SNE before reweighting in the latent space (b) t-SNE after reweighting in the latent space Figure 3: t-SNE of extracted embeddings before and after reweighting of the instances in a setting of 70 % male and 30 % female samples remains low when the noise ratio changes. Moreover, again, we show no sacrifice of accuracy when applying our method. For more details of performance on other fairness metrics, please refer to Figure 6 in Appendices.\nSensitivity to the choice of hyper-parameters We have also analyzed the sensitivity of our method to the hyperparameter T mentioned earlier, in Figure 9 in Appendices, where the plots indicate that the performance of our adversarial reweighting scheme has low sensitivity to the choice of the hyper-parameter. In our experiments, we set T at 5. For analysis of other datasets, see Figure 10 in Appendices." }, { "figure_ref": [ "fig_3" ], "heading": "Ablation tests", "publication_ref": [], "table_ref": [], "text": "Ablation test for MMD and JS-divergence dissimilarity measures. We also conducted an ablation test for the Jenson-Shannon-divergence (JS) and maximum-mean discrepancy Note that we enforce Demographic Parity which is selection rate parity (which our method can implicitly mitigate) and we believe this is the reason why our method is so robust to noisy labeling.\n(MMD) instead of Wasserstein distance to learn the weights in our framework on the CelebA dataset, with 90% male and 10% female samples. In Figures 5 and8 (see Appendices for Figure 8), the performance of our method using the Wasserstein distance is better than JS and MMD. Wasserstein distance may be more suitable to measure their distance than the JS divergence when the distributions are more disjoint. MMD with kernels may be unable to capture very complex distances in high dimensional spaces compared to Wasserstein distance. The Wasserstein distance is better for accuracy and disparate impact but not necessarily better at Disparate FPR and Disparate FNR.\nAblation test for assigning weights to both groups or only to the minority group. Our method, by design, aims at reweighting the majority group to close the gap to the Wasserstein distance between samples with different sensitivity attribute values. Therefore, we test various reweighting schemes for both groups in an ablation test by alternatively assigning weights to one while treating another group with fixed weights -the Wasserstein distance between the two groups is 0.17 for the dataset 70% male and 30% female images from CelebA. We also test the assignment of weights to the minority group. In this situation, the Wasserstein distance after reweighting is 1.29. As we mentioned earlier, the Wasserstein distance of our method after reweighting is 0.23. This suggests that the difference between reweighting both and only the majority group is marginal, while the difference between reweighting only the majority and the minority group is significant. This is why we reweight the majority group in our method. 
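For reference, a small sketch of how the reported group metrics (disparate impact and the disparate FPR/FNR gaps) can be computed from binary predictions; this follows our reading of the definitions in the Background section and is not the authors' evaluation code.

```python
import numpy as np

def group_metrics(y_true, y_pred, s):
    """y_true, y_pred in {0, 1}; s in {0, 1} is the sensitive attribute."""
    def error_rate(group, label):
        m = (s == group) & (y_true == label)
        return (y_pred[m] != y_true[m]).mean() if m.any() else np.nan
    return {
        # Gap in positive prediction rates between the two groups.
        "disparate_impact": abs(y_pred[s == 1].mean() - y_pred[s == 0].mean()),
        # Gaps in false positive / false negative rates.
        "disparate_fpr": error_rate(1, 0) - error_rate(0, 0),
        "disparate_fnr": error_rate(1, 1) - error_rate(0, 1),
    }
```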
For more details, please refer to Appendices." }, { "figure_ref": [], "heading": "Discussion and Conclusion", "publication_ref": [ "b29", "b33" ], "table_ref": [], "text": "Our work is conceptually different from previous works on fairness-aware machine learning because it balances and harmonizes protected groups, as defined by sensitive attributes, minimizes the empirical risk, and achieves competitive predictive quality regarding accuracy, fairness, and robustness. Theoretical literature explores the inherent balance between fairness and utility, and numerous experiments have demonstrated the trade-off in practice. Nevertheless, these trade-off discussions are often based on a fixed distribution that does not align with our current situation. We argue that an ideal distribution exists where fairness and utility are in harmony. Our data reweighing combined with classifier training lets us move beyond a biased distribution and release the trade-off. One limitation in our work lies in that WGAN-GP might fail to approximate the Wasssertein distance correctly (Mallasto, Montúfar, and Gerolin 2019;Stanczuk et al. 2021). Based on our experiments, our method can mitigate the need to discard sensitive attributes or impose specific fairness constraints, thus avoiding the issue of determining critical hyperparameters such as regularization factors. Our approach still permits the inclusion of further regularizations or constraints during empirical risk minimization -though we have not yet found the need to explore such approaches." }, { "figure_ref": [], "heading": "Appendices Mathematical Details", "publication_ref": [], "table_ref": [], "text": "We prove that enforcing the Wasserstein distance, indicated as W (•, •), being small in the latent space enforces it to be small in the prediction space as well.\nProof. The proof goes as follows 1 :\nW (C # µ, C # ν) = sup f ∈Lip1(Y ) Y f dC # µ - Y f dC # ν (Kantorovich duality) = sup f ∈Lip1(Y ) Z f • Cdµ - Z f • Cdν (Property of the push-forward) = sup f ∈Lip1(Y ) K • Z f • C K dµ - Z f • C K dν ≤ sup h∈Lip1(Z) K • Z hdµ - Z hdν ( f • C K is 1-Lipschitz) = K • W (µ, ν)(4)\nWhere Lip 1 (Y ) indicates the set of 1-Lipschitz functions f : Y → R.\n1 The proof is almost verbatim the comment from the user Christian Bueno about statistical divergence change under a Lipschitz push-forward map at https: //mathoverflow.net/questions/314201/how-does-a-statisticaldivergence-change-under-a-lipschitz-push-forward-map." }, { "figure_ref": [], "heading": "Dataset Details", "publication_ref": [], "table_ref": [], "text": "CelebA CelebA contains 202,600 face images, each endowed with 40 attributes. When we try to construct the datasets from CelebA based on our needs, we maintain the class imbalance in the three datasets constant, which is 70% not wearing hats and 30% wearing hats, since class imbalance is not our priority in this paper." }, { "figure_ref": [], "heading": "Adult dataset", "publication_ref": [], "table_ref": [], "text": "The Adult dataset was drawn from the 1994 United States Census Bureau data. It used personal information such as education level and working hours per week to predict whether an individual earns more or less than $50,000 per year. The dataset is imbalanced -the instances made less than $50,000 constitute 25% of the dataset, and the instances made more than $50,000 include 75% of the dataset. As for gender, it is also imbalanced. 
We use age, years of education, capital gain, capital loss, hours-perweek, etc., as continuous features, and education level, gender, etc., as categorical features.\nUCI German Credit Risk dataset This dataset contains 1000 entries with 20 categorial/symbolic attributes. In this dataset, each entry represents a person who takes credit from a bank. Each person is classified as having good or bad credit risks according to their attributes." }, { "figure_ref": [], "heading": "Training Details and Results", "publication_ref": [], "table_ref": [], "text": "Details of WGAN-GP adaptation for our method In the original design of WGAN-GP of the training for one batch, the sizes of generated and original samples are equal for the Gradient Penalty as a regularizer to be applied. Here we need to make some changes: we control the sum of the majority group by the weights in one batch by the Wasserstein distance to let it be equal to the sample size of the minority group in one batch. Then we send them for further computation of the regularizer." }, { "figure_ref": [], "heading": "Repetition", "publication_ref": [], "table_ref": [], "text": "We repeat experiments on each dataset five times. Before each repetition, we randomly split data into training data and test data for the computation of the standard errors of the metrics.\nCelebA training For the CelebA dataset, since the original data is highly dimensional image data, we use ResNet18 and remove the last layer as a feature extractor. The dimension of the latent space is 512. Note that we use relatively large batch sizes during the training, and we control the sizes of the majority and minority constant during each batch. Papers mention that the large batch size could cause the potential failure of the approximation using neural networks to evaluate the distributions. Our training dataset has 10000 samples, and the test dataset has 2000 samples for all three datasets. From Figure 6, we can see that our method has its limitation regarding Disparate FPR and Disparate DNR." }, { "figure_ref": [ "fig_5" ], "heading": "Convergence of the training loss", "publication_ref": [ "b15", "b15" ], "table_ref": [ "tab_3", "tab_4" ], "text": "We try to show the stability of training of our method. Figure 7 shows our method's and baseline's convergence for the CelebA dataset with 90% male and 10% female. Breakdown of accuracy on sensitive groups We could see that the method sacrifices the accuracy of the majority group for the accuracy of the minority group in Table 4 and5 than the image datasets, we could avoid using feature extractors. However, we use one-hot encoding to deal with the From Figure 10, we can see that the metrics are not sensitive to the change of T for the Adult dataset. Figure 8 shows the sensitivity of different distance measures on the Adult dataset.\nFor the feature extractor F ϕ and the classifier C θ , we also apply fully connected layers. For the discriminator D, we use the same architecture in (Gulrajani et al. 2017), without the last sigmoid function. We apply SGD algorithm with a momentum of 0.9 to update ϕ and θ. The learning rate of θ is ten times that of ϕ. θ D is updated by Adam algorithm with a learning rate 0.0001. Following (Gulrajani et al. 2017), we adjust the learning rate η of θ by η = 0.01 (1+10p) -0.75 , where p is the training progress linearly changing from 0 to 1. 
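Read literally, the schedule above is η = 0.01 · (1 + 10p)^(-0.75) with p the training progress in [0, 1]; one way to apply it in PyTorch is a LambdaLR scheduler, sketched below with an assumed total step count and placeholder parameters.

```python
import torch

model_params = [torch.zeros(1, requires_grad=True)]      # placeholder parameters
optimizer = torch.optim.SGD(model_params, lr=0.01, momentum=0.9)

total_steps = 1000                                        # assumption for the sketch
# LambdaLR multiplies the base lr (0.01) by the returned factor, so the factor
# is (1 + 10p)^(-0.75) with p = step / total_steps.
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda step: (1.0 + 10.0 * step / total_steps) ** (-0.75))

for step in range(total_steps):
    # forward / backward / optimizer.step() would go here
    scheduler.step()
```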
We update ϕ and θ for 2 steps and then update θ D for 1 step.

Multi-categorical sensitive attribute situation It is straightforward to extend our method to handle a multi-categorical sensitive attribute, or multiple sensitive attributes, by using one subgroup as the reference group and reweighting the other subgroups in turn until demographic parity is reached. We demonstrate this on the Adult dataset, choosing race as the sensitive attribute; it is encoded as {'Amer-Indian-Eskimo': 0, 'Asian-Pac-Islander': 1, 'Black': 2, 'Other': 3, 'White': 4} in the dataset. We use 'Asian-Pac-Islander' as the reference subgroup and reweight samples from the other subgroups. We report the disparate impact between the subgroups 'White' and 'Black'. Before and after applying our method, the disparate impact is 15.1% and 1.7%, respectively, and the accuracy is 83.1% and 82.8%." } ]
The unequal representation of different groups in a sample population can lead to discrimination against minority groups when machine learning models make automated decisions. To address these issues, fairness-aware machine learning jointly optimizes two (or more) metrics aiming at predictive effectiveness and low unfairness. However, the inherent under-representation of minorities in the data makes the disparate treatment of subpopulations less noticeable and more difficult to address during learning. In this paper, we propose a novel adversarial reweighting method to address such representation bias. To balance the data distribution between the majority and the minority groups, our approach de-emphasizes samples from the majority group. To minimize empirical risk, our method prefers samples from the majority group that are close to the minority group, as evaluated by the Wasserstein distance. Our theoretical analysis shows the effectiveness of our adversarial reweighting approach. Experiments demonstrate that our approach mitigates bias without sacrificing classification accuracy, outperforming related state-of-the-art methods on image and tabular benchmark datasets.
Adversarial Reweighting Guided by Wasserstein Distance for Bias Mitigation
[ { "figure_caption": "the fact that z is sampled from the majority or minority groups.Adversarial reweighting model We approximate the computation of the Wasserstein distance by a neural network discriminator D using the gradient penalty technique ofWGAN-GP (Gulrajani et al. 2017):", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: Samples from the male group with the lowest and highest weights. Samples with the lowest weights tend to wear suits and have short hair, while samples with the highest weights tend to have longer hair.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Change of accuracy and disparate impact under different noise ratios on CelebA 90% male and 10% female. Note that we enforce Demographic Parity which is selection rate parity (which our method can implicitly mitigate) and we believe this is the reason why our method is so robust to noisy labeling.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Sensitivity of different distance measures on CelebA dataset with 90% male and 10% female", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6: Change of disparate FPR and disparate FNR under different noise ratios on CelebA 90% male and 10% female.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: convergence of training loss", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 8: sensitivity of different distance measures on Adult dataset", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10: Sensitivity of metrics to the change of T on Adult dataset", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Experiment Results on CelebA (a) Experimental results of classifier (Wearing Hat) on dataset (30% female and 70% male)", "figure_data": "methodsbaselinesimple methods reweighing undersampling oversamplingASRstate-of-the-art methods WFC FADFAWours ARAccuracy rate (%)95.1 (0.7)94.9 (0.6)95.3 (0.4)94.7 (0.9)93.5 (0.6) 93.6 (0.5) 95.0 (0.7)93.7 (0.6)94.7 (0.7)Disparate Impact (%)6.0 (0.8)6.0 (0.4)5.7 (0.2)4.9 (0.7)5.1 (0.7)5.3 (2.4)5.5 (2.4)4.7 (0.4)0.8 (0.5)Disparate FPR (%)-29.1 (1.4) -31.3 (9.2)-31.8 (8.1)-24.7 (7.5)-26.2 (8.1)4.5 (1.1)-25.7 (5.8) -13.7 (2.1) -17.0 (5.1)Disparate FNR (%)7.3 (2.9)8.1 (4.2)7.1 (3.6)7.9 (1.9)8.2 (1.8)7.6 (3.9)6.9 (1.2)10.0 (1.1)6.5 (1.9)(b) Experimental results of classifier (Wearing Hat) on the dataset (20% female and 80% male)methodsbaselinesimple methods reweighing undersampling oversamplingASRstate-of-the-art methods WFC FADFAWours ARAccuracy rate (%)95.3 (0.9)93.7 (0.5)95.0 (0.4)94.9 (1.6)93.1 (0.7) 93.3 (0.8) 95.0 (0.5)93.6 (0.8)95.3 (0.9)Disparate Impact (%)3.7 (0.7)3.9 (0.9)3.1 (0.3)3.4 (0.3)4.3 (0.9)5.1 (1.4)18.2 (1.4)3.8 (0.4)0.7 (0.4)Disparate FPR (%)-16.0 (1.7) -32.1 (7.3)-28.9 (4.1)-17.4 (5.0)-2.0 (4.4)4.0 (1.7)-21.0 (2.6) -19.7 (1.1) -22.2 (5.3)Disparate FNR (%)10.3 (2.1)16.6 (5.4)11.9 (3.7)10.9 (3.7)8.1 (0.8)-7.2 (2.1)10.5 (1.4)10.0 (1.1)9.0 (3.7)(c) Experimental results of classifier (Wearing Hat) on dataset (10% female and 90% 
male)methodsbaselinesimple methods reweighing undersampling oversamplingASRstate-of-the-art methods WFC FADFAWours ARAccuracy rate (%)95.0 (0.5)93.7 (0.7)94.2 (0.8)94.1 (1.4)94.5 (0.5) 92.3 (0.4) 94.6 (0.7)92.5 (0.7)95.3 (0.2)Disparate Impact (%)1.9 (0.6)3.3 (0.3)2.1 (0.3)1.5 (0.3)2.0 (0.4)7.9 (2.6)27.2 (1.0)1.9 (0.5)0.2 (0.3)Disparate FPR (%)", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Experimental results of classifier on Adult dataset (sensitive attribute is gender)", "figure_data": "methodsbaselinesimple methods reweighing undersampling oversamplingASRstate-of-the-art methods WFC FADFAWours ARAccuracy rate (%)83.1 (0.4) 82.5 (0.3)82.1 (0.3)84.7 (0.9)81.6 (0.3) 81.8 (0.5) 82.4 (0.5) 81.2 (0.6) 83.0 (0.1)Disparate Impact (%) 17.8 (0.3) 21.0 (0.4)18.7 (0.5)18.6 (0.4)0.4 (0.2)2.5 (1.0)5.7 (1.4)1.7 (0.4)1.3 (0.5)Disparate FPR (%)17.0 (1.0)2.3 (4.7)9.2 (1.3)8.4 (1.6)27.2 (4.5) -9.8 (0.7) -8.7 (1.8) -8.5 (2.4) -10.5 (1.1)Disparate FNR (%)6.1 (0.6)12.1 (3.5)4.2 (0.8)12.7 (0.7)2.3 (1.2) 22.4 (1.3) 3.2 (0.7)4.0 (1.7)7.2 (0.7)", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Experimental results of the classifier on German Credit dataset (sensitive attribute is sex)", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": ". breakdown of accuracy on 70% male and 30% female CelebA dataset Ablation test for assigning weights We try to assign weights only to the minority group. We could not close the Wasserstein distance gap and assign weights only to the majority group. Assigning weights to both groups could achieve similar results as our method. However, we might need an additional statistical test to claim so.", "figure_data": "methodsaccuracy (%) male group female group totalbaseline95.294.895.1our method 94.694.994.7", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "breakdown of accuracy on 90% male and 10% female CelebA dataset", "figure_data": "", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" } ]
Xuan Zhao; Simone Fabbrizzi; Paula Reyero Lobo; Siamak Ghodsi; Klaus Broelemann; Steffen Staab; Gjergji Kasneci
[ { "authors": "T Adel; I Valera; Z Ghahramani; A Weller", "journal": "", "ref_id": "b0", "title": "One-Network Adversarial Fairness", "year": "2019" }, { "authors": "S Aghaei; M J Azizi; P Vayanos", "journal": "", "ref_id": "b1", "title": "Learning Optimal and Fair Decision Trees for Non-Discriminative Decision-Making", "year": "2019" }, { "authors": "M Arjovsky; S Chintala; L Bottou", "journal": "", "ref_id": "b2", "title": "Wasserstein GAN", "year": "2017" }, { "authors": "F Bao; Y Deng; Y Kong; Z Ren; J Suo; Q Dai", "journal": "", "ref_id": "b3", "title": "Learning Deep Landmarks for Imbalanced Classification", "year": "2020" }, { "authors": "R Berk; H Heidari; S Jabbari; M Joseph; M Kearns; J Morgenstern; S Neel; A Roth", "journal": "", "ref_id": "b4", "title": "A Convex Framework for Fair Regression", "year": "2017" }, { "authors": "V Borisov; T Leemann; K Seßler; J Haug; M Pawelczyk; G Kasneci", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b5", "title": "Deep neural networks and tabular data: A survey", "year": "2022" }, { "authors": "J Chai; X Wang", "journal": "", "ref_id": "b6", "title": "Fairness with Adaptive Weights", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b7", "title": "", "year": "" }, { "authors": "S Chiappa; A Pacchiano", "journal": "", "ref_id": "b8", "title": "Fairness with Continuous Optimal Transport", "year": "2021" }, { "authors": "J Choi; C Gao; J C E Messou; J.-B Huang", "journal": "", "ref_id": "b9", "title": "Why Can't I Dance in the Mall? Learning to Mitigate Scene Bias in Action Recognition", "year": "2019" }, { "authors": "A Chouldechova", "journal": "", "ref_id": "b10", "title": "Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments", "year": "2016" }, { "authors": "E Creager; D Madras; J.-H Jacobsen; M A Weis; K Swersky; T Pitassi; R Zemel", "journal": "", "ref_id": "b11", "title": "Flexibly Fair Representation Learning by Disentanglement", "year": "2019" }, { "authors": "J Deng; W Dong; R Socher; L.-J Li; K Li; L Fei-Fei", "journal": "", "ref_id": "b12", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "S Diamond; S Boyd", "journal": "Journal of Machine Learning Research", "ref_id": "b13", "title": "CVXPY: A Pythonembedded modeling language for convex optimization", "year": "2016" }, { "authors": "D Dua; C Graff", "journal": "", "ref_id": "b14", "title": "UCI Machine Learning Repository", "year": "2017" }, { "authors": "X Gu; X Yu; Y Yang; J Sun; Z Xu; I Gulrajani; F Ahmed; M Arjovsky; V Dumoulin; A Courville", "journal": "", "ref_id": "b15", "title": "Adversarial Reweighting for Partial Domain Adaptation", "year": "2017" }, { "authors": "Red Hook", "journal": "Curran Associates Inc", "ref_id": "b16", "title": "", "year": "" }, { "authors": "M Hardt; E Price; N Srebro", "journal": "", "ref_id": "b17", "title": "Equality of Opportunity in Supervised Learning", "year": "2016" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "IEEE", "ref_id": "b18", "title": "Deep Residual Learning for Image Recognition", "year": "2016" }, { "authors": "C Huang; Y Li; C C Loy; X Tang", "journal": "", "ref_id": "b19", "title": "Deep Imbalanced Learning for Face Recognition and Attribute Prediction", "year": "2019" }, { "authors": "H Jiang; O Nachum", "journal": "", "ref_id": "b20", "title": "Identifying and Correcting Label Bias in Machine Learning", "year": "2019" }, { "authors": "R Jiang; A Pacchiano; T Stepleton; H 
Jiang; S Chiappa", "journal": "", "ref_id": "b21", "title": "Wasserstein Fair Classification", "year": "2019" }, { "authors": "F Kamiran; T Calders", "journal": "", "ref_id": "b22", "title": "Data Preprocessing Techniques for Classification without Discrimination", "year": "2016" }, { "authors": "B Kim; H Kim; K Kim; S Kim; J Kim", "journal": "", "ref_id": "b23", "title": "Learning Not to Learn: Training Deep Neural Networks with Biased Data", "year": "2019" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b24", "title": "Adam: A Method for Stochastic Optimization", "year": "2014" }, { "authors": "R Kohavi", "journal": "AAAI Press", "ref_id": "b25", "title": "Scaling up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid", "year": "1996" }, { "authors": "E Krasanakis; E Spyromitros-Xioufis; S Papadopoulos; Y Kompatsiaris", "journal": "International World Wide Web Conferences Steering Committee", "ref_id": "b26", "title": "Adaptive Sensitive Reweighting to Mitigate Bias in Fairness-aware Classification", "year": "2018" }, { "authors": "Z Liu; P Luo; X Wang; X Tang", "journal": "", "ref_id": "b27", "title": "Deep Learning Face Attributes in the Wild", "year": "2015" }, { "authors": "D Madras; E Creager; T Pitassi; R Zemel", "journal": "", "ref_id": "b28", "title": "Learning Adversarially Fair and Transferable Representations", "year": "2018" }, { "authors": "A Mallasto; G Montúfar; A Gerolin", "journal": "", "ref_id": "b29", "title": "How Well Do WGANs Estimate the Wasserstein Metric?", "year": "2019" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga; A Desmaison; A Kopf; E Yang; Z Devito; M Raison; A Tejani; S Chilamkurthy; B Steiner; L Fang; J Bai; S Chintala", "journal": "", "ref_id": "b30", "title": "PyTorch: An Imperative Style, High-Performance Deep Learning Library", "year": "2019" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b31", "title": "", "year": "" }, { "authors": "O Shamir; T Zhang", "journal": "", "ref_id": "b32", "title": "Stochastic Gradient Descent for Non-smooth Optimization: Convergence Results and Optimal Averaging Schemes", "year": "2012" }, { "authors": "J Stanczuk; C Etmann; L M Kreusser; C Schönlieb", "journal": "", "ref_id": "b33", "title": "Wasserstein GANs Work Because They Fail (to Approximate the Wasserstein Distance)", "year": "2021" }, { "authors": "C Wadsworth; F Vera; C Piech", "journal": "", "ref_id": "b34", "title": "Achieving Fairness through Adversarial Learning: An Application to Recidivism Prediction", "year": "2018" }, { "authors": "M B Zafar; I Valera; M G Rodriguez; K P Gummadi", "journal": "", "ref_id": "b35", "title": "Fairness Constraints: Mechanisms for Fair Classification", "year": "2017" }, { "authors": "M Zehlike; P Hacker; E Wiedemann", "journal": "Data Min. Knowl. Discov", "ref_id": "b36", "title": "Matching code and law: achieving algorithmic fairness with optimal transport", "year": "2020" }, { "authors": "C Zhang; K C Tan; H Li; G S Hong", "journal": "", "ref_id": "b37", "title": "A Cost-Sensitive Deep Belief Network for Imbalanced Classification", "year": "2019" } ]
[ { "formula_coordinates": [ 2, 319.5, 118.14, 238.5, 20.61 ], "formula_id": "formula_0", "formula_text": "= {(x 1 , y 1 ), . . . , (x n , y n )|x i ∈ X ⊆ R d , y i ∈ Y = {0, 1}}" }, { "formula_coordinates": [ 2, 336.77, 216.77, 215.31, 9.65 ], "formula_id": "formula_1", "formula_text": "1 ⩽ i ⩽ n p ⇒ s i = 1.∀i : n p + 1 ⩽ i ⩽ n ⇒ s i = 0." }, { "formula_coordinates": [ 2, 319.5, 575.86, 238.5, 20.91 ], "formula_id": "formula_2", "formula_text": "W (µ, ν) = min π∈Π E (x,x ′ )∼π [∥x -x ′ ∥ p ]," }, { "formula_coordinates": [ 2, 333.34, 642.75, 224.66, 12.15 ], "formula_id": "formula_3", "formula_text": "W (µ, ν) = max ∥f ∥ L ⩽1 E x∼µ [f (x)] -E x ′ ∼ν [f (x ′ )]," }, { "formula_coordinates": [ 3, 346.76, 75.84, 211.24, 30.32 ], "formula_id": "formula_4", "formula_text": "min θ n i=1 w i L(y i , (C θ • F ϕ )(x i )), with w i ⩾ 0 (1)" }, { "formula_coordinates": [ 3, 414.01, 241.36, 143.99, 31.4 ], "formula_id": "formula_5", "formula_text": "np i=1 w i = n u (2)" }, { "formula_coordinates": [ 3, 392.25, 319.71, 165.75, 31.4 ], "formula_id": "formula_6", "formula_text": "np i=1 (w i - n u n p ) 2 ⩽ T n u(3)" }, { "formula_coordinates": [ 4, 54, 115.83, 238.5, 38.28 ], "formula_id": "formula_7", "formula_text": "C : (Z, d Z ) → (Y, d Y ), we have that W (C # µ, C # ν) ≤ K • W (µ, ν)" }, { "formula_coordinates": [ 4, 54, 208.27, 238.5, 24.13 ], "formula_id": "formula_8", "formula_text": "z ′ such that C(z) ̸ = C(z ′ ) we have that 1 K ≤ d Z (z, z ′" }, { "formula_coordinates": [ 4, 55.2, 273.71, 237.3, 13.47 ], "formula_id": "formula_9", "formula_text": "1 K ≤ d Z (z, z ′ ) for every z and z ′ such that C(z) ̸ = C(z ′ )" }, { "formula_coordinates": [ 4, 54, 373.65, 238.5, 21.8 ], "formula_id": "formula_10", "formula_text": "C # ζ = 1 2 C # µ + 1 2 C # ν. Therefore, C # µ = C # ν implies that the probability of C(z) = 1 is irrespective of" }, { "formula_coordinates": [ 4, 61.91, 473.76, 230.59, 27.44 ], "formula_id": "formula_11", "formula_text": "W (µ, ν) ≈ max θ D (E z∼µ [D(z; θ D )] -E z ′ ∼ν [D(z ′ ; θ D )]) (4)" }, { "formula_coordinates": [ 4, 150.04, 541.35, 142.46, 30.32 ], "formula_id": "formula_12", "formula_text": "P U = 1 n u n i=np+1 δ(F (x i )),(5)" }, { "formula_coordinates": [ 4, 63.09, 576.51, 229.41, 31.4 ], "formula_id": "formula_13", "formula_text": "P P (w) = 1 n u np i=1 w i δ(F (x i )), with np i=1 w i = n u (6)" }, { "formula_coordinates": [ 4, 55.2, 655.73, 237.3, 51.19 ], "formula_id": "formula_14", "formula_text": "np i=1 w i = n u , np i=1 (w i - nu np ) 2 ⩽ T n u }: min w∈W W (P U , P P (w))(7)" }, { "formula_coordinates": [ 4, 336.01, 148.63, 221.99, 31.4 ], "formula_id": "formula_15", "formula_text": "min w∈W max θ D np i=1 w i D(z p i ; θ D ) - nu i=1 D(z u i ; θ D )(8)" }, { "formula_coordinates": [ 4, 364.96, 521.11, 193.04, 11.23 ], "formula_id": "formula_16", "formula_text": "d i = D(F θ (x i ); θ D ) and d = (d 1 , d 2 , ...d np ) T ." }, { "formula_coordinates": [ 4, 319.5, 570.14, 238.5, 40.99 ], "formula_id": "formula_17", "formula_text": "min w d T w, s.t.w i ⩾ 0, np i=1 w i = n u , np i=1 (w i - n u n p ) 2 ⩽ T n u (9)" }, { "formula_coordinates": [ 9, 54, 445.94, 254.51, 175.66 ], "formula_id": "formula_18", "formula_text": "W (C # µ, C # ν) = sup f ∈Lip1(Y ) Y f dC # µ - Y f dC # ν (Kantorovich duality) = sup f ∈Lip1(Y ) Z f • Cdµ - Z f • Cdν (Property of the push-forward) = sup f ∈Lip1(Y ) K • Z f • C K dµ - Z f • C K dν ≤ sup h∈Lip1(Z) K • Z hdµ - Z hdν ( f • C K is 1-Lipschitz) = K • W (µ, ν)(4)" } ]
10.1109/ACCESS.2020.3044858
2023-11-30
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b1", "b2", "b3", "b4", "b5", "b6", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b2" ], "table_ref": [], "text": "Deep Learning (DL) advances in computer vision Krizhevsky et al. [2012], Minaee et al. [2022], Redmon and Farhadi [2018] have been successfully applied to specialist domains such as medical imaging Esteva et al. [2021], improving performance in pathology detection from chest radiograph Irvin et al. [2019], Arias-Londono et al. [2020], finding malignant lesions from skin scans Liu et al. [2020] and predicting patient survival from whole slide images Srinidhi et al. [2021]. The success of DL in medical imaging motivates further investigation into feature understanding as these architectures suffer from their black-box nature, raising valid concerns by medical practitioners.\nInterpretability is generally the ability for a human to understand the reasons (i.e. features) behind the decision made by the system. Simple machine learning models, such as logistic regression or decision trees, are more easily interpretable though do not perform nearly as well as DCNNs with millions of parameters. Feature visualisation Reyes et al. [2020] is the current state of the art approach for DCNN interpretation. Feature visualisation techniques generate localisation maps, highlighting the pixels and regions in the input image used in making the prediction Saporta et al. [2022], Simonyan et al. [2014], Selvaraju et al. [2020].\nCascade Learning (CL) Marquez et al. [2018], which builds on the idea of the cascade correlation algorithm Fahlman and Lebiere [1990], is an alternative way of training a DCNN. This learning paradigm differs from traditional end-toend (E2E) learning, whereby all of the layers of the network are learned simultaneously, resulting in varied feature representations. Recent studies Du et al. [2019], Wang et al. [2022] demonstrate the superior performance of transferring CL features to downstream classification tasks. In this paper, we investigate the difference in feature representations considering localisation as a key metric for traditional E2E learning versus CL. We observe that CL does result in more localised features, considering several metrics and visualisation approaches, and these features appear to be more localised at every layer of the DCNN. We then take these findings one step further and consider whether the improved feature localisation results in superior object detection. Object detection frameworks train an effective bounding box regressor to classify and localise the object in an image or video Redmon et al. [2016], Redmon and Farhadi [2018]. In this work, we consider the association between visually localised features and the bounding box prediction. We seek to answer: does the superior localisation ability of CL further improve the ability of the model to predict the bounding box region of interest? 
We find that CL is promising and improves bounding box region of interest predictions in comparison to the widely adopted E2E training scheme.\nThe main contributions of this paper are as follows:\n• Our analysis via various feature visualisation techniques shows that traditional E2E training has a limited ability to localise discriminative features across the intermediary layers of a DCNN.\n• We demonstrate that using a layer-wise learning strategy, namely cascade learning, leads to an improvement in feature localisation.\n• Quantifying the degree of overlap between the binarized mask and the bounding box, for the Chest X-ray dataset, 86% images have more localised features, with CL showing a consistent improvement across every network layer.\n• We find the superior localisation ability leads to further improvement in predicting bounding box regions of interest. Our bounding box prediction via CL trained backbone leads to 2% improvement in mAP in object detection tasks.\n• We demonstrate that CL learns different features, with coarser features in early layers and finer features in later layers whereas end-to-end learned features have more evenly distributed granulometry across layers. " }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce our proposed methodology. Firstly, we describe our technical contribution. Secondly, we describe different techniques in feature visualisation. Thirdly, we briefly describe the YOLO framework Redmon et al.\n[2016] and how it performs bounding box prediction in object detection tasks. Lastly, we introduce our quantification metric and datasets used in our experiments." }, { "figure_ref": [], "heading": "Deep Cascade Learning in Feature Localisation", "publication_ref": [], "table_ref": [], "text": "One of the important technical contributions of our work is to train the deep neural network from scratch via CL, then perform feature visualisation at different layers and investigate the differences in the feature representations with respect to the labelled bounding box of interest. We perform an identical experimental setup for E2E-trained models. This is partially done by retraining classifiers tapped after every convolutional layer. Our experimental result suggests that E2E-trained models are not localised to the bounding box of interest. Section 2.2 details the feature visualisation methods we adopt in our experiments. Despite the methodology being straightforward, we make an important observation that CL produces high-quality visual explanations compared to identical architectures trained via E2E learning. Our localisation experiment quantitatively demonstrates that feature saliency generated by CL highly overlaps with the region of interest annotated by domain experts. Furthermore, we propose to use CL in DCNN training as an effective bounding box regressor. Our experimental result suggests that DCNN backbone trained via CL improves performance in object detection. Section 2.3 includes the methodology details of the bounding box prediction method." }, { "figure_ref": [], "heading": "Feature Visualisation", "publication_ref": [ "b3", "b10", "b11" ], "table_ref": [], "text": "Sometimes it is not sufficient to report and be satisfied with strong performance measures on general datasets when delivering care for patients Esteva et al. [2021]. It requires a deep understanding of which cases the model has made a good performance and which circumstances it fails. 
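To make the layer-wise training of Section 2.1 concrete, here is a simplified sketch of cascade learning in PyTorch: each convolutional block is optimised through a temporary auxiliary classifier while all previously trained blocks stay frozen. This is our reading of the scheme, not the authors' implementation, and the block list, auxiliary head and data loader are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cascade_train(blocks, num_classes, loader, epochs=1, device="cpu"):
    """Train a list of convolutional blocks one at a time (cascade learning)."""
    trained = []
    for block in blocks:
        prefix = nn.Sequential(*trained).to(device).eval()   # frozen earlier layers
        block = block.to(device)
        head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                             nn.LazyLinear(num_classes)).to(device)
        opt = torch.optim.SGD(list(block.parameters()) + list(head.parameters()),
                              lr=0.01, momentum=0.9)
        for _ in range(epochs):
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                with torch.no_grad():
                    z = prefix(x)                             # features of frozen prefix
                loss = F.cross_entropy(head(block(z)), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        for p in block.parameters():                          # freeze the trained block
            p.requires_grad_(False)
        trained.append(block)
    return nn.Sequential(*trained)
```

The auxiliary heads are discarded here for brevity, whereas the analysis in this paper retrains classifiers tapped after every convolutional layer to probe the features at each depth.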
Feature visualisation provides a visual explanation by plotting salient images showing the most contributing pixel location Simonyan et al. [2014], Selvaraju et al. [2020], or selecting image patches that are potentially interpretable by a model trained via perturbed images Ribeiro et al. [2016]." }, { "figure_ref": [], "heading": "Saliency Map", "publication_ref": [ "b10" ], "table_ref": [], "text": "Saliency map Simonyan et al. [2014] measures sensitivity for individual pixels, given an input image I on the final prediction. This is achieved by taking the gradient of the class score (S c ) with respect to the input image itself:\nw = ∂S c ∂I(1)\nThe result will give us a contribution map of the degree to which a pixel contributed to that class score. This gives us insight into what the network is focusing on with respect to the input image for each particular class prediction." }, { "figure_ref": [], "heading": "Grad-CAM", "publication_ref": [], "table_ref": [], "text": "The Grad-CAM Selvaraju et al.\n[2020] method generates a heat-map of the input pixels, telling us where the model is looking at to make a particular prediction. Grad-CAM considers how a change in a particular location i, j, in the activation map A k , creates a change in the class activation y c by computing this gradient (Equation 2). This is accumulated by summing the values over the entire activation map indexed by k to give α c k . The scalar α c k represents neuron importance for the k th feature map and class c. Finally, L Grad-CAM is computed using Equation 3, where Z denotes the total number of pixels in the feature map. Equation 3 accumulates the neuron importance over all the activation maps, followed by the ReLU non-linearity to remove the negative components. α c k < 0 implies that a change in A k will decrease prediction score y c , which should be avoided as those feature maps that improve the prediction are of interest Selvaraju et al. [2020], hence the ReLU: \nα c k = 1 Z i j ∂y c ∂A k ij (2) L c Grad-CAM = ReLU k α c k A k (3) 2.2.3 LIME Local Interpretable\nL BBox = S 2 i=0 B j=0 1 obj ij (x i -xi ) 2 + (y i -ŷi ) 2 + S 2 i=0 B j=0 1 obj ij √ w i -ŵi 2 + h i -ĥi 2 (4)\nwhere x, y, w, h represent the two-dimensional object's center coordinate, width and height, respectively. Final loss is calculated by iterating through all grids S and object bounding boxes B. " }, { "figure_ref": [], "heading": "Metrics", "publication_ref": [ "b18" ], "table_ref": [], "text": "To quantify the localisation ability, we use the Intercept Over Union (IOU) metric:\nIOU = area (B p ∩ B gt ) area (B p ∪ B gt )(5)\nwhere B p denotes the binarized saliency map. For the thresholding process, we use a fixed percentile instead of a constant value, ensuring a fair comparison. This results in binarized saliency maps that all have the same degree of pixel covering but are different in distribution. B gt denotes the binarized ground truth bounding box, where regions inside the box are True. To quantify the model's overall localisation ability, we define Localisation Accuracy by measuring the fraction of instances that satisfy IOU > 0.2. Note that the LIME framework explains the decision at the patch level. However, we are merely interested in part of the patch that overlaps with the bounding box. Therefore, we measure mainly the degree of overlap by counting the number of pixels that are inside the bounding box.\nFor the object detection task, we evaluate our model performance using mean Average Precision (mAP) Lin et al. 
[2014] and mean Intersection over Union (mIOU). We are using both single and multiple IOU thresholds to measure mAP. For a single IOU threshold, we select IOU = 0.5 and 0.75. For multiple IOU thresholds, we use the mean of 10 IOU thresholds, from 0.5 to 0.95 with step size 0.05." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b19", "b20" ], "table_ref": [], "text": "We show our method has improved localisation ability in both natural image and medical domains. Specifically, for the natural image domain, we choose Pascal VOC Everingham et al. [2010] which includes 11, 530 natural images in 20 classes and 27, 450 object ROI since multiple objects exist. For chest X-ray images we use the ChestX-ray8 Wang et al. [2017] dataset, where 987 chest X-ray images are provided with board certified medic annotations of the correct location of the anomaly." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "Next, we present experimental results on the two varied datasets of natural and medical images." }, { "figure_ref": [ "fig_0" ], "heading": "Feature Visualisation", "publication_ref": [ "b11" ], "table_ref": [], "text": "In Figure 1, we visualise the dominant features learned by the network across various layers using the Grad-CAM Selvaraju et al. [2020] saliency map. We observe a large gap between CL (top row) and E2E (bottom row) features on the chest X-ray data. CL features are often more visually localised with respect to the bounding box. Similar phenomena are observed by only visualising the gradient signals across various layers as illustrated in Figure 2. These results suggest that the gradient signal plays an important role in generating a qualitative visualisation. Next, we quantify this effect by considering the IOU and localisation accuracy over both datasets. " }, { "figure_ref": [], "heading": "Feature Localisation via Grad-CAM and Saliency Map", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Feature Localisation via LIME Framework", "publication_ref": [ "b10" ], "table_ref": [], "text": "In this section, we evaluate CL localisation performance using LIME Ribeiro et al. [2016]. LIME requires learning multiple simple learners which creates extra complexity. But it does not require gradient information, which differs from a gradient-based method such as Grad-CAM Selvaraju et al.\n[2020] and saliency map Simonyan et al. [2014]. In Figure 5, we show that CL produces meaningful features by measuring the degree of overlap between LIME output images (occluded area are treated as 0) and the bounding box. Figure 5 shows localisation performance comparing CL and E2E learning methods using the LIME test. CL learned features consistently outperform the E2E features in all layers, with the largest improvement found at the second layer.\nFigure 5: Localisation performance comparing CL and E2E learning method using LIME framework." }, { "figure_ref": [ "fig_6" ], "heading": "CL Improves Region Proposal", "publication_ref": [ "b16", "b2", "b2", "b2" ], "table_ref": [ "tab_1", "tab_1" ], "text": "We show that training a backbone network via CL improves region proposal in bounding box prediction. We adopt a network trained via CL as an effective bounding box regressor. We keep feature layers frozen and retrain one added convolutional layer to optimize the bounding box regression loss. The loss was first introduced in YOLOv1 framework Redmon et al. [2016]. 
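As a concrete reading of Equation (4), the following minimal PyTorch sketch implements the coordinate part of that loss for the added regression layer; the tensor layout (an S×S grid, B boxes per cell, last dimension x, y, w, h) is an assumption on our part.

```python
import torch

def bbox_coord_loss(pred, target, obj_mask):
    """Coordinate part of the YOLOv1-style loss in Equation (4).

    pred, target: (S, S, B, 4) tensors holding x, y, w, h per grid cell and box;
    obj_mask:     (S, S, B) indicator, 1 where a box is responsible for an object.
    """
    x, y, w, h = pred.unbind(-1)
    tx, ty, tw, th = target.unbind(-1)
    xy_term = (x - tx) ** 2 + (y - ty) ** 2
    wh_term = (torch.sqrt(w.clamp(min=0)) - torch.sqrt(tw.clamp(min=0))) ** 2 \
            + (torch.sqrt(h.clamp(min=0)) - torch.sqrt(th.clamp(min=0))) ** 2
    return (obj_mask * (xy_term + wh_term)).sum()
```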
The layerwise comparison results are shown in Figure 6. We found CL achieves the best overall quality of region proposal with the largest difference at layer 4 compared to an identical network trained via E2E. Notice that we are not directly competing with the state-of-the-art model, but we claim using the CL training scheme improves overall performance in object detection tasks against the widely adopted E2E scheme. We further investigate whether using a deep network backbone trained via CL could improve region proposal. Table 1 shows YOLOv3 performance on the Pascal dataset. We adopt CL to train the DarkNet-53 Redmon and Farhadi [2018] network backbone from scratch and compare against the E2E baseline training method. To implement the CL algorithm on the DarkNet-53 architecture, we split the whole structure into multiple sub-modules. Each sub-module consists of at least one complete residual connection block. When the network architecture is fixed, the size of the sub-module is determined by the total number of splits. In the YOLOv3 framework, the output is taken from three locations among intermediate features and passed to the feature pyramid network (FPN) to improve detection for different object sizes.\nIn our experiment, we split the network into three sub-modules and denote it as CL 3 . This result, along with other splitting strategies to train with CL, are reported in Table 1. We found using pre-trained features from the middle layer of the network yields the largest difference between CL and E2E. The best performance using CL feature up to middle layer improves 2% in mAP .5 metric compared to reusing E2E feature at same layer. When increasing the number of splits, the performance starts to decrease. This is possibly caused by overfitting since the network's learning ability is limited due to sub-module size shrinks by having larger quantities of splits.\nmAP .5:.95:.05 mAP Redmon and Farhadi [2018]. All CL and E2E are using DarkNet-53 Redmon and Farhadi [2018] architecture. The lower subscript denotes the total number of splits in CL training." }, { "figure_ref": [ "fig_0", "fig_7", "fig_8" ], "heading": "Quantifying Coarse-to-Fine Features Representation", "publication_ref": [ "b21" ], "table_ref": [], "text": "Granulometry analysis Dougherty et al. [1989] on the generated saliency maps quantitatively demonstrates the coarseto-fine feature representation. The higher granulometry represents the feature activation (indicated as the irregular red patch in Figure 1) are coarser, finer detail is learned if granulometry has a low value. Figure 7 quantitatively analyze using granulometry to measure CL and E2E feature representation. We conclude that CL is learning coarser feature representation at early layers and finer at later layers. On the contrary, E2E has more evenly distributed granulometry across the layers. These results strengthen the argument for CL learning optimal feature representation as we demonstrate that early layers in the network are learning coarser features while later layers are learning more fine-grained features. In Figure 8, we visualise instances with a relatively small bounding box. By visualising the instance and its corresponding binary mask, we observed that CL is able to generate a localised heatmap for the small object of interest with a highquality image. On the other hand, E2E tend to generate salient images that are activated in a large region, result in an ambiguous localisation. 
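The exact granulometry measure behind Figure 7 is not spelled out here, so purely as an illustration of the idea, the sketch below computes a granulometry-style curve by applying greyscale morphological openings of increasing size to a saliency map and tracking how much activation mass survives; treat it as an approximation rather than the authors' procedure.

```python
import numpy as np
from scipy import ndimage

def granulometry_curve(saliency, sizes=range(1, 16, 2)):
    """Fraction of saliency mass surviving greyscale openings of growing size.

    A curve that decays quickly indicates fine-grained activations, while a
    curve that stays high indicates coarse, blob-like activations.
    """
    total = saliency.sum() + 1e-8
    return np.array([ndimage.grey_opening(saliency, size=(s, s)).sum() / total
                     for s in sizes])
```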
" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work we investigate the localisation of features across learning paradigms. Our systematic evaluation across various feature visualisation methods and datasets show that E2E training, which has been widely considered by the machine learning community, is limited to localising discriminative features across multiple network layers. We found network trained via CL is more localised to the region of interest annotated by domain experts. We show that CL's superior localisation ability leads to an improvement in object detection tasks. " } ]
Lack of interpretability of deep convolutional neural networks (DCNN) is a well-known problem particularly in the medical domain as clinicians want trustworthy automated decisions. One way to improve trust is to demonstrate the localisation of feature representations with respect to expertlabeled regions of interest. In this work, we investigate the localisation of features learned via two varied learning paradigms and demonstrate the superiority of one learning approach with respect to localisation. Our analysis on medical and natural datasets shows that the traditional end-to-end (E2E) learning strategy has a limited ability to localise discriminative features across multiple network layers. We show that a layer-wise learning strategy, namely cascade learning (CL), results in more localised features. Considering localisation accuracy, we not only show that CL outperforms E2E but that it is a promising method of predicting regions. On the YOLO object detection framework, our best result shows that CL outperforms the E2E scheme by 2% in mAP.
CASCADE LEARNING LOCALISES DISCRIMINANT FEATURES IN VISUAL SCENE CLASSIFICATION
[ { "figure_caption": "Figure 1 :1Figure 1: Grad-CAM saliency map visualisation at different layers of the neural network. Results on a (top) cascadetrained network versus (bottom) E2E training. By comparing the features to the red rectangle denoting the bounding box, CL achieves better localisation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2: Saliency map generated via CL in comparison to the same network which is E2E trained. Left column: Original image and its corresponding label; Middle: CL; Right: E2E. The heatmap was generated after post-processing using a Gaussian filter.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Model-agnostic Explanations (LIME)Ribeiro et al. [2016] generates an occluded version of the image as a visual explanation. This is achieved by randomly perturbing the image patch (allocated by super-pixel) and training simple classifiers (e.g. ridge regression) using prediction score from the model to be explained Ribeiro et al.[2016]. By performing the feature selection on a simple classier, super-pixels that contribute largely to final predictions are found. Figure3shows an illustration of the LIME framework Ribeiro et al.[2016].", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Illustration of the LIME framework. X in is input image; y is confidence score output by model. They introduce a binarized \"intermediate representation\" z to represent the existence of certain image patches.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 44Figure4shows the scatter plots of IOU computed over (a) 2000 Pascal images and (b) 987 chest X-ray images. Each data point corresponds to an image, with the IOU of the network trained with E2E presented on the x-axis, and CL on the y-axis. The majority of the images have more localised features (higher IOU) with CL as opposed to E2E, with 74% on the Pascal dataset and 86% with the Chest x-ray dataset. The localisation accuracy is further plotted over the layers of the network in Figure4(c) and (d) demonstrating the superiority in feature localisation for networks trained via CL.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 4: a-b): Scattering plot of IOU between the manual annotation and saliency maps. The experiment was conducted on both natural images (Pascal) and medical datasets (Chest X-ray). c-d): IOU between the Grad-CAM and bounding boxes, over varied learning method layers.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: CL achieves better performance with the largest difference in mAP at layer 4. For each layer, we re-train CL in three different random seeds. The shaded area denotes the standard deviation.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Granulometry measure comparing CL and E2E learning methods on different layers (layer 1 -3 as the inconsistency observed in early layers). 
a) Pascal; b) chest X-ray", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Visualisation of data instance via Grad-CAM generated from two methods. left: Original image and its bounding box; middle: CL ; right: E2E. The IOU value associated with each binarized saliency map is shown on the right side.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 99Figure9shows scatter plots of IOU and localisation accuracy. Quantifying the localisation ability of CL via Saliency MapSimonyan et al. [2014]. Align with the result in Figure4, CL learning scheme consistently improves localisation over E2E learning scheme.", "figure_data": "", "figure_id": "fig_9", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 9: a-b): Scattering plot of IOU between the manual annotation and saliency maps. The experiment was conducted on both natural image (pascal) and medical dataset (chest X-ray) c-d): IOU between the saliency maps output and bounding boxes, over different layers.", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1010Figure10provides qualitative analysis by visualising bounding-box prediction for some randomly selected images. We notice CL is able to predict a precise bounding box location. On the other hand, E2E fails to generate the bounding box (e.g. second row, image of jar) or generate imprecise location (e.g. first row, image of two cats).", "figure_data": "", "figure_id": "fig_11", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Qualitative Results. YOLO prediction on random test sample from Pascal dataset. Comparing CL and E2E training scheme to train network backbone.", "figure_data": "", "figure_id": "fig_12", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "To predict coordinate information, it is normal to modify the output layer or add extra detection headRedmon et al. [2016],Redmon and Farhadi [2018]. However, the network backbone remains the same. In order to implement CL for the object detection task, we consider a two-step approach. First, pre-training the backbone network via CL on image classification tasks. Second, perform bounding-box regression via Equation4. In our experiments, we consider two backbone network structures, which are a simple 6 layer DCNN model and a 53 layer DarkNetRedmon et al. [2016].", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparing CL and E2E training method to train network backbone in YOLOv3 framework", "figure_data": ".5mAP .75CL 334.64±0.3167.29±0.28 31.77±0.4CL 733.87±0.166.56±0.15 30.27±0.3CL 23 33.34±0.2865.68±0.23 29.63±0.62E2E 33.41±0.1865.3±0.2430.07±0.43", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" } ]
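The LIME procedure summarised in the captions above (perturb superpixels, score the perturbed images with the model to be explained, fit a ridge regression, keep the most influential patches) can be sketched as follows. This is a schematic re-implementation rather than the LIME library; `model_predict` and all parameters are placeholders.

```python
# Schematic LIME-style explanation: occlude random subsets of superpixels, fit a simple
# ridge regression on the binary masks, and keep the top-weighted superpixels.
import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import Ridge

def explain(image, model_predict, n_segments=50, n_samples=200, top_k=5, seed=0):
    rng = np.random.default_rng(seed)
    segments = slic(image, n_segments=n_segments, compactness=10)
    seg_ids = np.unique(segments)
    z = rng.integers(0, 2, size=(n_samples, len(seg_ids)))   # binarized "intermediate representation"
    scores = []
    for row in z:
        perturbed = image.copy()
        for keep, sid in zip(row, seg_ids):
            if not keep:
                perturbed[segments == sid] = 0                # occlude the patch
        scores.append(model_predict(perturbed))
    ridge = Ridge(alpha=1.0).fit(z, np.asarray(scores))
    top = seg_ids[np.argsort(ridge.coef_)[::-1][:top_k]]      # most influential superpixels
    return np.isin(segments, top)                             # binary explanation mask

# Example with a dummy model that responds to the image centre.
img = np.random.rand(96, 96, 3)
mask = explain(img, lambda x: float(x[32:64, 32:64].mean()))
print("explained area fraction:", mask.mean())
```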
Junwen Wang; Katayoun Farrahi
[ { "authors": "Alex Krizhevsky; Ilya Sutskever; Geoffrey E Hinton", "journal": "", "ref_id": "b0", "title": "ImageNet Classification with Deep Convolutional Neural Networks", "year": "" }, { "authors": "Shervin Minaee; Yuri Boykov; Fatih Porikli; Antonio Plaza; Nasser Kehtarnavaz; Demetri Terzopoulos", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b1", "title": "Image segmentation using deep learning: A survey", "year": "2022" }, { "authors": "Joseph Redmon; Ali Farhadi", "journal": "", "ref_id": "b2", "title": "Yolov3: An incremental improvement", "year": "2018" }, { "authors": "Andre Esteva; Katherine Chou; Serena Yeung; Nikhil Naik; Ali Madani", "journal": "npj Digital Medicine", "ref_id": "b3", "title": "Deep Learning-enabled Medical Computer Vision", "year": "2021" }, { "authors": "Jeremy Irvin; Pranav Rajpurkar; Michael Ko; Yifan Yu; Ciurea-Ilcus", "journal": "", "ref_id": "b4", "title": "CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison", "year": "2019" }, { "authors": "D Julian; Jorge A Arias-Londono; Laureano Gomez-Garcia; Juan I Moro-Velazquez; Godino-Llorente", "journal": "IEEE Access", "ref_id": "b5", "title": "Artificial Intelligence applied to chest X-Ray images for the automatic detection of COVID-19. A thoughtful evaluation approach", "year": "2020" }, { "authors": "Yuan Liu; Ayush Jain; Clara Eng; David H Way; Kang Lee", "journal": "Nature Medicine", "ref_id": "b6", "title": "A deep learning system for differential diagnosis of skin diseases", "year": "2020" }, { "authors": "L Chetan; Ozan Srinidhi; Anne L Ciga; Martel", "journal": "Medical Image Analysis", "ref_id": "b7", "title": "Deep neural network models for computational histopathology: A survey", "year": "2021" }, { "authors": "Mauricio Reyes; Raphael Meier; Sérgio Pereira; Carlos A Silva", "journal": "Radiology: Artificial Intelligence", "ref_id": "b8", "title": "On The Interpretability of Artificial Intelligence in Radiology: Challenges and Opportunities", "year": "2020" }, { "authors": "Adriel Saporta; Xiaotong Gui; Ashwin Agrawal; Anuj Pareek", "journal": "medRxiv", "ref_id": "b9", "title": "Benchmarking saliency methods for chest X-ray interpretation", "year": "2022" }, { "authors": "Karen Simonyan; Andrea Vedaldi; Andrew Zisserman", "journal": "", "ref_id": "b10", "title": "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps", "year": "2014" }, { "authors": "R Ramprasaath; Michael Selvaraju; Abhishek Cogswell; Ramakrishna Das; Vedantam", "journal": "International Journal of Computer Vision", "ref_id": "b11", "title": "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization", "year": "2020" }, { "authors": "Enrique S Marquez; Jonathon S Hare; Mahesan Niranjan", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b12", "title": "Deep Cascade Learning", "year": "2018" }, { "authors": "Scott E Fahlman; Christian Lebiere", "journal": "", "ref_id": "b13", "title": "The Cascade-Correlation Learning Architecture", "year": "1990" }, { "authors": "Xin Du; Katayoun Farrahi; Mahesan Niranjan", "journal": "ACM", "ref_id": "b14", "title": "Transfer Learning Across Human Activities Using a Cascade Neural Network Architecture", "year": "2019" }, { "authors": "Junwen Wang; Xin Du; Katayoun Farrahi; Mahesan Niranjan", "journal": "", "ref_id": "b15", "title": "Deep Cascade Learning for Optimal Medical Image Feature Representation", "year": "2022" }, { 
"authors": "Joseph Redmon; Santosh Kumar Divvala; Ross B Girshick; Ali Farhadi", "journal": "", "ref_id": "b16", "title": "You only look once: Unified, real-time object detection", "year": "2016" }, { "authors": "Marco Tulio Ribeiro; Sameer Singh; Carlos Guestrin", "journal": "Association for Computing Machinery", "ref_id": "b17", "title": "Why Should I Trust You?\": Explaining the Predictions of Any Classifier", "year": "2016" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge J Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence Zitnick", "journal": "", "ref_id": "b18", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "M Everingham; L Van Gool; C K I Williams; J Winn; A Zisserman", "journal": "International Journal of Computer Vision", "ref_id": "b19", "title": "The Pascal Visual Object Classes (VOC) Challenge", "year": "2010-06" }, { "authors": "Xiaosong Wang; Yifan Peng; Le Lu; Zhiyong Lu; Mohammadhadi Bagheri", "journal": "", "ref_id": "b20", "title": "ChestX-ray8: Hospital-scale Chest X-ray Database and Benchmarks on Weakly-supervised Classification and Localization of Common Thorax Diseases", "year": "2017" }, { "authors": "Eugene J Edward R Dougherty; Jeff B Kraus; Pelz", "journal": "IEEE", "ref_id": "b21", "title": "Image Segmentation by Local Morphological Granulometries", "year": "1989" } ]
[ { "formula_coordinates": [ 3, 286.46, 704.6, 254.21, 22.31 ], "formula_id": "formula_0", "formula_text": "w = ∂S c ∂I(1)" }, { "formula_coordinates": [ 4, 72, 245.42, 468.67, 119.2 ], "formula_id": "formula_1", "formula_text": "α c k = 1 Z i j ∂y c ∂A k ij (2) L c Grad-CAM = ReLU k α c k A k (3) 2.2.3 LIME Local Interpretable" }, { "formula_coordinates": [ 5, 169.56, 100.58, 371.11, 69.71 ], "formula_id": "formula_2", "formula_text": "L BBox = S 2 i=0 B j=0 1 obj ij (x i -xi ) 2 + (y i -ŷi ) 2 + S 2 i=0 B j=0 1 obj ij √ w i -ŵi 2 + h i -ĥi 2 (4)" }, { "formula_coordinates": [ 5, 254.63, 314.04, 286.03, 23.22 ], "formula_id": "formula_3", "formula_text": "IOU = area (B p ∩ B gt ) area (B p ∪ B gt )(5)" } ]
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [], "table_ref": [], "text": "The work was split between three members. The work was split into three components, the android application, the Langchain program and the flask app. All three were developed independently. Although the flask app required the Langchain program to be completed first in order to be fully built.\nThe final product we have developed is a functional and ready to deploy chatbot. It has a user interface in the form of an android application. The query processing program is written using the framework Langchain in python. The Langchain program is packaged into a flask application which contains a REST endpoint for processing queries in the form of POST requests. The flask application can then be hosted on a server and the android application communicates with the backend through POST requests. The relevant documents need to be placed inside the data directory in the backend server. The chatbot will then process the queries in the context of the documents present in the directory and provide a response." }, { "figure_ref": [], "heading": "II. METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "We will now explore each component of the chatbot. We will discuss their development and the working logic behind each of them." }, { "figure_ref": [], "heading": "A. Langchain", "publication_ref": [], "table_ref": [], "text": "Langchain is an open source Natural Language Processing(NLP) framework designed to simplify the creation of applications using Large Language Models(LLM). The main purpose of using Langchain is to combine powerful LLMs with various external applications wherever there is a need for NLP. Langchain uses a feature known as Embeddings transformer where a LLM is used which in our case is OPEN AI's GPT to generate embedding related to the prompt that is given by the user. The key function of the embeddings is to understand the semantic meaning of the prompt given by the user. Once it understands the context in which the prompt was given, it looks for the relevant texts that are semantically relevant to the prompt.\nWhen we pass a large document as an input into Langchain, it breaks the entire content into smaller chunks of text and stores them in vector store. When a prompt is given, it understands the semantic meaning and the context in which it was given and uses a technique called as \"Cosine Similarity\" to measure the similarity between the prompt and the information stored in the vector store. It ranks every text chunk in the vector store and retrieves the text chunk that has higher Cosine Similarity with the prompt.\nDue to its capability of processing huge amount of data and responding similarly to a human, Langchain finds its application in assisting as an Interactive Chatbot, text summarization, coding assistance, marketing and e-commerce platforms to better engage with customers. In our case, it acts as the chatbot assistant to answer the legal queries of the user regarding the document which was uploaded. This can prove to be a useful tool to both the layman and the legal aspirants to understand the long and tedious judgements given by the courts by prompting questions regarding the document. The legal context is brought to the chatbot by also sending the indian constitution to the chatbot along with the document of interest." }, { "figure_ref": [], "heading": "B. 
Flask Application", "publication_ref": [], "table_ref": [], "text": "The technology chosen to build the backend was Flask. This component allows us to use our query processing program from anywhere by hosting it on a server. The flask app provides a POST request end point. Once the app is hosted on a server, anyone can make POST requests to it and make use of our query processing program. There are many reasons for choosing Flask as the technology to build our backend, some of which are:" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Lightweight and Flexible Framework: Flask is known for its simplicity and minimalism, making it easy to understand and quick to set up. This lightweight nature ensures that the chatbot application remains agile and responsive, facilitating faster development and deployment." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "RESTful API Support: Flask is well-suited for building RESTful APIs, and in this project, it serves as the backend with a REST endpoint for processing queries. This RESTful architecture allows for seamless communication between the android application and the backend, enhancing scalability and ease of integration with other systems if needed." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "", "publication_ref": [], "table_ref": [], "text": "Python Integration: Since the Langchain program, responsible for query processing, is written in Python, Flask, also a Python web framework, facilitates smooth integration between the android application and the processing logic. This cohesiveness simplifies the overall development process.  Scalability and Hosting: Flask applications are easily deployable, and they can be hosted on various platforms, making it convenient to scale the chatbot based on demand. This feature is crucial for ensuring that the chatbot remains accessible and responsive, especially if the user base grows over time.\nThe code in Fig. 1 initializes the flask app. It makes sure that the app is run only when the script is directly run and not just imported. It also provides the 'app.run()' method which allows us to run the development server for the app. The code also enables Cross-Origin Resource Sharing(CORS), allowing the application to handle requests from different origins. The flask app provides the '/docqna' route (Fig. 2) which handles POST requests. Upon an appropriate request being made to this path, the 'processclaim()' function will be executed. This function parses the POST request body as JSON then extracts the user's query from it. It then runs the function 'qa_chain()' with the query as its parameter. The 'qa_chain()' function is the query processing program. It takes the user's query as input and provides the response. This path will only successfully process POST requests with a JSON body of a specific form. Even though it will try to force parse any type of content as JSON it still requires a valid 'query' object in order to process the query. " }, { "figure_ref": [], "heading": "C. Android Application", "publication_ref": [], "table_ref": [], "text": "To build the android application we will be using Kotlin and XML. XML will be used for the frontend and Kotlin will be used for backend. The reasons for using the above technologies is provided below." 
}, { "figure_ref": [], "heading": "i) Kotlin", "publication_ref": [], "table_ref": [], "text": " Modern Language Features: Kotlin is a modern, statically-typed programming language that brings many features to the table, making code more concise, expressive, and safer compared to Java." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Interoperability with Java: Kotlin is fully interoperable with Java, allowing you to leverage existing Java libraries and frameworks seamlessly. This is particularly beneficial for Android development, as many Android libraries and tools are originally written in Java." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Conciseness and Readability: Kotlin's concise syntax reduces boilerplate code, making the codebase more readable and maintainable. This can lead to increased development speed and fewer chances of introducing bugs." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Official Language for Android: Kotlin is officially supported by Google as a first-class language for Android development. This support means that Kotlin receives regular updates, and new Android features are often Kotlin-first or Kotlin-only." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Coroutines for Asynchronous Programming: Kotlin introduces coroutines, which simplifies asynchronous programming. This is crucial for Android apps that often involve network operations, database queries, or other tasks that should not block the main UI thread." }, { "figure_ref": [], "heading": "ii)", "publication_ref": [], "table_ref": [], "text": "XML for Layouts:  Declarative UI with XML: XML is used for defining layouts in Android, providing a declarative way to describe the UI components and their attributes. This separation of UI and logic makes it easier to understand and maintain the code." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Resource Management: XML is used to define resources such as layouts, strings, and colors in a separate file. This allows for efficient resource management and makes it easy to support multiple device configurations." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Data Binding: XML can be integrated with Android Data Binding, allowing for a more seamless connection between the UI components and the underlying data model. This can lead to cleaner and more maintainable code, as changes to the data automatically update the UI and vice versa." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "UI Customization and Theming: XML allows for easy customization of UI components and theming of the app. Styles and themes can be defined in XML, providing a consistent look and feel throughout the app." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Accessibility: XML layouts can be designed with accessibility in mind, making it easier to ensure that your app is usable by people with disabilities. This includes features like content descriptions, focus order, and screen reader support.\nFig. 
4 The various UI components of the android application.\nIn summary, choosing Kotlin for the app's logic and XML for defining layouts provides a powerful and efficient combination for Android app development, offering modern language features, seamless integration, and a declarative approach to UI design." }, { "figure_ref": [], "heading": "III. RESULTS", "publication_ref": [], "table_ref": [], "text": "Our final product is a functional chatbot which can be used to discuss multiple documents. During development of the chatbot. Before the integration of the android application, Postman was used to test for results." }, { "figure_ref": [ "fig_0" ], "heading": "Fig.5 Example of query and response body", "publication_ref": [], "table_ref": [], "text": "The query in the example in Fig. 10 is \"What does the research paper tell us about elevator scheduling?\" and the response is \"The research paper discusses various optimization techniques for elevator scheduling, including genetic algorithms, swarm intelligence, machine learning models and the use of advance information. It also evaluates the performance of these algorithms using criteria such as waiting time, travel time, and energy consumption, with the focus on reduction of waiting and travel times. It reviews the advantages and disadvantages of each approach and discusses their complexity and scalability.\"\nThe above query and response was asked in the context of a research paper titled \"An Analysis of Various Optimization Techniques for Elevator Scheduling \". This shows that our chatbot is able to understand the document's context and respond accordingly. To turn this into a legal context chatbot we feed the Indian constitution into the bot long with the document of interest." }, { "figure_ref": [], "heading": "IV. LITERATURE REVIEW", "publication_ref": [], "table_ref": [], "text": "The article [1] serves as a guide to beginners to learn about Langchain. It clearly explains the process of installing Langchain in the system using specific commands. It briefly covers about the process in which Langchain operates starting from getting the document from the user as input to breaking it into smaller text chunks and finding out their semantic meaning using the Cosine Similarity technique and then getting the query from user and finding and fetching the response from the Large Language Module. It also explains the intricate processes such as Embeddings transformation and vector storage of the text chunks in a plain and simple language. The article also explains about the procedure to interact with multiple documents at the same time by creating a List of Documents.\nThe part 1 of the article [2] provides key insights about the different modules and the various different features that each module has to offer. It explains about 7 important modules including their functionalities and the way in which Langchain chains all of these modules into its framework. The article also provides code snippets to implement these functionalities. The part 2 of the article explains the way in which Langchain processes the data using Embeddings and Vector store. It also covers the key details about the function which is used to remember the previous conversions and correlate with the queries.\n[4] is the official documentation of Kotlin which was used for learning the syntax of Kotlin. [5] is the course by Google which was used to understand and implement Kotlin. 
The video [6] was used to understand XML and how to implement XML and Kotlin together.\nThe blog [3] by Postman was used to understand the basics of API testing. The tutorial video [7] was used to understand how to make a Flask application and make use of REST APIs in Flask. The documentation of Flask [8] was also used asa reference material." }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In conclusion, this project followed an efficient methodology in order to build a functional and scalable chatbot. Future work on this project holds great potential in turning this into an industry standard chatbot. There are many features such as AI training, provision in android app for uploading documents and increasing query token limit which can be developed in the future. Work can also be done on the UI to improve the user experience. Overall, a legal document chatbot able to answer questions within the context of Indian constitution was developed." } ]
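For reference, the pipeline described in this paper — chunking the documents, embedding them into a vector store, ranking chunks by cosine similarity against the query, and serving the answer through the '/docqna' Flask endpoint — can be sketched as below. Only the route, the JSON request shape, and the 'processclaim'/'qa_chain' names follow the paper; `embed()` and `llm_answer()` are placeholders standing in for the OpenAI/LangChain calls, so this is a schematic outline rather than the project's actual code.

```python
# Schematic sketch of the described backend: chunk documents, embed them, rank by cosine
# similarity, answer via an LLM placeholder, and expose the result over POST /docqna.
import numpy as np
from flask import Flask, request, jsonify
from flask_cors import CORS

def embed(text):                      # placeholder embedding model
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(64)

def llm_answer(query, context):       # placeholder LLM call
    return f"Answer to '{query}' based on {len(context)} retrieved chunk(s)."

def chunk(text, size=500):
    return [text[i:i + size] for i in range(0, len(text), size)]

DOCS = ["placeholder: document of interest", "placeholder: Indian constitution text"]  # data directory
CHUNKS = [c for d in DOCS for c in chunk(d)]
VECTORS = np.stack([embed(c) for c in CHUNKS])   # the "vector store"

def qa_chain(query, k=3):
    q = embed(query)
    sims = VECTORS @ q / (np.linalg.norm(VECTORS, axis=1) * np.linalg.norm(q) + 1e-9)
    context = [CHUNKS[i] for i in np.argsort(sims)[::-1][:k]]   # highest cosine similarity
    return llm_answer(query, context)

app = Flask(__name__)
CORS(app)                              # allow requests from the Android app's origin

@app.route("/docqna", methods=["POST"])
def processclaim():
    body = request.get_json(force=True)      # expects {"query": "..."}
    if not body or "query" not in body:
        return jsonify({"error": "missing 'query'"}), 400
    return jsonify({"response": qa_chain(body["query"])})

if __name__ == "__main__":
    app.run()                          # development server only
```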
With the exponential growth of digital data and the increasing complexity of legal documentation, there is a pressing need for efficient and intelligent tools to streamline the handling of legal documents. Given the recent developments in AI, especially in chatbots, such tools present a very compelling solution to this problem. An insight into the process of creating a Legal Documentation AI Chatbot with as many relevant features as possible within the given time frame is presented. The development of each component of the chatbot is presented in detail, and each component's workings and functionality are discussed, from the build of the Android app and the Langchain query processing code to the integration of both through a Flask backend and REST API methods.
Development of a Legal Document AI-Chatbot
[ { "figure_caption": "Fig. 11Fig.1 Code for initializing flask application", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 22Fig.2 Route provided by flask application", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 33Fig.3 Example of valid JSON body", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" } ]
Nataraj Pranav; Devaraj; Rakesh Teja; Manoj Kumar R; Aaryav Gangrade
[]
[]
10.1109/ICCITECHN.2017.8281840
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b18", "b20", "b2", "b0", "b5", "b11", "b23", "b6" ], "table_ref": [], "text": "In the field of Natural Language Processing, Sentiment Analysis has earned significant attention as a research area dedicated to the analysis of textual content. A considerable body of research on Sentiment Analysis in Bangla has been conducted. Some of these works (e.g. Islam et al. (2021), Kabir et al. (2023)) are based on introducing new datasets. In parallel, other works(e.g. Amin et al. (2019), Al-Amin et al. (2017)) are done on novel approaches. In spite of these numerous works, different opportunities still exist to improve the Analysis of Sentiments.\nIn this paper, we describe our system for task 2 of the Bangla Language Processing Workshop @EMNLP-2023 (Hasan et al., 2023a). We employ various systems based on BanglaBert and BanglaBert-Large (Bhattacharjee et al., 2022). Our experimental systems include fine-tuning, increasing the generalization based on dropping random Additionally, we describe alternate potential methods that have not scored well in the result section 6. To illustrate, we explore Task Adaptive Pre-Training (Gururangan et al., 2020), in fact, has been used by this year's winner of SemEval Task 12 (Muhammad et al., 2023) on sentiment analysis of African Language, and generating paraphrases using BanglaT5 (Bhattacharjee et al., 2023). Moreover, we notice a significant drop in our score in the final test set of our best model. We describe this as our limitations in the section 7." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b20", "b5" ], "table_ref": [], "text": "Many of the related works are primarily focused on novel datasets covering diverse domains. Islam et al. (2022) have developed a dataset comprised of various public comments from social media platforms. Rahman and Dey (2018) have created their datasets based on Cricket and Restaurant reviews. Most recently, (Kabir et al., 2023) In recent years, Large Language Models(LLM), trained on huge corpus, have become popular for their capability to understand the language and can easily fine-tuned for any task like Sentiment Analysis. LLMs based on the Bangla language(e.g. BanglaBert (Bhattacharjee et al., 2022), shaha-jBert (Diskin et al., 2021), BanglaT5 (Bhattacharjee et al., 2023)) are also available, which opens opportunities to work on various tasks for Bangla." }, { "figure_ref": [], "heading": "Task Description", "publication_ref": [], "table_ref": [], "text": "This is a multi-class classification task where the objective is to detect the sentiment of the given text into 3 different classes: Positive, Negative, and Neutral. The score will be calculated using the micro-f1. The task consists of two phases: a development phase followed by a test phase. The final standing is based on the score of the test set provided during the test phase." }, { "figure_ref": [], "heading": "Dataset Description", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "The dataset is comprised of MUBASE (Hasan et al., 2023b) and SentiNob (Islam et al., 2021) datasets. The SentiNob dataset consists of various public comments collected from social media platforms. It covers 13 different domains, for example, politics, education, agriculture, etc. On the other hand, the MUBASE dataset consists of posts collected from Twitter and Facebook. The sample sizes of different sets given for training, validation, and testing are shown in Table 2." 
}, { "figure_ref": [], "heading": "System Description", "publication_ref": [ "b7" ], "table_ref": [], "text": "Here, we discuss several systems that we have experimented with for the task including the preprocessing of the dataset. (Clark et al., 2020), which was originally used to pre-train these models. We don't perform DAPT since the models already cover the domains." }, { "figure_ref": [], "heading": "2-Stage Fine-Tuning of LLMs", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "In the first stage, we fine-tune BanglaBert using the external data only. Here, we don't include any given data from the task. In the next stage, we do regular fine-tuning on the train set. We use the term \"2FT\" as a short form of this approach. The list of the external datasets and sample sizes are shown in table 10." }, { "figure_ref": [], "heading": "Data augmentation", "publication_ref": [ "b4", "b6" ], "table_ref": [], "text": "We experiment with 2 data augmentation techniques to improve the generalization. First, instead of dropping random words (Bayer et al., 2022), we drop random tokens(RTD) since dropping words might change the meaning. We apply RTD on the fly during the training. Second, we employ para-phrasing as data augmentations using BanglaT5 (Bhattacharjee et al., 2023)." }, { "figure_ref": [], "heading": "Preprocessing of Data", "publication_ref": [ "b24" ], "table_ref": [], "text": "We remove the duplicates found in the training set and development set. We replace any url and username with URL and USER tag respectively similar to Nguyen et al. (2020). While using BanglaBert we normalize the sentence by their specific normalizer 1 as required by their model. All of the sentences are tokenized by the individual tokenizer required by each model. We set the max length of tokenization to 128 for each text.\nWe use several external data. However, most of the labels don't match the labels of this task. For the initial fine-tuning of the LLMs, we first map different labels to the three labels for this task. The label mapping is shown in table 11. For TAPT, we didn't need any of these labels since we do masked language modeling. Finally, we also remove the duplicates found in the external datasets." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b22", "b11" ], "table_ref": [], "text": "We have used Models and Trainer from Huggingface 2 (PyTorch version). We employ mixed precision training (Micikevicius et al., 2017) that enables faster training and consumes low GPU memory. Moreover, we built a code such that the results are reproducible. All of the experiments are done using a single V100 GPU in Google Colaboratory 3 . We do hyper-parameters search on learning rate, batch size, dropout ratios, and total epochs. We start the search with the parameter settings as suggested Gururangan et al. (2020) " }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b29" ], "table_ref": [ "tab_6", "tab_5" ], "text": "To begin, we discuss the systems that have scored well on the Development-Test's score. The top individual model is BanglaBert-Large with a random token drop that has scored 0.733, and even without any enhancement, it can score 0.723. The next best single model is BanglaBert with random token drop(RTD) and 2-stage fine-tuning that has scored 1 https://github.com/csebuetnlp/normalizer 2 https://huggingface.co/ 3 https://colab.research.google.com/ 0.729. Table 3 shows the scores of our selected models in the Development-Test Set. 
Here, we see that both usages of external datasets and RTD have benefited the BanglaBert and BanglaBert-Large. We have built an ensemble of 3 best individual models(model ID 3, 5, and 6) that has scored 0.734, where we decide the class based on majority voting, and in case of a tie, we use the class predicted by the best model. We chose only the 3 best models for the ensemble because the other model's score was low and taking an odd number of models helps to decide the output class in case of a tie.\nWe have submitted the ensembled model as our best model in the test phase and has scored 0.718. Moreover, We have submitted the 3 individual best models. Our scores on the Test Set are shown in table 4. Here, we have found some inconsistency: BanglaBert-Large with random token drop, which we have considered the best model based on the Development-Test set, performed worst among the other 2 models, and BanglaBert with random token drop and pre-fine-tuned with external data, our 2nd best model, has performed the best. More importantly, every variant of BanglaBert-Large has scored low on the Test set. We discuss some analysis more in section 7. Finally, table 6 shows the confusion matrix of our ensembled model on Test set. We see that our model performed worst on detecting the Neutral class, i.e. only 412 out of 1277 samples have been correct having an accuracy of 32%, where the accuracy of Positive and Neutral classes are 78% and 83% respectively.\nThere are some systems that didn't achieve favorable performance from the beginning of our experiments. Firstly, TAPT didn't improve our results but rather declined the score by 0.039 with respect to simple fine-tuning as shown in table 5. What we can infer is that TAPT is supposed to help adapt the BanglaBert to the task domain, but it overfitted on the training samples, where the original model is already in a good optima that covered the task domain better.\nParaphrasing to create additional data using BanglaT5 also didn't work well. Its score is shown in table 5. The most perceptible reason is that paraphrased sentences, although good, were not diverse enough from the original sentences. Examples of generated paraphrases are shown in figure 1.\nOther than BanglaBert, we try the XLM-Roberta-Large, a multi-lingual model, which is used by several task winners (e.g. (Wang et al., 2022)).\nHowever, it has scored low on the Development-Test set even with all enhancements. Its score is also shown in Table 3. BanglaBert-Large on the Test set. As anticipated, models show varying performance when initialized with different seeds. Table 7 shows the results of this experiment. Moreover, we have found that the average score of the BanglaBert is better than the BanglaBert-Large. In fact, this result is consistent with the result found by the authors of BanglaBert that BanglaBert-Large performs lower than BanglaBert on Sentiment Analysis on Senti-Nob dataset4 . BangalThus, before considering a model, the average score from different seeds needs to be evaluated when the training data is small." }, { "figure_ref": [], "heading": "ID", "publication_ref": [ "b4" ], "table_ref": [], "text": "TAPT is a popular method for pre-training, but it has been ineffective for our task. However, we have inferred this based on a few experiments. Thus, we suggest that more research needs to be done on the effectiveness of TAPT, as well as DAPT, on BanglaBert.\nOur research has been mostly based on finetuning. 
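The ensembling rule used for the final submission (majority voting, with ties broken by the best single model) amounts to the following sketch; the label values and model ordering are illustrative.

```python
# Majority-vote ensemble with tie-break by the best single model (assumed to be listed first).
from collections import Counter

def ensemble_predict(per_model_labels):
    """per_model_labels: labels from the models, best model first, e.g. ['Pos', 'Neg', 'Pos']."""
    counts = Counter(per_model_labels)
    top, top_count = counts.most_common(1)[0]
    tied = [lab for lab, c in counts.items() if c == top_count]
    return per_model_labels[0] if len(tied) > 1 else top

print(ensemble_predict(["Positive", "Negative", "Positive"]))   # majority -> Positive
print(ensemble_predict(["Neutral", "Positive", "Negative"]))    # 3-way tie -> best model: Neutral
```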
As future work, we would like to explore using common data augmentation techniques (Bayer et al., 2022) for the given data. Besides, there are several multilingual Pre-trained Models that include the Bangla Language are need to be explored along with sophisticated methods and may even achieve better results." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we stated our systems based on BanglaBert and BanglaBert-Large for that Sentiment Analysis task. We used simple techniques like, 2-stage fine-tuning, using external datasets, and dropping random tokens. Our system scored 3rd overall in the task. We also discussed some potential systems that didn't demonstrate satisfactory performance. More importantly, we have discussed the score inconsistency of our best model between Development-Test Set and Test Set as our limitation. Finally, we discussed directing some future research like applying TAPT and DAPT on BanglaBert and trying more data augmentations or sophisticated methods." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "datasets which are used for our system are publicly available at https://github.com/Aunabil4602/" }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "Here, we show a figure and additional tables related to our descriptions. " }, { "figure_ref": [], "heading": "Parameter", "publication_ref": [], "table_ref": [], "text": "" } ]
This paper describes the system of the LowResource Team for Task 2 of BLP-2023, which involves conducting sentiment analysis on a dataset composed of public posts and comments from diverse social media platforms. Our primary aim is to utilize BanglaBert, a BERT model pre-trained on a large Bangla corpus, using various strategies including fine-tuning, dropping random tokens, and using several external datasets. Our final model is an ensemble of the three best BanglaBert variations. Our system achieved 3rd overall on the Test Set among 30 participating teams with a score of 0.718. Additionally, we discuss the promising systems that did not perform well, namely task-adaptive pretraining and paraphrasing using BanglaT5. Training codes and external
LowResource at BLP-2023 Task 2: Leveraging BanglaBert for Low Resource Sentiment Analysis of Bangla Language
[ { "figure_caption": "Showing top 5 of the final standings of the BLP-2023 Task 2. Our team stands 3rd among 30 participants.", "figure_data": "RankTeamMicro-f11MoFa_Aambela0.7312yangst0.7273LowResource(ours)0.7184Hari_vm0.7175PreronaTarannum0.716", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Sample sizes of various sets provided in the Task 2.", "figure_data": "Set NameSample sizeTraining32566Development3934Development Test3426Test67074.1 Fine-tuning Pre-trained LLMsFine-tuning Pre-trained Models can achieve highscores with fewer training steps. Top competi-tors of different shared tasks (e.g. Wang et al.(2022), Wang et al. (2023)) use these pre-trainedmodels. For this task, we use several variations", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance of TAPT and Paraphrasing on BanglaBert-Large in comparison with fine-tuning on Development Set.", "figure_data": "PredictedNeg Neut PosTrueNeg 2770 244 Neut 598 412324 267Pos331128 1633", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Confusion Matrix of the Ensembled model on Test Set.", "figure_data": "Seed BBert BBertL1234 0.7156 0.7115420.7179 0.71107470.7197 0.721052467 0.7192 0.71222779 0.7135 0.71613620.7185 0.71348194 0.7182 0.7127avg.0.7177 0.7140Table 7: Scores from using different seeds forBanglaBert(BBert), BanglaBert-Large(BBertL) on TestSet.", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" } ]
Aunabil Chakma; Masum Hasan
[ { "authors": " Md; Md Saiful Al-Amin; Shapan Islam; Uzzal Das", "journal": "", "ref_id": "b0", "title": "Sentiment analysis of bengali comments with word2vec and sentiment information of words", "year": "2017" }, { "authors": " Habibul Md; Md-Mizanur Alam; Md Rahoman; Kalam Abul; Azad", "journal": "", "ref_id": "b1", "title": "Sentiment analysis for bangla sentences using convolutional neural network", "year": "2017" }, { "authors": "Al Amin; Imran Hossain; Aysha Akther; Kazi Masudul; Alam ", "journal": "", "ref_id": "b2", "title": "Bengali vader: A sentiment analysis approach using modified vader", "year": "2019" }, { "authors": "Nazmul Shamsul Arafin Mahtab; Md Islam; Mahfuzur Rahaman", "journal": "", "ref_id": "b3", "title": "Sentiment analysis on bangladesh cricket with support vector machine", "year": "2018" }, { "authors": "Markus Bayer; Marc-André Kaufhold; Christian Reuter", "journal": "ACM Computing Surveys", "ref_id": "b4", "title": "A survey on data augmentation for text classification", "year": "2022" }, { "authors": "Abhik Bhattacharjee; Tahmid Hasan; Wasi Ahmad; Kazi Samin Mubasshir; Md Saiful Islam; Anindya Iqbal; M Sohel Rahman; Rifat Shahriyar", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "BanglaBERT: Language model pretraining and benchmarks for low-resource language understanding evaluation in Bangla", "year": "2022" }, { "authors": "Abhik Bhattacharjee; Tahmid Hasan; Wasi Uddin Ahmad; Rifat Shahriyar", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "BanglaNLG and BanglaT5: Benchmarks and resources for evaluating low-resource natural language generation in Bangla", "year": "2023" }, { "authors": "Kevin Clark; Minh-Thang Luong; Quoc V Le; Christopher D Manning", "journal": "", "ref_id": "b7", "title": "ELECTRA: Pretraining text encoders as discriminators rather than generators", "year": "2020" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Avishek Das; Omar Sharif; Mohammed Moshiul Hoque; Iqbal H Sarker", "journal": "", "ref_id": "b9", "title": "Emotion classification in a resource constrained language using transformerbased approach", "year": "2021" }, { "authors": "Alexey Michael Diskin; Max Bukhtiyarov; Lucile Ryabinin; Quentin Saulnier; Anton Lhoest; Dmitry Sinitsin; Dmitriy Popov; Maxim Pyrkin; Alexander Kashirin; Albert Borzunov; Denis Villanova Del Moral; Ilia Mazur; Yacine Kobelev; Thomas Jernite; Gennady Wolf; Pekhimenko", "journal": "", "ref_id": "b10", "title": "Distributed deep learning in open collaborations", "year": "2021" }, { "authors": "Suchin Gururangan; Ana Marasović; Swabha Swayamdipta; Kyle Lo; Iz Beltagy; Doug Downey; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Don't stop pretraining: Adapt language models to domains and tasks", "year": "2020" }, { "authors": " Arid Md; Firoj Hasan; Anika Alam; Shudipta Anjum; Afiyat Das; Anjum", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Blp-2023 task 2: Sentiment analysis", "year": "2023" }, { "authors": " Arid Md; Shudipta Hasan; Afiyat Das; Firoj Anjum; Anika Alam; Avijit Anjum; Sheak Sarker; Haider Rashed; Noori", "journal": "", "ref_id": 
"b13", "title": "Zero-and few-shot prompting with llms: A comparative study with finetuned models for bangla sentiment analysis", "year": "2023" }, { "authors": "Asif Hassan; Mohammad Rashedul Amin; Abul Kalam; Al Azad; Nabeel Mohammed", "journal": "", "ref_id": "b14", "title": "Sentiment analysis on bangla and romanized bangla text using deep recurrent models", "year": "2016" }, { "authors": "Avishek Md Iqbal; Omar Das; Mohammed Moshiul Sharif; Hoque; Sarker", "journal": "SN Computer Science", "ref_id": "b15", "title": "Bemoc: A corpus for identifying emotion in bengali texts", "year": "2022" }, { "authors": "Nafis Irtiza; Tripto ; Mohammed Eunus; Ali ", "journal": "", "ref_id": "b16", "title": "Detecting multilabel sentiment and emotions from bangla youtube comments", "year": "2018" }, { "authors": "Md Saiful Khondoker Ittehadul Islam; Md Ruhul Islam; Amin", "journal": "", "ref_id": "b17", "title": "Emonoba: A dataset for analyzing fine-grained emotions on noisy bangla texts", "year": "2020" }, { "authors": "Ittehadul Khondoker; Sudipta Islam; Md Kar; Mohammad Ruhul Saiful Islam; Amin", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "SentNoB: A dataset for analysing sentiment on noisy Bangla texts", "year": "2021" }, { "authors": "Ittehadul Khondoker; Tanvir Islam; Md Yuvraz; Enamul Saiful Islam; Hassan", "journal": "", "ref_id": "b19", "title": "Emonoba: A dataset for analyzing fine-grained emotions on noisy bangla texts", "year": "2022" }, { "authors": "Mohsinul Kabir; Bin Obayed; Mahfuz; Syed Rifat Raiyan; Mahmud Hasan; Md Kamrul Hasan", "journal": "Findings of the Association for Computational Linguistics", "ref_id": "b20", "title": "Banglabook: A large-scale bangla dataset for sentiment analysis from book reviews", "year": "2023" }, { "authors": "Ahmed Mahfuz; Masum; Junayed Sheikh; Ayesha Ahmed; Md Saiful Tasnim; Islam", "journal": "", "ref_id": "b21", "title": "An aspect-based sentiment analysis dataset for bengali and its baseline evaluation", "year": "2020" }, { "authors": "Paulius Micikevicius; Sharan Narang; Jonah Alben; Gregory F Diamos; Erich Elsen; David García; Boris Ginsburg; Michael Houston; Oleksii Kuchaiev; Ganesh Venkatesh; Hao Wu", "journal": "", "ref_id": "b22", "title": "Mixed precision training", "year": "2017" }, { "authors": "Shamsuddeen Hassan; Muhammad ; Idris Abdulmumin; Muhie Seid; David Yimam; Ibrahim Said Ifeoluwa Adelani; Nedjma Ahmad; Abinew Ousidhoum; Saif Ali Ayele; Meriem Mohammad; Sebastian Beloucif; Ruder", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "SemEval-2023 task 12: Sentiment analysis for African languages (AfriSenti-SemEval)", "year": "2023" }, { "authors": "Thanh Dat Quoc Nguyen; Anh Tuan Vu; Nguyen", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "BERTweet: A pre-trained language model for English tweets", "year": "2020" }, { "authors": "Atikur Md; Emon Rahman; Dey Kumar", "journal": "Data", "ref_id": "b25", "title": "Datasets for aspect-based sentiment analysis in bangla and its baseline evaluation", "year": "2018" }, { "authors": "Salim Sazzed", "journal": "", "ref_id": "b26", "title": "Cross-lingual sentiment classification in low-resource bengali language", "year": "2020" }, { "authors": "Salim Sazzed", "journal": "", "ref_id": "b27", "title": "Abusive content detection in transliterated bengali-english social media corpus", "year": "2021" }, { "authors": "Mingyang Wang; Heike Adel; Lukas Lange; Jannik Strötgen; Hinrich Schütze", 
"journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "NLNDE at SemEval-2023 task 12: Adaptive pretraining and source language selection for low-resource multilingual sentiment analysis", "year": "2023" }, { "authors": "Xinyu Wang; Yongliang Shen; Jiong Cai; Tao Wang; Xiaobin Wang; Pengjun Xie; Fei Huang; Weiming Lu; Yueting Zhuang; Kewei Tu; Wei Lu; Yong Jiang", "journal": "", "ref_id": "b29", "title": "DAMO-NLP at SemEval-2022 task 11: A knowledge-based system for multilingual named entity recognition", "year": "2022" } ]
[]
10.1145/2623330.2623732
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0" ], "table_ref": [], "text": "Graphs are data structures that consist of a set of nodes or vertices that are interconnected using edges. Graphs are used in computer science and social sciences to model relationships and track the connections between entities. Since many reallife applications can be modeled using graphs, the available graph data is increasing exponentially. Today, graph data can be obtained in a wide array of applications like social network analysis, network route analysis, image processing and even fields like bioinformatics.\nHence, in recent times there has been a lot of development of efficient and accurate methods of graph data processing. Most recently, Graph Neural Networks (GNNs) [1] are being used to perform convolution steps over graphs and extract useful information from graphs through node, edge and graphlevel embeddings.\nThere are also certain open problems to consider when using GNNs to process graph data. The first is the overfitting of GNNs. When the training data consists of graphs with lots of nodes and edges, the high number of features in the data can cause the GNNs to easily overfit to a particular feature. This is detrimental to the testing performance of the Graph Neural Network. Second, the lack of labelled graph data in many scenarios poses another problem. In applications like molecular property prediction, it is usually not possible to obtain enough data due to the difficulty of manufacturing and testing new chemical compounds.\nThus there is a need to study graph processing methods that work well in low data scenarios while also preventing overfitting. Our contributions in this paper are:\n1) A thorough study of Graph Data Augmentation (GDA) techniques. 2) A thorough study of Few Shot Learning (FSL) techniques for graph classification." }, { "figure_ref": [], "heading": "II. BACKGROUND FOR GRAPH CLASSIFICATION", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Introduction to Graph Neural Networks", "publication_ref": [], "table_ref": [], "text": "GNNs encode node features into low-dimensional space and learn representation vectors for the entire graph as well as for each individual node. A GNN architecture's main goal is to learn an embedding with neighbourhood information.\nIn general, an aggregation function (AGGREGATE) and an update function (UPDATE) can be used tfor establishing a general model for message passing GNNs. When updating each node's embedding in a layer, UPDATE integrates its own previous embedding with the aggregated neighbour embeddings. AGGREGATE combines the representations from the preceding layer for every single node from all of its neighbors. Specifically,\nR i N (v) = AGGREGRATE R i-1 u |u ∈ N (v) R i v = UPDATE R i-1 v , h i N (v)\nwhere at the i-th GNN layer, R i v denotes the representation vector of node v and N (v)specifies an organised set of a node v's neighbours." }, { "figure_ref": [], "heading": "B. Graph Classification", "publication_ref": [], "table_ref": [], "text": "Graph classification mainly focuses on predicting an attribute for each graph in the collection by employing a supervised learning approach. 
Consider a graph represented by G(V, E), with graph label Y, where V denotes the set of vertices and E is the set of edges, graph classification algorithms work on a function F : G → Y , where lowering the difference between the graph's true label and the forecasted labels is the primary objective of the classification method." }, { "figure_ref": [ "fig_0" ], "heading": "III. AUGMENTATION OF GRAPH DATA", "publication_ref": [], "table_ref": [], "text": "Augmentation methods for graph data carry out denoising, imputation and strengthening of graph structure which help them better to match the goals or model processes of a target learning activity. Existing methodologies for GDA are mainly classified into two subparts namely, Rule Based approaches and Learning Based approaches ( Refer Fig. 1 )" }, { "figure_ref": [], "heading": "A. Rule Based Approaches", "publication_ref": [ "b4", "b5", "b4", "b5", "b10", "b13", "b14" ], "table_ref": [], "text": "Rule based approaches are straightforward and designed to preserve the essential characteristics of the original graph while introducing new variations. They are extensively used are mainly subdivided into three categories:\n1) Data Removal: [5] and NodeDropping [6] both seek to arbitrarily remove a portion of the nodes from the supplied graph, presuming that the missing nodes shouldn't have any impact on the network's overall semantics. Feng et al. in [5] employs a consistency loss on the projected logits of several enhanced versions of the graphs for augmentation tasks while You et al. in [6] concentrates on contrastive self-supervised graph representational learning. [11] merges the raw attributes of each layer by passing the data to a two layer GNN and then combining them with vectors of each hidden layer. • Pseudo Labeling: Human labeling of graph data is expensivea and thus, pseudo-labeling is implemented as a semi-supervised approach in order to learn underlying contexts. According to the theory that adjacent nodes are more likely to share the same label, Label propagation [14,15] iteratively distributes node labels down the edges. The introduced labels on the previously unlabeled nodes can then be used to train the GNN model. " }, { "figure_ref": [], "heading": "3) Data Manipulation:", "publication_ref": [ "b17", "b18", "b19" ], "table_ref": [], "text": "• Diffusion: The basic idea is to propagate information from neighboring nodes to each node in the graph, based on the structure of the graph and the attributes of the nodes. Zhang in [18] constructs a Laplacian matrix from the graph adjacency matrix and then uses diffusion to smooth the node attributes over the graph. More recently, methods such as Heat Kernel Signature [19] and Deep-Walk [20] have been proposed to carry out the data diffusion task across edges and nodes." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "B. Learning based approaches", "publication_ref": [ "b20", "b21", "b22", "b23", "b24", "b25", "b27", "b28", "b29", "b30", "b31", "b7" ], "table_ref": [], "text": "Learning based GDA methods take advantage of rich information from downstream tasks to propose augmentation strategies in a data-driven manner.\n1) Graph Structure Learning (GSL): Current endeavors in this field of study have mostly concentrated on automating the collaborative learning of graph structures without the aid of humans or subject-matter expertise. GSL methods are used when data is noisy or missing and based on the adjacency matrix they're trying to comprehend, as shown in Fig. 
2, they can be split into two groups.\n• Continuous structure learning: Learning a weighted graph structure helps gather rich information about edges. Compared to a binary matrix, a weighted continuous adjacency matrix is much easier to optimise because it can be handled with Stochastic Gradient Descent or even convex optimisation techniques. Building upon this, the concept is further divided into five subparts as shown in Fig. 2. • Discrete structure learning: A probabilistic adjacency matrix is used to derive a discrete graph structure. It mainly includes methods like bilevel optimisation [21], Reinforcement Learning [22], and variational inference [23,24] for the optimization task. 2) Rationalization: A subpart of the input attributes that represents, directs, and supports model prediction in the best possible manner is referred to as a rationale. Rationales are typically naturally learnt subgraphs that serve as augmented graph data for the graph models and are representational, informative, or explanatory, employed either alone or in conjunction with the original graph. CIGA, suggested by Chen et al. in [25], models the generation of graphs with structural causal models to address out-of-distribution issues and accounts for the interactions between invariant and spurious characteristics. A novel augmentation technique called Environment Replacement was proposed by Liu et al. in [26]; it works by separating rationales from their context.\n3) Automated Augmentation: Some recent research uses reinforcement learning techniques as a solution because automated GDA targets are frequently difficult to optimize. AutoGRL by Sun et al. automatically learns the ideal blend of GDA actions, GNN architecture, and hyperparameters over the training procedure. AutoGDA by Zhao et al. in [28] uses an RL agent to develop localised augmentation strategies for node classification tasks and to generalise the learning. Another set of automated augmentation studies, by Kose and Shen [29], Wang [30] and others, focuses on graph contrastive learning.\n4) Graph Adversarial Training: Machine learning models trained on graph data can become more robust and general by using a technique called \"graph adversarial training\". The main idea is to incorporate adversarial examples that induce erroneous predictions from the machine learning model into the training data. The model can become more resilient to changes in the input data by learning from both normal and adversarial cases during training.\nContrary to learning the ideal graph structure, graph adversarial training does not look for one. Dai et al. in [31] proposed a method which randomly deletes edges during adversarial training, without any optimisation on the graph data, in order to augment the adjacency matrix. An adversarial training technique with dynamic regularisation was put forth by Feng et al. in [32], with the goal of restoring the uniformity of the graph and limiting the disparity between the anticipated outcomes of the linked nodes and the target node. Even FLAG [8] uses adversarial training to progressively incorporate adversarial gradient-based perturbations into the corresponding node features." }, { "figure_ref": [ "fig_2" ], "heading": "C. M-Evolve", "publication_ref": [ "b32" ], "table_ref": [], "text": "Zhou et al. in [33] proposed M-Evolve, a framework which optimizes pre-trained graph classifiers using an evolutionary method by combining the augmentation of graphs, the filtering out of bad graphs, and the retraining of the model.\nGraph augmentation. 
They generated graphs with modified edges by adding and removing $\lceil \beta \cdot m \rceil$ edges, where m represents the total number of edges in the original graph and β denotes the budget of edge modification. To select edges for addition to or removal from the graph, they defined a candidate set for each, $E^{c}_{add}$ and $E^{c}_{del}$ respectively. The construction of these candidate sets was done via two methods: random mapping (as a simple baseline) and motif similarity mapping. In motif similarity mapping, repeating patterns in the graph (called graph motifs) are modified in some way. M-Evolve uses open triads as the motif. An open triad $\wedge^{a}_{ij}$ represents a graph structure with a path of length 2 connecting vertex $v_i$ to $v_j$ when passing through $v_a$, such that there is no direct edge between $v_i$ and $v_j$ (see Figure 3). Edges are selected from the candidate sets via weighted random sampling. The weights for this are calculated by finding the similarity between two vertices: the closer they are, the more likely it is that an edge is added between them, and vice versa. The similarity is found using the Resource Allocation index $s_{ij}$,\n$s_{ij} = \sum_{k \in \eta(i) \cap \eta(j)} \frac{1}{d_k}$, with $S = \{ s_{ij} \mid \forall (v_i, v_j) \in E^{c}_{add} \cup E^{c}_{del} \}$,\nwhere η(i) represents the neighbors of $v_i$ (within one hop), and $d_k$ denotes vertex k's degree. The weights are then calculated as\n$w^{add}_{ij} = \frac{s_{ij}}{\sum_{s \in S} s}$, $w^{del}_{ij} = 1 - \frac{s_{ij}}{\sum_{s \in S} s}$.\nData filtration. M-Evolve filters out augmented graphs with low label reliability (a short illustrative sketch of this filtration step is given below). First, the classifier C is pretrained on the initial training data ($D_{train}$) and validation data ($D_{val}$). For a specified number of evolution iterations T, augmentations are performed. A prediction vector $p_i$ is found for each graph $G_i$ in $D_{val}$, which holds the probability of each class being correct for $G_i$. A confusion matrix of probabilities Q is calculated, where each entry holds the mean probability ($q_{ij}$) of a graph of correct class i being classified by C as class j, and $q_k$ denotes the average probability profile of graphs belonging to class k.\nLabel reliability is calculated for each graph in $D_{val}$. For a graph $G_i$ in $D_{val}$ with $y_i$ as the correct label, the label reliability ($r_i$) of this example is calculated as\n$r_i = p_i^{\top} q_{y_i}$.\nTo filter out graphs from the generated pool of graphs $D_{pool}$, a threshold θ is found such that only graphs with $r_i$ above θ will be added to the new training set $D^{new}_{train}$:\n$\theta = \arg\min_{\theta} \sum_{(G_i, y_i) \in D_{val}} \Gamma[(\theta - r_i) \cdot c(G_i, y_i)]$.\nHere, Γ(x) = 1 if x > 0 and 0 otherwise. If $C(G_i) = y_i$, then $c(G_i, y_i) = 1$; otherwise it is -1.\nThe evolutionary classifier C is then retrained using the data added at each evolutionary iteration. M-Evolve using the motif-similarity-based mapping performed better than several graph classification methods. The authors observed an increase in the dataset scale, smoother decision boundaries, and less fragmented decision regions." }, { "figure_ref": [], "heading": "IV. FEW SHOT LEARNING BASED METHODS", "publication_ref": [ "b33", "b34" ], "table_ref": [], "text": "Even without directly augmenting the domain-specific and/or task-specific data, previous research has been conducted that applies various specialized techniques to low-data scenarios. A lot of this research has been focused on general data types, but research has also been conducted in graph-data-specific domains. Such techniques often deal with low-data tasks and are tuned to generalize well on previously unseen data and classes. These techniques are often referred to as few-shot learning algorithms. 
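Before moving into the few-shot methods, the M-Evolve data-filtration step described above can be made concrete. The sketch below computes the label reliability r_i for a pool of augmented graphs and keeps those above a threshold; the construction of the class-probability matrix Q and the threshold search are simplified (a grid search stands in for the arg-min criterion), so this is only an illustrative reading of the method.

```python
import numpy as np

def label_reliability(pred_probs, labels, Q):
    """r_i = p_i^T q_{y_i}: agreement between a graph's class probabilities
    and the average probability profile Q[y_i] of its true class."""
    return np.einsum("ij,ij->i", pred_probs, Q[labels])

def find_threshold(val_probs, val_labels, Q, grid=np.linspace(0.0, 1.0, 101)):
    """Pick the threshold that penalizes the fewest validation graphs,
    mirroring the arg-min criterion with Gamma and c(G_i, y_i)."""
    r = label_reliability(val_probs, val_labels, Q)
    correct = val_probs.argmax(axis=1) == val_labels        # c = +1 if classified correctly
    # penalty: reject a correctly classified graph, or accept a misclassified one
    cost = [np.sum((t > r) & correct) + np.sum((t <= r) & ~correct) for t in grid]
    return grid[int(np.argmin(cost))]

def filter_pool(pool_probs, pool_labels, Q, theta):
    """Keep only augmented graphs whose label reliability exceeds theta."""
    return label_reliability(pool_probs, pool_labels, Q) > theta

# Toy usage with 3 classes: Q[i, j] = mean probability of class j for graphs of class i.
rng = np.random.default_rng(1)
val_probs = rng.dirichlet(np.ones(3), size=50)
val_labels = rng.integers(0, 3, size=50)
Q = np.vstack([val_probs[val_labels == c].mean(axis=0) for c in range(3)])
theta = find_threshold(val_probs, val_labels, Q)
keep = filter_pool(rng.dirichlet(np.ones(3), size=20), rng.integers(0, 3, 20), Q, theta)
```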
In the sections that follow, special attention has been paid to few-shot learning algorithms that are general enough to work well with graph data or are specifically designed for graph data.\nIt is rather important to note that while these few-shot learning algorithms are being studied in isolation with data augmentation techniques, in practice a combination of both would yield much better results. However, the question of which augmentation methods to match with which few-shot algorithms is a multi-faceted one, and it often boils down to the domain of the data and the quality and type of data available for the tasks.\nA thorough dive into FSL literature that could be adapted for the graph classification task follows in the sections below. We first state the few-shot graph classification problem objectively and then provide a rough nomenclature of the available graph classification techniques as they have been described in general FSL reviews like [34] and [35].\nThis classification divides the techniques into three main categories:\n1) Model based techniques: Models that train well on the few-shot data are often augmented with memory units so that prior domain-based information can be stored. This information will be helpful when training on a specific task which does not require a large amount of data points. 2) Metric-Learning based techniques: These techniques tackle a few-shot graph classification problem by \"learning to compare\" the inherent structure of graphs. Domain-specific graphs often have inherent patterns expressed in their structure and their classes. 3) Optimization based techniques: These techniques provide a good initialization parameters for a model. Once we optimize the set of initial parameters in the model, it becomes much easier to fine-tune the model to a FSL task that has similar but limited data. This parameter optimization step is often done in a \"meta-learning\" stage that comes before the main training stage. Literature in these methods also focuses on methods specific to optimizing on nodes of a graph." }, { "figure_ref": [], "heading": "A. Metric-Learning based methods", "publication_ref": [ "b35", "b36", "b38", "b39", "b40" ], "table_ref": [], "text": "Metric-learning based methods exploit the fact that graphs that belong to a specific class often have similar structural patterns. Thus, if we are able to extract such structural features (or metrics) from graphs, it would be easier to compare two graphs that lie in the same or different classes. For this to occur, we need efficient ways to embed graphs in a feature space and compare two graphs to check if they have the same class label.\n1) Graph Kernel Methods: Graph Kernel Methods aim to capture important features of the graph by quantifying its various properties. Most commonly, these Graph Kernels encode pairs of graphs at the same time and help identify how similar or different they are in overall structure. They provide mathematical formulae to capture an inner-product between graphs that quantifies this similarity. Graph Kernels were one of the first tools to be used to perform Machine Learning on graph data. Kernel-based learning algorithms like Support Vector Machines can now easily operate on graphs.\nWork by Haussler in [36] is one of the earliest known instances in literature to have applied kernels on discrete structures like graphs. A fundamental type of graph kernel is the random walk kernel, that performs random walks on the direct product of the pair of graphs. 
This product graph is as defined in the equation below Another notable type of graph kernel are the shortest-path graph kernels introduced in [37] by Borgwardt and Kriegel. They improve upon random walk kernels by providing precomputed shortest paths via the Floyd-Warshall algorithm. However, these are not ideal in cases where longest paths and average paths are more adequate. Thus, designing graph kernels tailored for specific tasks was still a necessity.\nShervashidze et al. probed further and introduced a family of efficient graph kernels that provided quick feature extraction for large graphs. The Weisfeiler-Lehman graph isomorphism test served as the foundation for this feature extraction method. These kernels are generally accepted to be the current stateof-the-art for graph classification.\n2) SuperClass: Since graph kernels have limited metric learning abilities and often require extensive computation to compute, they cannot be relied upon in every scenario. Instead, to determine patterns between graphs of different classes, Chauhan et al. in [39], used spectral graph theory to determine the patterns of connectivity in the discrete structures. Spectral analytics could help identify patterns within a single graph, however to identify patterns between different classes, Chauhan et al. performed graph clustering in the spectral dimension with the Lp Wasserstein distance as the distance metric. A prototype graph of each class is first generated using the mean spectral measure of each data point belonging to the graph. After that, the individual class prototypes are then clustered together using K-means++ [40]. Each of these clusters is assigned a super-class.\n3) CuCO: Chu et al. in [41] proposed a new framework called CuCo (using curriculum contrastive learning) for learning graph representations, with a special focus on negative sampling. It was an unsupervised/self-supervised learning approach for applications with limited labeled data.\nThe authors used four basic data augmentation techniques: dropping of nodes, perturbing edges, masking attributes, and sampling certain subgraphs. They strategically selected the node dropping and subgraph sampling techniques for molecular data, and used all four techniques for social network data. The model uses GNNs for the Graph Encoder which learns graph representations.\nContrastive learning, a self-supervised algorithm, is often used for representation learning. The process involves performing augmentations on a data instance, pairing them up, and labeling them as positive pairs, while pairing up two dissimilar data instances to get negative pairs. It then forces the embeddings of the instances of a positive pair to be closer to each other, and the embeddings of the instances of negative pairs to be farther apart.\nCuCo implements this by augmenting a graph G to get two generated graphs/views as a positive pair, while any other graph in the training set is paired with G to get the negative pair. Memory bank Q of size K represents all the negative samples in the training set, i.e., all the graphs except G. A similarity metric function (dot product in this case) sim(•, •) is employed on the pairs of embeddings. Based on this, the noise-contrastive estimation loss function is used as follows:\nwhere z i is the embedding/representation of graph G i , and {z i , z j } and {z i , z k } are the positive pairs and negative pairs respectively. τ denotes the temperature parameter. 
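A common noise-contrastive (InfoNCE-style) form of this objective, consistent with the description (positive-pair similarity in the numerator and the K memory-bank negatives in the denominator, scaled by the temperature τ), is sketched below; it is a generic formulation and not necessarily the exact expression used in [41].

```python
import torch
import torch.nn.functional as F

def nce_loss(z_i, z_j, negatives, tau=0.2):
    """InfoNCE-style loss for one anchor embedding z_i with positive z_j
    and a memory bank of K negative embeddings (dot-product similarity)."""
    pos = torch.dot(z_i, z_j) / tau                      # sim(z_i, z_j) / tau
    neg = negatives @ z_i / tau                          # sim(z_i, z_k) / tau for all K negatives
    logits = torch.cat([pos.unsqueeze(0), neg])          # positive first, then negatives
    # -log( exp(pos) / (exp(pos) + sum_k exp(neg_k)) )
    return -F.log_softmax(logits, dim=0)[0]

# Toy usage: 64-dimensional graph embeddings and a bank of K = 16 negatives.
torch.manual_seed(0)
z_i, z_j = torch.randn(64), torch.randn(64)
bank = torch.randn(16, 64)
loss = nce_loss(z_i, z_j, bank)
```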
This loss ensures that the positive pairs score higher in terms of similarity than the negative pairs. Curriculum learning is a training strategy where a machine learning model is trained from samples which are ordered by their level of difficulty. Easier samples are learned first and harder ones are sampled in the later stages of training. This is shown to improve the generalization capability of such models.\nChu et al. realised the impact of negative sampling and thus implemented curriculum learning to sample negative pairs strategically. Similarity was used as the scoring function to measure the difficulty of negative samples. The more similar two graphs are, the harder it will be to differentiate between them and classify the sample as a negative pair. Cosine similarity and dot product similarity were employed for this.\nTo confirm the benefits of using curriculum learning, the model was also tested using random ordering of samples, as well as an anti-curriculum order which feeds negative samples in the descending order of difficulty. They found that the curriculum order (ascending order of difficulty) performed best.\nIn addition to these methods, they added an early stop mechanism with patience π which will reduce whenever loss does not decrease. Due to ordered learning and the early stop mechanism, CuCo was found to learn faster.\nThe model was tested under the unsupervised representation learning setting and the transfer learning setting. The unsupervised learning evaluation was done on graph classification tasks where the embeddings learnt by the model were used to train an SVM classifier. The graph encoder used was a three-layer GIN. The datasets used for unsupervised learning were obtained from the TUDataset. CuCo outperformed the baselines on 6 out of 7 of the chosen datasets. CuCo was also found to be effective for transfer learning by achieving the best results on six of eight molecular property prediction datasets of the OGB benchmark." }, { "figure_ref": [], "heading": "B. Optimization-based methods", "publication_ref": [ "b41", "b41", "b41", "b42", "b42" ], "table_ref": [], "text": "Another class of techniques are ones that provide good initialization model parameters. These initial parameters are trained and optimized in a step before the main on-task training of the model, in a step called the meta-training stage. Metatraining is often carried out on tasks that are similar to the low-data task. This allows us to introduce an inductive bias on the set of parameters. This ensures that our parameters are initialized in such a way that they can converge quickly to the optimal set of parameters when the model is trained on lowdata tasks. Another concern is to also prevent the overfitting of the model parameters during the fine-tuning step.\nIn the following sections we mostly look at [42] and its derivative work by Ma et al. which adapts techniques used in [42] specially to graphs.\n1) Model Agnostic Meta Learner: The overarching goal of Few-Shot Learning is to provide Machine Learning algorithms the ability to generalize well to a small amount of data just like humans without overfitting to small datasets. Finn et al. in [42] presents a very general task structure and model agnostic algorithm that is able to provide state-of-the-art results on the initialization of parameters during the meta-learning stage. The method is general and independent of the model architecture or the problem type (classification, regression, or reinforcement learning). 
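To ground the procedure before discussing its requirements, the sketch below implements a first-order variant of the MAML update loop for a generic differentiable model. The full method in [42] backpropagates through the inner update; the first-order simplification and the toy sine-regression task sampler used here are illustrative assumptions.

```python
import numpy as np

def fomaml(theta, sample_task, grad_loss, alpha=0.01, beta=0.001, meta_iters=1000, task_batch=4):
    """First-order MAML: adapt to each task with one inner gradient step,
    then move the initialization theta toward the adapted solutions."""
    for _ in range(meta_iters):
        meta_grad = np.zeros_like(theta)
        for _ in range(task_batch):
            support, query = sample_task()                             # few-shot task split
            theta_task = theta - alpha * grad_loss(theta, *support)    # inner adaptation step
            meta_grad += grad_loss(theta_task, *query)                 # first-order outer gradient
        theta = theta - beta * meta_grad / task_batch                  # meta-update of the initialization
    return theta

# Toy usage: 1-D sine-regression tasks, polynomial-feature linear model, squared loss.
rng = np.random.default_rng(0)
def features(x): return np.stack([np.ones_like(x), x, x**2, x**3], axis=1)
def grad_loss(theta, x, y):
    phi = features(x)
    return 2 * phi.T @ (phi @ theta - y) / len(x)
def sample_task():
    amp, phase = rng.uniform(0.5, 2.0), rng.uniform(0, np.pi)
    def draw(n):
        x = rng.uniform(-2, 2, n)
        return x, amp * np.sin(x + phase)
    return draw(5), draw(5)

theta0 = fomaml(np.zeros(4), sample_task, grad_loss)
```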
The only restriction on the model is that it must be optimizable by some gradient-based method. Hence, this method can be readily used on models consisting of fully-connected neural networks, convolutional neural networks, and, in our use case, even graph neural networks.\nIn this technique, instead of directly training the parameters, we alter the initial parameters in such a way that maximum performance will be achieved in a minimum number of gradient-descent steps on the novel task. In other words, we train the initial parameters such that they will work well when fine-tuned on the data of the task during the training stage. This gives the desired result that a minimum number of steps of gradient descent are required at the training stage (also called the fine-tuning stage in some derivative literature).\nMAML achieves these goals by using the model parameters to build an internal representation of the tasks and learn features of the data that are often shared across these tasks. Since task-agnostic features are already being extracted at the meta-learning stage, the model becomes more sensitive to task-specific features during the training stage. This is described as rapid task adaptation in the paper.\nThis shifts the paradigm of pretraining away from finding a set of initial parameters that perform well on all tasks towards optimizing for parameters that converge to task-specific optimality as quickly as possible. (Fig. 4 shows the generic AS-MAML architecture as given in [43].)\n2) AS-MAML: An important issue with the approach taken by Finn et al. is that the learning-rate hyperparameters α and β (for the inner parameter optimization and the meta-optimization) are very difficult to determine and often have major effects on the performance of MAML. Ma et al. in [43] take a more automated approach to learning the step size during the meta-learning phase.\nThe approach proposed by Ma et al. (commonly referred to as AS-MAML) consists of two parts: the Adaptive Step Meta-Learner and a few-shot graph classifier. The adaptive step meta-learner is responsible for adjusting the GNN's step size and for adjusting how much information is learnt from each gradient step, based on the loss-function gradient and the current iteration number. The overall architecture is explained in Fig. 4. Reinforcement learning was applied to provide the optimal adaptation step for the meta-learner using the quality of the graph embedding. The quality of the graph embedding was determined using the Average Node Embedding (ANI) value, where an increase in the ANI value signifies an increase in the quality of the graph embedding that was created." }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In conclusion, this research paper presents a detailed survey of graph classification techniques in low-data scenarios. The review focuses on two major solutions to low-data constraints: Graph Data Augmentation techniques and Few-Shot Learning graph algorithms. We surveyed both rule-based and learning-based GDA approaches and discussed some specific algorithms like M-Evolve. Lastly, we covered few-shot learning techniques specially designed for graphs, such as metric-learning based methods (e.g., Graph Kernel Methods and CuCo) and optimization-based methods (like MAML and its derivative AS-MAML)." } ]
This survey paper presents a brief overview of recent research on graph data augmentation and few-shot learning. It covers various techniques for graph data augmentation, including node and edge perturbation, graph coarsening, and graph generation, as well as the latest developments in few-shot learning, such as meta-learning and model-agnostic meta-learning. The paper explores these areas in depth and delves into further sub-classifications. Rule-based approaches and learning-based approaches are surveyed under graph augmentation techniques. Few-shot learning on graphs is also studied in terms of metric-learning techniques and optimization-based techniques. In all, this paper provides an extensive array of techniques that can be employed in solving graph processing problems faced in low-data scenarios.
EXPLORING GRAPH CLASSIFICATION TECHNIQUES UNDER LOW DATA CONSTRAINTS: A COMPREHENSIVE STUDY
[ { "figure_caption": "Fig. 1 .1Fig. 1. A tree level summary of existing Graph Data Augmentation Methods.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Graph Structure Learning methodologies.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. An open triad, ∧ a ij . M-Evolve modifies such triads in a graph.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" } ]
Kush Kothari; Bhavya Mehta; Reshmika Nambiar; Seema Shrawne
[ { "authors": "J Zhou; G Cui; S Hu; Z Zhang; C Yang; Z Liu; L Wang; C Li; M Sun", "journal": "", "ref_id": "b0", "title": "Graph neural networks: A review of methods and applications", "year": "2018" }, { "authors": "T X Yu Rong; Wenbing Huang; J Huang", "journal": "", "ref_id": "b1", "title": "Dropedge: Towards deep graph convolutional networks on node classification", "year": "2020" }, { "authors": "S Thakoor; C Tallec; M G Azar; M Azabou; E L Dyer; R Munos; P Veličković; M Valko", "journal": "", "ref_id": "b2", "title": "Large-scale representation learning on graphs via bootstrapping", "year": "2021" }, { "authors": "T Zhao; X Tang; D Zhang; H Jiang; N Rao; Y Song; P Agrawal; K Subbian; B Yin; M Jiang", "journal": "PMLR", "ref_id": "b3", "title": "Autogda: Automated graph data augmentation for node classification", "year": "2022-12-12" }, { "authors": "W Feng; J Zhang; Y Dong; Y Han; H Luan; Q Xu; Q Yang; E Kharlamov; J Tang", "journal": "Curran Associates, Inc", "ref_id": "b4", "title": "Graph random neural networks for semi-supervised learning on graphs", "year": "2020" }, { "authors": "Y You; T Chen; Y Sui; T Chen; Z Wang; Y Shen", "journal": "Curran Associates, Inc", "ref_id": "b5", "title": "Graph contrastive learning with augmentations", "year": "2020" }, { "authors": "T Fang; Z Xiao; C Wang; J Xu; X Yang; Y Yang", "journal": "", "ref_id": "b6", "title": "Dropmessage: Unifying random dropping for graph neural networks", "year": "2022" }, { "authors": "K Kong; G Li; M Ding; Z Wu; C Zhu; B Ghanem; G Taylor; T Goldstein", "journal": "", "ref_id": "b7", "title": "Robust optimization as data augmentation for large-scale graphs", "year": "2020" }, { "authors": "H Guo; Y Mao", "journal": "", "ref_id": "b8", "title": "ifmixup: Interpolating graph pair to regularize graph classification", "year": "2021" }, { "authors": "J Park; H Shim; E Yang", "journal": "", "ref_id": "b9", "title": "Graph transplant: Node saliency-guided graph mixup with local structure preservation", "year": "2022-06" }, { "authors": "X Han; Z Jiang; N Liu; X Hu", "journal": "", "ref_id": "b10", "title": "G-mixup: Graph data augmentation for graph classification", "year": "2022" }, { "authors": "H Zhang; M Cisse; Y N Dauphin; D Lopez-Paz", "journal": "", "ref_id": "b11", "title": "mixup: Beyond empirical risk minimization", "year": "2017" }, { "authors": "V Verma; A Lamb; C Beckham; A Najafi; I Mitliagkas; D Lopez-Paz; Y Bengio", "journal": "PMLR", "ref_id": "b12", "title": "Manifold mixup: Better representations by interpolating hidden states", "year": "2019-06-15" }, { "authors": "X Zhu; Z Ghahramani", "journal": "", "ref_id": "b13", "title": "Learning from labeled and unlabeled data with label propagation", "year": "2003" }, { "authors": "X Zhu; J Lafferty; R Rosenfeld", "journal": "", "ref_id": "b14", "title": "Semi-supervised learning with graphs", "year": "2005" }, { "authors": "K Xu; C Li; Y Tian; T Sonobe; K -I. Kawarabayashi; S Jegelka", "journal": "", "ref_id": "b15", "title": "Representation learning on graphs with jumping knowledge networks", "year": "2018" }, { "authors": "S Liu; R Ying; H Dong; L Li; T Xu; Y Rong; P Zhao; J Huang; D Wu", "journal": "", "ref_id": "b16", "title": "Local augmentation for graph neural networks", "year": "2021" }, { "authors": "X.-D Zhang", "journal": "", "ref_id": "b17", "title": "The laplacian eigenvalues of graphs: a survey", "year": "2011" }, { "authors": "V Zobel; J Reininghaus; I Hotz", "journal": "J. 
WSCG", "ref_id": "b18", "title": "Generalized heat kernel signatures", "year": "2011" }, { "authors": "B Perozzi; R Al-Rfou; S Skiena", "journal": "ACM", "ref_id": "b19", "title": "DeepWalk", "year": "2014-08" }, { "authors": "L Franceschi; M Niepert; M Pontil; X He", "journal": "PMLR", "ref_id": "b20", "title": "Learning discrete structures for graph neural networks", "year": "2019-06-15" }, { "authors": "A Kazi; L Cosmo; S.-A Ahmadi; N Navab; M M Bronstein", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b21", "title": "Differentiable graph module (DGM) for graph convolutional networks", "year": "2023-02" }, { "authors": "L Chen; C Tao; R Zhang; R Henao; L C Duke", "journal": "PMLR", "ref_id": "b22", "title": "Variational inference and model selection with generalized evidence bounds", "year": "2018-07-15" }, { "authors": "P Elinas; E V Bonilla; L Tiao", "journal": "Curran Associates, Inc", "ref_id": "b23", "title": "Variational inference for graph convolutional networks in the absence of graph data and adversarial settings", "year": "2020" }, { "authors": "Y Chen; Y Zhang; Y Bian; H Yang; K Ma; B Xie; T Liu; B Han; J Cheng", "journal": "", "ref_id": "b24", "title": "Learning causally invariant representations for out-of-distribution generalization on graphs", "year": "2022" }, { "authors": "G Liu; T Zhao; J Xu; T Luo; M Jiang", "journal": "ACM", "ref_id": "b25", "title": "Graph rationalization with environment-based augmentations", "year": "2022-08" }, { "authors": "J Sun; B Wang; B Wu", "journal": "", "ref_id": "b26", "title": "Automated graph representation learning for node classification", "year": "2021" }, { "authors": "T Zhao; X Tang; D Zhang; H Jiang; N Rao; Y Song; P Agrawal; K Subbian; B Yin; M Jiang", "journal": "PMLR", "ref_id": "b27", "title": "Autogda: Automated graph data augmentation for node classification", "year": "2022-12-12" }, { "authors": "O D Kose; Y Shen", "journal": "", "ref_id": "b28", "title": "Fair node representation learning via adaptive data augmentation", "year": "2022" }, { "authors": "Y Wang; Y Min; E Shao; J Wu", "journal": "", "ref_id": "b29", "title": "Molecular graph contrastive learning with parameterized explainable augmentations", "year": "2021" }, { "authors": "H Dai; H Li; T Tian; X Huang; L Wang; J Zhu; L Song", "journal": "PMLR", "ref_id": "b30", "title": "Adversarial attack on graph structured data", "year": "2018-07-15" }, { "authors": "F Feng; X He; J Tang; T.-S Chua", "journal": "", "ref_id": "b31", "title": "Graph adversarial training: Dynamically regularizing based on graph structure", "year": "2019" }, { "authors": "J Zhou; J Shen; S Yu; G Chen; Q Xuan", "journal": "IEEE Transactions on Network Science and Engineering", "ref_id": "b32", "title": "M-evolve: Structural-mappingbased data augmentation for graph classification", "year": "2021-01" }, { "authors": "W Chen; Y Liu; Z Kira; Y F Wang; J Huang", "journal": "CoRR", "ref_id": "b33", "title": "A closer look at few-shot classification", "year": "2019" }, { "authors": "Y Wang; Q Yao; J T Kwok; L M Ni", "journal": "ACM Comput. 
Surv", "ref_id": "b34", "title": "Generalizing from a few examples: A survey on few-shot learning", "year": "2020-06" }, { "authors": "D Haussler", "journal": "", "ref_id": "b35", "title": "Convolution kernels on discrete structures ucsc-crl-99-10", "year": "2001" }, { "authors": "K Borgwardt; H Kriegel", "journal": "", "ref_id": "b36", "title": "Shortest-path kernels on graphs", "year": "2005" }, { "authors": "N Shervashidze; P Schweitzer; E J Van Leeuwen; K Mehlhorn; K M Borgwardt", "journal": "Journal of Machine Learning Research", "ref_id": "b37", "title": "Weisfeiler-lehman graph kernels", "year": "2011" }, { "authors": "J Chauhan; D Nathani; M Kaul", "journal": "CoRR", "ref_id": "b38", "title": "Few-shot learning on graphs via super-classes based on graph spectral measures", "year": "2020" }, { "authors": "D Arthur; S Vassilvitskii", "journal": "Society for Industrial and Applied Mathematics", "ref_id": "b39", "title": "K-means++: The advantages of careful seeding", "year": "2007" }, { "authors": "G Chu; X Wang; C Shi; X Jiang", "journal": "", "ref_id": "b40", "title": "Cuco: Graph representation with curriculum contrastive learning", "year": "2021" }, { "authors": "C Finn; P Abbeel; S Levine", "journal": "", "ref_id": "b41", "title": "Model-agnostic meta-learning for fast adaptation of deep networks", "year": "2017" }, { "authors": "N Ma; J Bu; J Yang; Z Zhang; C Yao; Z Yu", "journal": "CoRR", "ref_id": "b42", "title": "Few-shot graph classification with model agnostic meta-learning", "year": "2020" } ]
[ { "formula_coordinates": [ 1, 341.41, 564.14, 181.81, 29.35 ], "formula_id": "formula_0", "formula_text": "R i N (v) = AGGREGRATE R i-1 u |u ∈ N (v) R i v = UPDATE R i-1 v , h i N (v)" }, { "formula_coordinates": [ 4, 51.29, 304.2, 246.41, 27.27 ], "formula_id": "formula_1", "formula_text": "s ij = k∈η( i) ∩η( j) 1 d k , S = {s ij |∀( v i , v j ) ∈ E c add ∪ E c del }" }, { "formula_coordinates": [ 4, 98.18, 370.34, 150.56, 24.72 ], "formula_id": "formula_2", "formula_text": "w add ij = s ij s∈S , w del ij = 1 - s ij s∈S" }, { "formula_coordinates": [ 4, 152.01, 567.48, 43.98, 12.69 ], "formula_id": "formula_3", "formula_text": "r i = p ⊤ i q yi" }, { "formula_coordinates": [ 4, 73.72, 643.84, 199.89, 21.2 ], "formula_id": "formula_4", "formula_text": "θ = arg min θ ( Gi,yi) ∈D val Γ[ ( θ -r i ) • c( G i , y i ) ]" } ]
10.1007/978-3-030-25312-7_14
2023-11-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b2", "b3", "b4", "b5", "b6", "b8", "b9", "b11", "b10", "b11", "b12", "b12", "b13", "b14", "b16", "b16", "b17", "b18", "b19", "b19", "b20", "b21", "b22", "b23", "b24", "b25", "b26", "b27", "b28", "b29", "b30", "b31", "b32" ], "table_ref": [], "text": "The concept of creating a virtual copy of a complete Cyber-Physical System (CPS) opens up a multitude of possibilities. This involves the capacity to virtualize the CPS in a comprehensive virtual environment, which allows for real-time assessments of the physical environment and vice versa (Tao et al., 2019;Biffl et al., 2019;Biesinger et al., 2019;Lo et al., 2021). By constantly learning from the physical environment, it provides reliable and precise information about the actual scenario. This process, known as the twinning process or developing a digital twin (DT), represents virtual cloning (Ors et al., 2020;He et al., 2019;Gao et al., 2022;Tao et al., 2019). First proposed by Michael Grieves and John Vickers during a presentation at NASA in 2017 (Grieves and Vickers, 2016), NASA quickly became one of the pioneering enterprises to utilize the digital twin concept for space exploration missions.\nThe concept of a digital twin revolves around creating a real-time link between a physical scenario and its virtual equivalent (Haag and Anderl, 2018;Liu et al., 2021). A survey of relevant literature reveals that digital twins are widely regarded as a critical technology for the industries of the future (Ganguli and Adhikari, 2020;Qi et al., 2021;Jones et al., 2020). Since 2017, there has been a marked increase in the number of publications discussing the digital twin concept (Jones et al., 2020), with a significant portion focusing on manufacturing and product life cycle assessment domains. Digital twins are also gaining traction in the Process Systems Engineering field (Melesse et al., 2021;Madni et al., 2019). However, a primary challenge in these systems is the need to interactively solve complex numerical problems arising from partial differential mathematical models representing the systems.\nWithin the realm of digital twins, Artificial Intelligence (AI) plays a crucial role in facilitating the modeling, representation, and learning of complex behaviors and interactions among system components and data (Farsi et al., 2020;Rathore et al., 2021). While phenomenological models can achieve similar outcomes, the extensive computational effort typically needed to solve these models numerically becomes impractical for real-time information exchange. Moreover, AI models offer the advantage of continuous learning from the system, thus providing the Cyber-Physical System (CPS) with adaptive capabilities. This approach, commonly called online learning, is an efficient tool and strategy to leverage the low computational effort needed for running an AI model online (Rathore et al., 2021;Gong et al., 2022).\nAs a result, there is a burgeoning demand for research on the development and integration of AI and digital twins (Zohdi, 2020;Goodwin et al., 2022). However, introducing online learning to the system increases the demand for computational power, which grows as the frequency of online learning activation increases (Goodwin et al., 2022;Song et al., 2022). Therefore, resource management must be carefully implemented in such scenarios to preserve the benefits of using surrogate models. 
This challenge becomes more pronounced when considering the prediction uncertainty of AI models, which typically involves multiple evaluations of a probability distribution of the model's parameters (Thelen et al., 2023;Gawlikowski et al., 2023;Kabir et al., 2018).\nOverall, models developed using machine learning have become increasingly popular for performing inference and decision-making tasks in various fields. However, thoroughly evaluating their reliability and effectiveness is necessary to apply these Artificial Intelligence (AI) strategies in practice. The predictions generated by these models can be affected by noise and errors inherent to the inference and modeling methods used. Therefore, it is of utmost importance to consider AI models' uncertainty and possible limitations when making critical decisions based on their predictions. Therefore, it is highly desirable to represent uncertainty reliably in any AI-based system (Pawlowski et al., 2018;Costa et al., 2023).\nIn response to these demands, there is a growing interest in developing models that are not only computationally efficient but also robust, adaptive, and endowed with a degree of cognition (Lin et al., 2021). Such models can self-adapt when discrepancies between their predictions and the current measured state are detected. The need for robustness in situations involving AI models is an increasingly prevalent topic in the literature. For example, Costa et al. (2022) proposed a robust learning methodology for uncertainty-aware scientific machine learning models, considering sources of uncertainty such as the absence of a theory, causal models, sensitivity to data corruption or imperfection, and computational effort. They applied this methodology to develop soft sensors for a polymerization process, demonstrating the identified soft sensors' resilience to uncertainties. Gneiting et al. (2007) introduced a methodology for calibrating the distribution of a known random variable, addressing issues related to non-deterministic variables and their impact on AI predictions. However, these studies primarily consider an offline environment with a virtually unlimited amount of computational resources available.\nFurthermore, knowing a model's prediction uncertainty is crucial as it provides valuable information to assess its reliability and limitations. Additionally, prediction uncertainty enables informed decision-making based on the model's predictions. By evaluating the model's reliability based on its uncertainty, it is possible to identify situations in which it is more accurate or prone to errors. This information is critical for identifying areas where the model requires improvement or additional data collection may be necessary to enhance its accuracy (Woodcock et al., 2021;Rahman et al., 2021). In particular, CPS systems enhanced by robust digital twins are increasingly in demand in the oil and gas industry, especially for offshore exploration where equipment is located hundreds of meters underwater and inaccessible (Wanasinghe et al., 2020;Knebel et al., 2023;10., 2019). This necessitates reliable and precise systems operating under stringent economic, safety, and environmental constraints. Moreover, the ecological impact of accidents in exploration fields is critical, heightening the need for reliability and safety in these processes. 
For instance, gas-lift systems, an artificial lift technique used in the oil and gas industry to enhance the production of hydrocarbons from wells, present several sources of uncertainty that can affect decision-making and optimal operation. Despite these advancements, there is a lack of reports in the literature addressing the robustness and uncertainty of digital twin strategies. Additionally, there is a dearth of studies examining the concise integration of techniques such as online learning, transfer learning, and robustness assessment when implementing a digital twin. In this context, the present work proposes a digital twin framework for optimal and autonomous decision-making applied to a gas-lift process that employs:\n1. Offline training to identify the ML models in an environment using computationally intensive data. 2. Bayesian inference to construct nonlinear model parameter probability distribution functions (PDFs). 3. Monte Carlo simulations to determine ML model prediction uncertainty. 4. Transfer learning to deploy the identified model and its corresponding uncertainty in an online environment distinct from its original training space.\n5. Reducing model space with statistical confidence to alleviate the computational burden associated with online learning. 6. Cognitive tack to imbue the system with cognition, enabling awareness of its predictions and data received from the plant. 7. Online learning to update the model structure and correct any drift that the DT might identify during the operating campaign." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Offline Digital Twin Identification", "publication_ref": [], "table_ref": [], "text": "In the methodology proposed in this work, offline identification serves as a foundational step in establishing the digital twin. It involves training machine learning (ML) models in a computationally intensive environment using historical and potentially large datasets. This process is performed offline due to the significant computational resources required. Hence, offline training allows the ML models to learn complex patterns and relationships within the data, ensuring high accuracy in their predictions. Once the ML models are trained, they are integrated into the digital twin framework, serving as the brain behind the digital twin's decision-making abilities. This section will provide the methodological details regarding the proposed offline identification step." }, { "figure_ref": [ "fig_0" ], "heading": "Design of Experiments and Data Collection", "publication_ref": [ "b33", "b34", "b35", "b36" ], "table_ref": [], "text": "The initial step in developing artificial neural networks involves acquiring data pertinent to the process of interest. During this stage, it is crucial to gather a substantial amount of data to comprehensively represent the operational domain of the process while accounting for exceptions and boundary conditions within the problem domain. In this regard, diversifying data significantly enhances the quality of predictions made by machine learning models (Klein and Rossin, 1999).\nIn this study, a previously validated phenomenological gas lift model was employed to generate synthetic data. This data, produced through simulations or computational algorithms, offers a cost-effective alternative to real historical data when there is a scarcity of volume, quality, and variability. 
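A common way to spread such simulated experiments evenly over the input domain is Latin hypercube sampling, which is discussed further below; a minimal Python sketch is given here. SciPy's qmc module stands in for MATLAB's lhsdesign, and the gas-lift input names and ranges are purely illustrative placeholders rather than the operating envelope used in this work.

```python
import numpy as np
from scipy.stats import qmc

# Illustrative input ranges for a gas-lift simulation (placeholder names and values).
bounds = {
    "gas_injection_rate": (0.5, 3.0),     # e.g. kg/s
    "production_choke":   (0.1, 1.0),     # valve opening fraction
    "reservoir_pressure": (150.0, 250.0)  # e.g. bar
}
lower = [lo for lo, _ in bounds.values()]
upper = [hi for _, hi in bounds.values()]

sampler = qmc.LatinHypercube(d=len(bounds), seed=42)
unit_samples = sampler.random(n=500)              # stratified samples in [0, 1)^d
design = qmc.scale(unit_samples, lower, upper)    # map to the physical operating ranges

# Each row of `design` is one synthetic experiment to be run through the
# phenomenological gas-lift model to generate input/output training data.
inputs = dict(zip(bounds.keys(), design.T))
```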
On the other hand, as this model has been previously validated in a pilot unit, it will be employed as a virtual plant in this study, utilizing a software-in-the-loop (SIL) approach to serve as an environment to emulate with reliability the online implementation of a digital twin.\nAnother essential aspect to consider during the data acquisition phase is the selection of input variables for the model. It is critical to carefully determine the combination of variables used to gather system output data. Inputs should be generated without cross-correlations, as these correlations can result in data discrepancies, obscure the process behavior, or even inflate the dimension of inputs and skew the neural network training. Furthermore, poorly distributed data can lead to overfitting of the identified models. In this context, Design of Experiments (DoE) (Hicks, 1964) is a systematic approach to experiment design and data collection, with the goal of improving efficiency and accuracy. While DoE is commonly used in physical, chemical, and biological experiments, it has also been widely explored for acquiring data in the context of neural networks. This approach aims to optimize the information obtained from the input space while avoiding unintentional correlations and ensuring samples are uniformly distributed within the operating space of the system.\nTo achieve this, the latin hypercube sampling (LHS) (Stein, 1987;Owen, 1994) algorithm was utilized. LHS is a powerful technique for generating quasi-random samples that stratify data from a specified distribution and probability range. Its efficient stratification capabilities enable us to assess the full range of process behavior with fewer points than pure random sampling. Consequently, LHS allows for an efficient and representative sample of the multivariate parameter space.\nSeveral works in the literature approach using LHS to generate synthetic data in different areas of knowledge, especially in the chemical and industrial processes, aiming at constructing neural networks. In this way, using LHS to generate synthetic data can be a valuable tool to improve the accuracy and reliability of neural network models.\nThe 𝑙ℎ𝑠𝑑𝑒𝑠𝑖𝑔𝑛 function in MATLAB 2022b was used to implement LHS. The resulting input space was designed to extract the most information while minimizing unintended correlations. In Figure 1, shows a visual illustration of a 3D sample from the latin hypercube sampling method. " }, { "figure_ref": [], "heading": "Predictor structure identification", "publication_ref": [ "b37", "b38", "b39", "b40", "b41", "b42" ], "table_ref": [], "text": "Choosing the appropriate data structure is a crucial step in any modeling approach. It involves identifying the predictor type and determining its corresponding embedding dimensions. One popular model for predicting nonlinear dynamic time series systems is the nonlinear autoregressive network with exogenous inputs (NARX), which first introduced citeLeontaritis. NARX has gained widespread adoption due to its effectiveness and versatility in modeling complex systems. In the realm of chemical and industrial systems, NARX networks demonstrate the efficiency and ability to identify long-term patterns (Menezes and Barreto, 2008;Hang Xie et al., 2009).\nNARX predictors are a form of predictor that incorporates feedback from the predicted output as input to the hidden layers during subsequent iterations. 
This feedback mechanism allows the network to model the temporal dynamics of input data and predict future output values while considering other influencing factors.\nTo identify a NARX predictor, a sequence of input data with corresponding exogenous inputs, delays, and their respective outputs are needed. The NARX prediction model can be enhanced by incorporating exogenous inputs, which helps it to capture dynamic behavior. This approach avoids overburdening the model's nonlinear function approximation with internal dynamics while still allowing it to track changes in the system. Additionally, the literature suggests that incorporating NARX structures with recurrent models can improve model predictability and lead to a more streamlined nonlinear model. This approach has been shown to be effective in enhancing the performance of AI models in various process systems applications (Rebello et al., 2022;Nogueira et al., 2018). By reducing the number of layers and weights needed, this approach reduces computational costs, making NARX predictors a practical choice for online applications.\nGiven these benefits, we used the NARX predictor in our work. Its ability to incorporate exogenous inputs and accurately capture system dynamics while keeping a small nonlinear model structure is an ideal choice for constructing a digital twin.\nThe mathematical expression for a NARX network can be represented as Equation 1:\nŷ(𝑡) = 𝑓 (𝑦(𝑡 -1), 𝑦(𝑡 -2), … , 𝑦(𝑡 -𝑁 𝑏 ), 𝑢(𝑡 -1), 𝑢(𝑡 -2), … , 𝑢(𝑡 -𝑁 𝑎 ) + 𝑒(𝑡),(1)\nwhere 𝑦(𝑡) represents the desired output variable, ŷ(𝑡) is the predicted output, 𝑢(𝑡) is the model's input variable, 𝑁 𝑎 and 𝑁 𝑏 are the predictor embedding dimensions, defined as the input, and output variable time delays, and 𝑒(𝑡) is the additive error.\nThe performance of a predictor is independent of the choice of the type of nonlinear function approximator to be used. Therefore, defining the predictor's embedding dimensions (𝑁 𝑎 ) and (𝑁 𝑏 ) is an essential step. Despite the crucial role of predictor parameters in accurately identifying dynamic systems, their definition and estimation are often overlooked in AI modeling literature. This oversight can lead to inaccuracies and errors in the modeling process. Therefore, it is important to emphasize the significance of defining and estimating predictor parameters in AI modeling to ensure the reliability and validity of the results. In this study, the Lipschitz coefficient (𝑞 (𝑛) 𝑗 ) proposed by He and Asada (1993) was used to characterize the embedded nonlinear relationship between inputs and outputs of a complex dynamic system and identify the predictor embedding dimensions. The Lipschitz coefficient is calculated by the ratio of the difference between function output (𝑦) values and the distances between the respective inputs (𝑥), according to Equation 2.\n𝑞 (𝑚) 𝑗 = |𝛿𝑦| √ (𝛿𝑥 1 ) 2 + ... + (𝛿𝑥 𝑚 ) 2 = | | 𝑓 1 𝛿𝑥 1 + ... + 𝑓 𝑚 𝛿𝑥 𝑚 | | √ (𝛿𝑥 1 ) 2 + ... + (𝛿𝑥 𝑚 ) 2 , (2\n)\nwhere 𝑚 represents the number of input variables in the input-output formulation.\nThe Lipschitz Index is used to identify the ideal number of delays. 
This index is calculated according to Equation 3:\n𝑞 (𝑛) = ( 𝑝 ∏ 𝑘=1 √ 𝑛𝑞 𝑗 (𝑘) (𝑚) ) ( 1 𝑝 ) , (3\n)\nwhere 𝑛 is the number of delays considered in the variables, 𝑝 is a parameter usually between 0.01 N and 0.02 N and 𝑞(𝑘) (𝑛) is the k-th most significant Lipschitz coefficient from all 𝑞 (𝑚) 𝑗 calculated in Equation 2The method consists of testing different values of the delay number, represented by 𝑛, and calculating the value of the Lipschitz index for each tested delay value. The goal is to determine if there is a significant difference between the Lipschitz index values calculated for different values of 𝑛. Based on these calculations, it is possible to identify the first index that indicates a region in which variations of 𝑛 do not significantly affect the calculated value of the Lipschitz index. This index corresponds to the ideal number of delays desired for the inputs.\nIt is essential to highlight that using Lipschitz metrics in constructing and analyzing mathematical models, whether for machine learning or other types, is important for providing information that ensures stability and convergence of training optimization algorithms and model adjustments.\nThis work adopted the Multiple-Input Single-Output (MISO) (Xia et al., 2019) strategy due to its ease of identification and real-time training for neural networks." }, { "figure_ref": [ "fig_1" ], "heading": "Nonlinear model Hyperparameters Identification", "publication_ref": [ "b43", "b44" ], "table_ref": [ "tab_0" ], "text": "After collecting and defining the predictors, the next step is to define the hyperparameters that shape the nonlinear model's structure. These hyperparameters can be broadly categorized into two groups: model parameters and algorithm parameters. Model parameters, which are established before training begins, determine the network's architecture and include the number of layers, the number of neurons per layer, the layer type, the initial learning rate, the batch size, the number of epochs, and other essential features. These parameters are crucial in identifying a good neural network, as they directly impact the model's function, structure, and performance. The set of hyperparameters comprises discrete and continuous variables, which makes the appropriate choice challenging, considering the number and type of variable.\nIn contrast, algorithm hyperparameters are the internal parameters that are updated during the learning process. These parameters are adjusted during training, such as regularization parameters, learning rate schedules, momentum, and optimization algorithms. Proper tuning of these hyperparameters can help the network generalize better and avoid overfitting.\nIn the field of machine learning, it is commonplace to use trial and error methods, such as random search and grid search, for hyperparameter tuning (Bergstra and Bengio, 2012;Li et al., 2018). However, these methods are computationally expensive and inefficient, making them less than ideal.\nA promising alternative is the HYPERBAND method, which has gained significant attention in hyperparameter optimization due to its superior efficiency and precision. HYPERBAND is an optimization algorithm that utilizes random sampling and early stopping of model training to minimize the number of evaluated hyperparameter combinations. This method discards low-performing models while allowing high-performing models to continue in the optimization process. 
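One possible realization of this successive-halving scheme is the Hyperband implementation in the keras-tuner library, sketched below. The search ranges and layer counts are illustrative stand-ins rather than the exact search space of Table 1, and the data arrays referenced in the comments are assumed to be the lagged NARX regressor matrices prepared earlier.

```python
import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    """Builds a dense NARX-style regressor whose structure is chosen by the tuner."""
    model = tf.keras.Sequential()
    for i in range(hp.Int("num_layers", 1, 4)):
        model.add(tf.keras.layers.Dense(
            units=hp.Int(f"units_{i}", 8, 128, step=8),
            activation=hp.Choice(f"activation_{i}", ["relu", "tanh", "sigmoid"])))
    model.add(tf.keras.layers.Dense(1))  # single output (MISO structure)
    lr = hp.Float("learning_rate", 1e-4, 1e-2, sampling="log")
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mse")
    return model

tuner = kt.Hyperband(build_model, objective="val_loss",
                     max_epochs=50, factor=3, project_name="narx_tuning")

# X_train, y_train, X_val, y_val are assumed to hold the lagged NARX inputs/outputs.
# tuner.search(X_train, y_train, validation_data=(X_val, y_val),
#              callbacks=[tf.keras.callbacks.EarlyStopping(patience=5)])
# best_model = tuner.get_best_models(num_models=1)[0]
```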
As the search continues, resources are gradually allocated to the most promising models until the best hyperparameters in the search space are identified.\nOverall, HYPERBAND's resource allocation strategy optimizes hyperparameters efficiently, reducing the computational cost and time associated with traditional methods like random search and grid search. Its popularity among machine learning practitioners is on the rise, and it is expected to play an essential role in future hyperparameter optimization studies.\nBefore using the HYPERBAND optimization algorithm, a preliminary step is to define the hyperparameters of interest and their corresponding search spaces. These hyperparameters may include the learning rate, the number of neural network layers, the batch size, and other parameters that can impact the model's performance. Each hyperparameter is defined within a specific search space, which usually represents a range of possible values for that parameter. By defining these search spaces, the HYPERBAND algorithm can explore the different combinations of hyperparameters and find the optimal set that produces the best model performance.\nChoosing an appropriate search space and parameter set is important to achieve more precise and computationally efficient results. It is necessary to balance the search for space exploration and the intensification of training in specific areas. Otherwise, the algorithm may save time and effort on parameters that will not significantly impact the model's performance. Furthermore, it would increase the probability of overfitting. For this reason, in this work, the hyperparameter's Initial learning rate, number of dense layers, activation function in each layer, and number of neurons in each layer were selected to find the optimal set of parameters for the model.\nIn the present study, the hyperparameter search space is represented in Table 1. To fine-tune a neural network model using the HYPERBAND algorithm, it's crucial to have access to the training, validation, and test sets. In the present methodology, these sets were obtained during the data acquisition phase and organized based on the chosen predictors using the Lipschitz Index.\nDuring the optimization process, the HYPERBAND algorithm iteratively tunes the model's hyperparameters to improve its performance with respect to the defined objective function. This objective function is typically evaluated using the training and validation data with the hyperparameters chosen within the specified search space. The objective function in the present case includes a loss function that measures the model's performance on both the training and validation sets.\nAfter completing the optimization process, the model's performance is assessed using the test data to determine its ability to generalize to new datasets. The model with the optimal hyperparameters can then be chosen as the final model for the uncertainty assessment. Figure 2 presents a schematic representation of the methodology presented in this section." }, { "figure_ref": [], "heading": "Markov Chain Monte Carlo Uncertainty Assessment", "publication_ref": [ "b45", "b46", "b47", "b48" ], "table_ref": [], "text": "During neural network training, an optimization process is performed to adjust various parameters known as weights and biases. This process involves the repetition of several epochs, in which the weights are adjusted to achieve the desired performance of the neural network. 
Optimization of weights is often performed using techniques such as stochastic gradient descent, which adjusts weights based on prediction error. Once the neural network has been trained, the final weights are used to make predictions on new input data. As these weights are stochastic variables, they will have an associated probability distribution, making it possible, even though usually neglected, to identify the uncertainty associated with the parameters and the propagation for model prediction.\nOverall, the literature presents several methods to evaluate uncertainty in model predictions. Among these methods, the Bayesian method combines information from an a priori probability distribution with sample information produced in a posterior probability distribution of one or more parameters in a parametric space. This Bayesian approach offers a more comprehensive and complete view regarding meditation, allowing the inclusion of previous information about the parameters. In contrast, a frequentist approach, which might include simplifications such as the Least Squares and Maximum Likelihood Methods, does not provide a probability distribution for the parameters but instead assigns a fixed value to them. Therefore, in this work, we chose Bayesian inference to be used in the proposed methodology for identifying robust digital twins.\nIn the Bayesian approach (Finkelstein and Fairley, 1970;Lampinen and Vehtari, 2001), the true value of the parameters 𝜽 is unknown. Therefore, it is possible to quantify the uncertainties associated with the values of 𝜽 in terms of probability distributions (𝑃 (𝜽)). This approach is advantageous because it allows for the incorporation of prior information about the parameters before data acquisition by assigning a probability distribution. However, when no prior information is available or a more conservative scenario is desired, a non-informative prior can be used. Thus, ensuring that the posterior distribution is not influenced by unreliable or subjective information.\nOnce the prior is defined, the likelihood function is determined to obtain the posterior density distributions of 𝜽 so that any information regarding the parameters 𝜽 can be obtained from the posterior probability density function (PDF). The process of Bayesian inference involves using reference data to update the prior probability distribution to obtain the posterior probability distribution. This is achieved by applying Bayes' theorem (Swinburne, 2004;Koch, 1990) represented in Equation 4." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "𝐠 𝜽 ( 𝜼| 𝑫, 𝑰) ∝ 𝐿( 𝜼| 𝑫)𝒈 𝜽 ( 𝜼| 𝑰),", "publication_ref": [ "b49", "b50" ], "table_ref": [], "text": "(4)\nwhere 𝜂 represents sampled values of 𝜽, 𝐿 is the likelihood function, 𝒈 𝜽 ( 𝜼| 𝑰) are the prior distributions of 𝜽 that are a new observation of 𝜽, and 𝒈 𝜽 ( 𝜼| 𝑫, 𝑰) represents the posterior probability distribution. In this work, the likelihood function used was the Mean Squared Error (MSE) as approved Equation 5:\n𝐿(𝜼 | 𝑫) = 1 𝑛 𝑛 ∑ 𝑖=1 (𝑦 𝑖 -ŷ𝑖 ) 𝑇 (𝑦 𝑖 -ŷ𝑖 ),(5)\nwhere 𝑦 𝑖 is the 𝑖𝑡ℎ observed value, ŷ𝑖 is the corresponding predicted value, and 𝑛 is the number of observations. The posterior PDF of each parameter 𝜃 𝑖 of the vector 𝜽, 𝒈 𝜽 ( 𝜼| 𝑫, 𝑰) are obtained from the marginal posterior density function 𝒈 𝜽 ( 𝜼| 𝑰) and this is defined in the Equation 6:\n𝒈 𝜽 ( 𝜼| 𝑫, 𝑰) ∝ ∫ 𝑛𝑝-1 𝐿( 𝜼| 𝑫)𝒈 𝜽 ( 𝜼| 𝑰)𝑑𝜽 𝑛𝑝-𝑗 . (6\n)\nIdentifying the posterior PDF of each parameter involves solving the inference problem composed of Equations 5 and 6. 
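As a minimal illustration of how such a posterior can be sampled in practice, the sketch below runs a random-walk Metropolis chain over a generic parameter vector, using a Gaussian likelihood built from the MSE of Equation 5 and a flat (non-informative) prior. The step size, chain length, noise level, and toy linear predictor are illustrative assumptions rather than the settings used in this work.

```python
import numpy as np

def metropolis_posterior(predict, theta0, X, y, n_samples=5000, step=0.01, sigma=0.1, seed=0):
    """Random-walk Metropolis sampling of p(theta | D) with a flat prior and
    Gaussian likelihood  L ~ exp(-MSE(theta) / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    def log_like(theta):
        err = y - predict(theta, X)
        return -0.5 * np.mean(err ** 2) / sigma ** 2
    chain = np.empty((n_samples, theta0.size))
    theta, ll = theta0.copy(), log_like(theta0)
    for i in range(n_samples):
        proposal = theta + step * rng.normal(size=theta.size)   # symmetric proposal
        ll_prop = log_like(proposal)
        if np.log(rng.uniform()) < ll_prop - ll:                # accept/reject step
            theta, ll = proposal, ll_prop
        chain[i] = theta
    return chain                                                # posterior samples of theta

# Toy usage: posterior of the two parameters of a linear predictor.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2.0 * X[:, 0] + 0.5 + 0.05 * rng.normal(size=100)
predict = lambda th, X: th[0] * X[:, 0] + th[1]
samples = metropolis_posterior(predict, np.zeros(2), X, y)
theta_hat = samples[1000:].mean(axis=0)        # most probable value (Eq. 7, sample mean)
U = np.cov(samples[1000:].T)                   # covariance matrix (Eq. 8)
```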
Therefore, the posterior probability distribution represents the updated belief about the unknown parameters after incorporating the experimental data. It combines the prior probability distribution and the likelihood function, favoring the parameter values that are both supported by the prior beliefs and consistent with the observed data.\nEspecially in complex nonlinear models, the solution cannot be obtained analytically, requiring numerical estimation. For this task, the Markov Chain Monte Carlo (MCMC) method (Brooks, 1998) is a valuable technique in the Bayesian inference context, as it enables the solution of the problem through sampling from the posterior distributions of the parameters of interest.\nThe MCMC is used in this context as an iterative method that generates random samples from a proposal distribution to estimate the posterior distribution. At each iteration, the proposal distribution generates a new sample, which is accepted or rejected based on an acceptance probability determined by the ratio between the new sample's posterior density and the current sample's posterior density, as shown schematically in Figure 3. By repeating this process, the MCMC generates a sequence of random samples from the posterior distribution of the parameters of interest, allowing the estimation of their marginal distributions and the identification of possible correlations between the parameters.\nIn this study, the most probable value ($\bar{\theta}$) of each parameter of the neural networks was calculated according to Equation 7,\n$\bar{\theta} = \int_{-\infty}^{\infty} \eta\,\mathbf{g}_{\boldsymbol{\theta}}(\boldsymbol{\eta}\,|\,\boldsymbol{D},\boldsymbol{I})\,d\boldsymbol{\theta}, \qquad (7)$\nand the covariance matrix ($U_{\boldsymbol{\theta\theta}}$) of the parameters is defined by Equation 8:\n$U_{\boldsymbol{\theta\theta}} = \int_{-\infty}^{\infty} (\eta-\bar{\theta})^{T}(\eta-\bar{\theta})\,\mathbf{g}_{\boldsymbol{\theta}}(\boldsymbol{\eta}\,|\,\boldsymbol{D},\boldsymbol{I})\,d\boldsymbol{\theta}. \qquad (8)$\nOnce the PDF of the parameters of the neural networks is built, it is possible to propagate the uncertainty of the parameters to the prediction using techniques such as Monte Carlo (MC) sampling (Shapiro, 2003). The methodology consists of selecting random samples of the parameters from their probability distributions and performing several predictions with these different groups of parameters. From these predictions, it is possible to calculate the probability distribution of the response and, therefore, the uncertainty of the final prediction, incorporating the uncertainty of the parameters.\nIn Figure 4, a schematic diagram of the methodology described in this section is presented." }, { "figure_ref": [ "fig_4" ], "heading": "Online Digital Twin Implementation", "publication_ref": [ "b51", "b52", "b53" ], "table_ref": [], "text": "In the offline training phase, one can use computationally expensive data without concerns regarding the demand for online feedback. However, when transitioning to an online environment, computational resources are limited by the frequency at which predictions are required. The transfer learning strategy addresses this issue by exploiting computationally expensive information in the offline context and bringing it to the online context by transferring previously acquired knowledge. Hence, the first step to implementing an online digital twin is to transfer the knowledge, and its corresponding uncertainty, acquired in the offline environment to the online environment.\nIn this context, it is crucial to capitalize on the knowledge acquired during the offline phase of the digital twin and use it as a foundation for deploying the digital twin (DT) in an online setting.
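Before moving to the online implementation, the sketch below illustrates the offline uncertainty-quantification loop just described: a random-walk Metropolis sampler (one common MCMC variant, assumed here because the text does not fix a specific proposal scheme), empirical counterparts of Equations 7 and 8, and Monte Carlo propagation of the parameter uncertainty to the prediction. The step size and the 95% percentile band are illustrative choices; the chain length and burn-in match the values reported later in this work.

```python
import numpy as np

def random_walk_metropolis(log_post, theta0, n_samples=50_000, step=1e-3, seed=0):
    """Random-walk Metropolis sampler (assumed MCMC variant for illustration)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    chain = np.empty((n_samples, theta.size))
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)
        lp_new = log_post(proposal)
        # Accept with probability min(1, posterior density ratio).
        if np.log(rng.uniform()) < lp_new - lp:
            theta, lp = proposal, lp_new
        chain[i] = theta
    return chain

def posterior_summaries(chain, burn_in=10_000):
    """Empirical counterparts of Equations 7 and 8 (posterior mean and covariance)."""
    kept = chain[burn_in:]
    return kept.mean(axis=0), np.cov(kept, rowvar=False)

def propagate_uncertainty(chain, model, X, n_draws=500, burn_in=10_000, seed=1):
    """Monte Carlo propagation of the parameter uncertainty to the prediction."""
    rng = np.random.default_rng(seed)
    kept = chain[burn_in:]
    idx = rng.integers(0, len(kept), size=n_draws)
    preds = np.stack([model(X, kept[i]) for i in idx])        # (n_draws, n_points, ...)
    lower, upper = np.percentile(preds, [2.5, 97.5], axis=0)  # 95% coverage region
    return preds.mean(axis=0), lower, upper
```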
This technique, known as transfer learning in machine learning, is employed in this work due to its advantages. These benefits include reducing the amount of training data required for online learning purposes, decreasing the computational effort associated with the structural identification of the neural networks (by using transfer learning, it can be assumed that the model structure identified during the offline phase is optimal), and significantly enhancing the performance of the new model. By utilizing transfer learning, the online digital twin effectively leverages offline knowledge while adapting to real-time data and varying operational scenarios, thus ensuring a seamless transition and efficient performance in the online environment.\nA subsequent step within the online environment is the Reducing Model node. This node is meant to reduce the hyperspace of probable models and project it onto a low-dimensional space that fits within the computational resources of the online environment. This projection is represented in Equation 9:\n$R^{n} \rightarrow R^{n-q}, \qquad (9)$\nwhere $n-q$ is given by the sensitivity analysis.\nIn this scenario, $R^{n}$ represents the n-dimensional space of the models, while $R^{n-q}$ corresponds to the low-dimensional space. Determining the dimension reduction factor $q$ is critical, as it impacts the online performance of the prediction uncertainty. In this case, the factor was identified through an offline sensitivity analysis: the dimension of the reduced space was successively decreased until the prediction uncertainty fully degenerated. The inflection point of this degeneration can serve as the minimum projection dimension. This study introduced a safety factor of 25% into the methodology, ensuring that the digital twin (DT) operates well away from the degeneration point. After defining the dimension reduction factor $q$, the model parameters' original probability density function (PDF) is randomly sampled to populate the new reduced space. This PDF dimension reduction and sampling strategy aims to minimize the computational effort of running a large distribution of potential models online. Subsequently, the uncertainty in the parameters can be propagated to the prediction using MC methods. This process involves selecting random samples of parameters from their respective probability distributions and conducting multiple predictions with these diverse parameter sets. From these predictions, the probability distribution of the response can be calculated, enabling the determination of the uncertainty in the final online prediction while accounting for the uncertainty of the parameters.\nThe next step is the self-awareness component proposed in this work. The necessity for a digital twin to be self-aware of the quality of its predictions in relation to the system's current state is vital for several reasons. As the digital twin might play a critical role in monitoring, predicting, and optimizing the system, its effectiveness depends on the accuracy and reliability of its predictions. Hence, this work proposes a self-aware digital twin that can better adapt to system changes as it continually evaluates its predictive performance. This identifies potential discrepancies between the model and the real system, enabling real-time adjustments to improve prediction accuracy (Zheng et al., 2022;Al Faruque et al., 2021). This is done within the cognitive tracker block and the cognitive node, Figure 5.
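As an illustration of the reduction step described above, the following sketch draws a reduced population of parameter sets from the offline posterior sample and propagates it through the model for fast online uncertainty estimation. Reading the projection of Equation 9 as keeping a smaller number of sampled models, and applying a 25% margin above the minimum size found offline, follow the text; the function names and interfaces are assumptions of this sketch.

```python
import numpy as np

def reduce_model_population(chain, n_keep, burn_in=10_000, seed=0):
    """Populate the reduced space of Equation 9 by randomly sampling a smaller
    set of parameter vectors from the offline posterior chain."""
    rng = np.random.default_rng(seed)
    kept = chain[burn_in:]
    idx = rng.choice(len(kept), size=n_keep, replace=False)
    return kept[idx]

def online_prediction_region(population, model, X):
    """Fast online Monte Carlo propagation over the reduced model population."""
    preds = np.stack([model(X, theta) for theta in population])
    lower, upper = np.percentile(preds, [2.5, 97.5], axis=0)
    return preds.mean(axis=0), lower, upper

# Sizing with the 25% safety factor above the degeneration point found offline
# (n_min is the minimum population size identified by the sensitivity analysis):
# population = reduce_model_population(chain, n_keep=int(np.ceil(1.25 * n_min)))
```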
The self-aware digital twin fosters a continuous learning environment where the model learns from its successes and failures, refining its predictions over time. Hence, the cognitive node is the instance responsible for controlling the activation of online learning. When the cognitive threshold (CT) is reached, the node triggers online learning to update the digital twin. This results in a more robust and reliable model that can adapt to various operational scenarios. Equations 10 and 11 present the functions behind the cognitive node, which were developed inspired by the activation mechanism of a neuron and considering the prediction uncertainty of the DT:\n$Z = \sum_{n=a}^{b}\left[H\!\left(y_{measured}-\inf\!\left(y_{CoverageRegion}\right)\right)+H\!\left(\sup\!\left(y_{CoverageRegion}\right)-y_{measured}\right)\right], \qquad (10)$\nwhere $H$ is the Heaviside function. The infimum (inf) and supremum (sup) operators compute the greatest lower bound and the least upper bound of the DT's coverage region, respectively. When the measured value lies within the coverage region, both Heaviside terms are active and return a value of one; when the measured value falls outside the bounds of the coverage region, the corresponding term deactivates and returns zero. In this way, $Z$ quantifies, over the analyzed window, to what extent the measurements are located within the coverage region of the DT. $a$ and $b$ are functions of a moving horizon (MH) factor, which can be computed by:\n$\begin{cases} a = 0 + k \\ b = MH + k \end{cases} \qquad (11)$\nThe MH factor is a component that aids the cognitive tracker in maintaining an accurate understanding of the digital twin's current state. This dynamic approach ensures that the digital twin remains responsive and adaptive to real-time changes in the system it represents, enhancing its overall effectiveness. The moving horizon factor continually updates the analyzed data window, ensuring that the most recent information is always considered. This approach allows the cognitive tracker to focus on the most relevant data and discard older, less pertinent information. As a result, the digital twin can effectively respond to changes in the system and maintain an accurate representation of its current state. Overall, this adds a memory component to the cognitive node.\nFurthermore, in the real-time online environment, disturbances and unforeseen scenarios different from those encountered during offline training may occur. Therefore, the digital twin must receive real-time performance data from the system and evaluate the necessity of incorporating it into its learning to keep its predictions reliable. Since industrial processes are highly dynamic and subject to various sources of disturbances and unpredictable scenarios, it is crucial that the digital twin can identify behavior changes, adapt quickly to new system scenarios, and incorporate new data in real time to update the tool. This makes it possible to obtain increasingly accurate and reliable predictions. To achieve this goal, in this work, an online learning tool was integrated into the digital twin, enabling the system to incorporate cognition to identify scenario changes and to constantly check the predictions against the data that the process generates. The implementation of this online learning tool consists of building a new database as the measurements are collected and processed, so that the digital twin can retrain the neural networks when necessary.
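The cognitive-node logic of Equations 10 and 11 can be sketched as follows. The trigger criterion at the end (comparing the score against a fraction of its maximum value over the window) is an assumed, illustrative way of expressing the cognitive threshold; the text above only states that online learning is activated when the CT is reached.

```python
import numpy as np

def heaviside(x):
    """Heaviside step: 1 when the argument is non-negative, 0 otherwise."""
    return (np.asarray(x) >= 0.0).astype(float)

def cognitive_node_score(y_measured, y_lower, y_upper, k, mh=100):
    """Cognitive-node activation signal of Equations 10-11.

    y_measured, y_lower and y_upper are time series; y_lower/y_upper are the
    bounds of the DT coverage region. The moving-horizon window runs from
    a = 0 + k to b = MH + k (Equation 11)."""
    a, b = 0 + k, mh + k
    window = slice(a, b)
    inside_lower = heaviside(y_measured[window] - y_lower[window])
    inside_upper = heaviside(y_upper[window] - y_measured[window])
    return float(np.sum(inside_lower + inside_upper))

def should_trigger_online_learning(score, mh, cognitive_threshold):
    """Trigger retraining when too many measurements leave the coverage region.

    With the convention above, a window fully inside the region scores 2*MH, so
    the cognitive threshold is expressed here as a fraction of that maximum
    (an assumed, illustrative criterion)."""
    return score < cognitive_threshold * 2 * mh
```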
Therefore, the online learning methodology updates the model and corrects possible deviations identified by the digital twin during the operational campaign.\nFigure 5 presents a schematic representation showcasing the integration of the essential concepts that form the foundation of the proposed digital twin. This work specifically combines transfer learning, uncertainty management, hyperdimensional reduction techniques for parameter selection and PDF construction, system awareness of changes in plant scenarios (through cognitive nodes and cognitive thresholds), and collaborative online learning concepts, finalizing the strategy with the results presented in a human-machine interface (HMI).\nIt is important to highlight that this approach accommodates both synthetic and real data. Synthetic data generated by phenomenological models serve a dual purpose: they not only increase the volume of data required for effective neural network training but also facilitate the generation of risk scenarios or operational abnormalities that might not be frequently encountered during daily operations (Le et al., 2017). Meanwhile, process data can be seamlessly incorporated into the digital twin system following proper data curation, which is simulated in this work. As a result, Figure 5 illustrates both potential data sources for constructing the dataset, emphasizing the versatility and adaptability of the proposed online digital twin approach. " }, { "figure_ref": [], "heading": "Case Study: Gas Lift System", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_5", "fig_5" ], "heading": "Gas Lift System", "publication_ref": [ "b54", "b55", "b54" ], "table_ref": [], "text": "Gas lift is an artificial lift method for oil and gas used to boost the production of hydrocarbons from wells with insufficient natural pressure. Its basic concept consists of injecting compressed gas, typically natural gas, into the wellbore, which reduces the density of the fluid column and creates a gas-oil mixture that is easier to lift to the surface. In this study, the use of a gas lift pilot unit was considered. This unit is a small-scale experimental platform that simulates different scenarios of a subsea oil well network.\nFigure 6 shows the didactic division of the industrial prototype into three sections: reservoir, wells, and risers. In this system, the working fluids are water and air, replacing the oil and gas of a real production system. The reservoir consists of a 200 L steel tank, a centrifugal pump, and control valves (CV101, CV102, and CV103). In this work, the valve openings were used to simulate different behaviors of the reservoir, which only produces liquid. As shown in Figure 6, flow meters (FI101, FI102, and FI103) are located before the reservoir valves.\nThe wells are represented in the experimental prototype by three flexible hoses with a diameter of 2 cm and a length of 1.5 m. The gas lift air is injected 10 cm after the CV101, CV102, and CV103 valves, within the range of 1 to 5 sL min⁻¹. A control system (FIC104, FIC105, and FIC106) can be activated to regulate the injected gas flow in the system.\nFinally, the risers are represented by three vertical tubes with an internal diameter of 2 cm and a height of 2.2 m, which are orthogonal to the pipes that represent the wells.
The pressure at the top is measured by the gauges PI101, PI102, and PI103, and three manual valves are located in sequence, which are kept open during the experiments. At the end of the process, the liquid is recirculated in a closed system and returned to the reservoir, while the air is vented to the atmosphere.\nIn the current study, we employed a first-principles model of the gas lift process. This model functions as a virtual plant, providing a platform for developing and testing the proposed methodology. As the model has been previously proposed and validated in the literature, it offers a reliable source of information for our investigation (Matias et al., 2022).\nBy employing this well-established and validated model, we are able to simulate various gas lift scenarios and analyze the performance of the proposed methodology under different operating conditions, hence facilitating the development process. The measured system variables are the well top pressures (PI101, PI102, and PI103), the pump outlet pressure (PI104), the liquid flowrates (FI101, FI102, and FI103), and the gas flowrates (FI104, FI105, and FI106), while the reservoir valve openings (CV101, CV102, and CV103) act as the system disturbances (Figure 6).\nThe phenomenological model describing this system consists of a set of algebraic and differential equations based on the model of Krishnamoorthy et al. (2018). The model accounts for both hydrostatic pressure and pressure loss due to friction. In the calculations, the pressure difference along the riser is deemed insignificant, which means that only two pressures are considered: one at the bottom and another at the top of the riser (Matias et al., 2022).\nThe following differential equations represent the mass balances for gas and liquid in the system, Equations 12 and 13:\n$\dot{m}_g = w_g - w_{g,out}, \qquad (12)$\n$\dot{m}_l = w_l - w_{l,out}, \qquad (13)$\nwhere $m_g$ and $m_l$ represent the mass holdups of gas and liquid inside the wells and riser, and the overdot denotes their time derivatives. $w_g$ represents the mass flowrate of gas injected into the system, while $w_l$ represents the flowrate of liquid coming from the reservoir. Additionally, $w_{g,out}$ and $w_{l,out}$ denote the outlet production rates of gas and liquid from the system, respectively. The algebraic equations are used to describe certain relationships within the system. The outflow from the reservoir can be expressed by:\n$w_l = v_o\,\theta_{res}\sqrt{\rho_l\left(P_{pump}-P_{bi}\right)}, \qquad (14)$\nwhere $\theta_{res}$ is the reservoir valve flow coefficient, $\rho_l$ represents the density of the liquid in the system, and $v_o$ is the valve opening. The pump outlet pressure, $P_{pump}$, is measured, and the pressure before the injection point, $P_{bi}$, is calculated from the hydrostatic pressure, accounting for the pressure drop due to friction. To simplify this calculation, the Darcy-Weisbach equation for laminar flow in cylindrical pipes is used. Therefore, the expression for $P_{bi}$ becomes:\n$P_{bi} = P_{rh} + \rho_{mix}\,g\,\Delta h + \frac{128\,\mu_{mix}\,(w_g+w_l)\,L}{\pi\,\rho_{mix}\,D^{4}}, \qquad (15)$\nwhere $P_{rh}$ represents the pressure measured at the riser head, $\Delta h$ denotes the height from the bottom of the well to the top of the riser, $L$ the length of the pipes (i.e., the combined length of the well and riser), $D$ the diameter of the pipes, $g$ the gravitational acceleration, and $\mu_{mix}$ the viscosity of the liquid-gas mixture. In the experimental setup, the mixture viscosity is approximated by the liquid viscosity. The mixture (liquid + gas) density $\rho_{mix}$ is calculated by Equation 16:\n$\rho_{mix} = \frac{m_{total}}{V_{total}} = \frac{m_g+m_l}{V_{total}}. \qquad (16)$\nAdditionally, there is an equation stating that the sum of the volumetric holdups of gas ($V_g$) and liquid ($V_l$) is equal to the total volume of the system:\n$V_{total} = V_g + V_l = \frac{m_l}{\rho_l} + \frac{m_g}{\rho_g}. \qquad (17)$\nThe liquid density, denoted by $\rho_l$, is considered constant. The gas density ($\rho_g$), on the other hand, is calculated using the ideal gas law:\n$\rho_g = \frac{P_{bi}\,M_g}{R\,T}, \qquad (18)$\nwhere $M_g$ refers to the molecular weight of air, $R$ denotes the universal gas constant, and $T$ represents the temperature of the surrounding environment. The total outlet flow rate can be determined using the following relationship:\n$w_{total} = w_{g,out} + w_{l,out} = \theta_{top}\sqrt{\rho_{mix}\left(P_{rh}-P_{atm}\right)}, \qquad (19)$\nwhere $P_{atm}$ represents the atmospheric pressure and $\theta_{top}$ denotes the flow coefficient of the top valve. Additionally, it is assumed that the proportion between the liquid and total outlet flow rates remains consistent with the liquid fraction ($\alpha_l$) present in the mixture. This assumption can be expressed as follows:\n$\alpha_l = \frac{m_l}{m_{total}} = \frac{w_{l,out}}{w_{total}}. \qquad (20)$" }, { "figure_ref": [], "heading": "Offline Digital Twin Identification", "publication_ref": [ "b56", "b56" ], "table_ref": [], "text": "The offline digital twin identification step of this methodology is a crucial phase in developing the digital twin framework, which aims to address the challenges associated with robustness, uncertainty, and the integration of various learning techniques for optimal and autonomous decision-making in gas-lift processes. Overall, this offline training step involves identifying the AI models using computationally intensive data in an offline environment. The objective is to create accurate and reliable models based on historical data, which can then be deployed in an online environment for real-time decision-making.\nData acquisition is the first step in constructing the gas lift offline digital twin. Data quality and quantity are fundamental for adequately representing the process domain. Synthetic data were generated through the previously validated phenomenological model. As discussed in Section 2.1.1, the DoE methodology was used to plan the data acquisition; therefore, LHS was applied to generate 4000 experiments for the input data.\nEach experiment consists of a given input that was applied to the process for the time necessary for the system to reach a steady state, defined as 100 seconds. Hence, the database is constituted of 400,000 points. It is essential to highlight that the complete transient responses of each experiment were stored for the DT identification.\nTo choose the input variables for the digital twin model, a Gram-Schmidt orthogonalization method was employed to analyze the impact of the operational variables on the water and gas flow rates produced by the system. This variable ranking methodology, developed by Nogueira et al. (2016), enables identifying which process variables have the most significant impact on the outcome of the process. For further details, please refer to Nogueira et al. (2016).\nHence, the orthogonalization analysis pointed out that the gas flow rates injected into each corresponding well (FI104, FI105, and FI106) and the pump outlet pressure are the variables with the greatest impact on the water and gas flow rates produced in the gas lift system. Therefore, these are the variables used as inputs in the data-driven model. The variables selected as inputs to the data-driven model, as well as the limits defined for generating the input data, are presented in Table 2.
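For illustration, a compact implementation of the balances in Equations 12-20 for a single well/riser is sketched below. The physical constants, the treatment of the riser-head pressure as a known input, and the use of a simple root-finder for the algebraic loop ($P_{bi}$, $w_l$, $\rho_g$ and $\rho_{mix}$ depend on one another through Equations 14-18) are assumptions of this sketch; the actual rig parameters and implementation are those reported by Matias et al. (2022).

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative constants (assumed values); the rig parameters are given by Matias et al. (2022).
RHO_L, MU_MIX = 1000.0, 1e-3          # liquid density [kg/m3]; mixture viscosity ~ liquid [Pa s]
D, L_PIPE, DH, G = 0.02, 3.7, 3.7, 9.81   # pipe diameter, well+riser length, height [m], gravity
M_G, R_GAS, T_AMB = 0.029, 8.314, 298.15  # air molar mass, gas constant, ambient temperature

def algebraic_system(z, m_g, m_l, w_g, p_pump, p_rh, v_o, theta_res):
    """Residuals of the algebraic part of the model (Eqs. 14-18)."""
    p_bi, w_l = z
    rho_g = p_bi * M_G / (R_GAS * T_AMB)                      # Eq. 18
    v_total = m_l / RHO_L + m_g / rho_g                       # Eq. 17
    rho_mix = (m_g + m_l) / v_total                           # Eq. 16
    r1 = w_l - v_o * theta_res * np.sqrt(max(RHO_L * (p_pump - p_bi), 0.0))   # Eq. 14
    r2 = p_bi - (p_rh + rho_mix * G * DH
                 + 128.0 * MU_MIX * (w_g + w_l) * L_PIPE / (np.pi * rho_mix * D**4))  # Eq. 15
    return [r1, r2]

def gas_lift_rhs(m_g, m_l, w_g, p_pump, p_rh, v_o, theta_res, theta_top, p_atm=1.0e5):
    """Mass-balance derivatives of Eqs. 12-13 for one well/riser (sketch only)."""
    p_bi, w_l = fsolve(algebraic_system, x0=[1.2 * p_rh, 0.1],
                       args=(m_g, m_l, w_g, p_pump, p_rh, v_o, theta_res))
    rho_g = p_bi * M_G / (R_GAS * T_AMB)
    rho_mix = (m_g + m_l) / (m_l / RHO_L + m_g / rho_g)
    w_total = theta_top * np.sqrt(max(rho_mix * (p_rh - p_atm), 0.0))         # Eq. 19
    alpha_l = m_l / (m_g + m_l)                                               # Eq. 20
    w_l_out, w_g_out = alpha_l * w_total, (1.0 - alpha_l) * w_total
    return w_g - w_g_out, w_l - w_l_out                                       # Eqs. 12-13
```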
The limits were established according to the operational conditions of the real plant." }, { "figure_ref": [ "fig_6", "fig_7", "fig_8", "fig_8", "fig_9", "fig_10", "fig_13", "fig_13", "fig_11", "fig_12" ], "heading": "Table 2", "publication_ref": [ "b39" ], "table_ref": [ "tab_0", "tab_1", "tab_2", "tab_2" ], "text": "Operating conditions bounds given to the LHS design of experiments.\nVariables: $Q_{g,1}$ / (sL min⁻¹), $Q_{g,2}$ / (sL min⁻¹), $Q_{g,3}$ / (sL min⁻¹), $P_{pump}$ / (bar)\nMinimum: 1, 1, 1, 1.3\nMaximum: 5, 5, 5, 4\nIt is worth noting that Matias et al. (2022) also use these variables (Table 2) as inputs for real-time optimization and process control of the system, whereas the reservoir valve openings (CV101, CV102, and CV103) are used to introduce unmeasured disturbances.\nAfter generating the input matrix using the LHS method, a correlation evaluation between the variables was performed. Figure 7 shows a heat map of all input variables of the model. The results indicate that the correlations have low values, close to zero. This is an essential indication that the input space provided by LHS was well designed, which avoids data skew and, consequently, undesirable biases during training. The limits for the design of experiments of the inputs were adjusted to their respective engineering dimensions, presented in Table 2. Subsequently, these perturbations were inserted into the phenomenological model to generate the data sets that comprise the output matrix of the model. Figure 8 illustrates the input sequence used to induce disturbances in the phenomenological model.\nAfter constructing the output matrix of the phenomenological model, it was necessary to define the appropriate data structure for the AI model. As discussed in Section 2.1.2, a Nonlinear Autoregressive Network with Exogenous Inputs (NARX) structure was chosen to predict the nonlinear dynamic behavior of the gas lift system, since this prediction model allows the introduction of exogenous variables. However, it was necessary to define the predictor's embedding dimensions, i.e., the ideal number of previous inputs ($N_a$) and previous outputs ($N_b$), to organize the training, validation, and test datasets. Figure 9 presents the Lipschitz coefficients used to determine the optimal number of delays for the inputs and outputs of the NARX predictor. The graph shows the relationship between the number of delays and the Lipschitz index, which measures the predictability of the system. The slope of the Lipschitz index is used to determine the optimal number of delays, with the slope tending to zero at the optimal point.\nFigure 9 thus concisely represents the relationship between the number of delays and the Lipschitz index, making it a useful tool for determining the optimal number of delays in NARX predictors. As seen in the figure, the optimal numbers of delays for the inputs and outputs are five and two, respectively. These values are critical for optimizing the predictability of the NARX predictor and ensuring that the model accurately captures the system's behavior.\nThe HYPERBAND method was then applied to identify the optimal hyperparameters of the AI model used for the uncertainty identification.
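As a concrete illustration of how such a HYPERBAND search can be configured, the sketch below uses the Keras Tuner implementation. This library choice, the placeholder ranges (which are not the exact values of Table 1), and the regressor matrices X_train, y_train, X_val, y_val assembled from the NARX lags are assumptions of the example, not a description of the exact setup used in this work.

```python
import tensorflow as tf
import keras_tuner as kt

def build_model(hp):
    """Feedforward network whose structure and learning rate are tuned by HYPERBAND."""
    model = tf.keras.Sequential()
    for i in range(hp.Int("num_dense_layers", 1, 3)):
        model.add(tf.keras.layers.Dense(
            units=hp.Int(f"units_{i}", min_value=10, max_value=100, step=10),
            activation=hp.Choice(f"activation_{i}", ["relu", "tanh", "linear"]),
        ))
    model.add(tf.keras.layers.Dense(1, activation="linear"))
    model.compile(
        optimizer=tf.keras.optimizers.Adam(
            learning_rate=hp.Choice("initial_learning_rate", [1e-2, 1e-3, 1e-4])),
        loss="mse",
    )
    return model

tuner = kt.Hyperband(
    build_model,
    objective="val_loss",
    max_epochs=50,          # illustrative budget
    factor=3,
    directory="hyperband_search",
    project_name="gas_lift_narx",
)
# X_train, y_train, X_val, y_val are the NARX regressor matrices (assumed available).
tuner.search(X_train, y_train, validation_data=(X_val, y_val),
             callbacks=[tf.keras.callbacks.EarlyStopping(patience=5)])
best_hp = tuner.get_best_hyperparameters(num_trials=1)[0]
best_model = tuner.hypermodel.build(best_hp)
```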
The methodology, described in Section 2.1.3, uses a combination of random sampling and bracketing to efficiently explore the hyperparameter space and determine the best values for each hyperparameter. In this study, two groups of hyperparameters were considered: optimization parameters (learning rate and mini-batch size) and structural parameters (number of neurons, activation function, and number of the dense layer).\nTo ensure that the model was suitable for prediction purposes, the type of layer was fixed as a simple feedforward layer. This decision was based on the recommendations of Rebello et al. (2022), who present a comprehensive guide for selecting Neural Network structures for prediction and simulation purposes. The search limits for each hyperparameter were defined as described in Table 1. The results of the HYPERBAND search are presented in Table 3. Figure 10 presents a crucial aspect of evaluating the performance of the AI model that was identified as the base for the digital twin. The parity graph visually compares the model's predictions and the actual test data. This graph provides a clear insight into the model's behavior and its ability to capture the underlying system's dynamics accurately.\nThe random distribution of the points along the diagonal line in the graph indicates the model's accuracy. The parity graph shows that the AI model's predictions align well with the test data across the complete range of validation, and the residuals are randomly distributed. This is an important verification of the optimization procedure's effectiveness in finding the best parameters for the AI model.\nStatistically, random residuals indicate that the optimization procedure has reached a satisfactory result in the parameter identification process. In other words, the AI model has learned the underlying patterns and relationships between the inputs and outputs and can accurately make predictions. In line with the parity analysis, the effectiveness of the AI model is highlighted in Figure 11, where the model's prediction is compared with the test data over time. The graph represents the model's behavior and ability to track the test data dynamics accurately.\nThe results of the AI model performance are presented in Table 4, showcasing the accuracy of the model's predictions. The metrics used to evaluate the models' performance include the Mean Absolute Error (MAE) and Mean Squared Error (MSE), both of which are commonly used measures of the difference between the actual and predicted values. The results show that all of the models have low MAE and MSE values for the test data, indicating that the models have been successfully identified and are making accurate predictions. The low error values highlight the reliability of the models and their ability to precisely track the dynamics of the test data over time. Overall, the results presented in Table 4 provide evidence of the success of the AI model identification process. The next step in the proposed methodology is to assess the uncertainty of the digital twin-base model. The objective of this step is to improve the robustness and reliability of the AI model, making it an even more effective digital twin for the underlying system. To achieve this, the methodology outlined in Section 2.1.4 was followed. The first step in the uncertainty assessment process is to evaluate the performance of the Markov Chains, which can be evaluated in Figure 14. The highlighted area in that figure represents the burn-in phase. 
This phase is necessary because the work assumes a non-informative prior, which generates low-probability regions that must be removed. The burn-in step minimizes the impact of the first samples on the total samples of the MCMC. This work assumed that the first 10000 samples from the chain correspond to the burn-in phase. The burn-in phase corresponds to the highlighted area in Figure 14. Additionally, a total of 50000 samples were used to ensure a comprehensive analysis.\nThe final step in the proposed methodology is the application of Monte Carlo methods to propagate the identified uncertainty toward the model predictions. This process results in a population of possible models that can represent the system, which together forms the digital twin (DT). By building the confidence region of the DT predictions, we can further evaluate the reliability of the identified DT.\nFigure 12 and Figure 13 present the DT confidence region plotted against the test data. It is evident from the figure that all test data points are covered by the DT uncertainty, providing a final evaluation of the reliability of the identified DT. This analysis concludes the offline identification step and prepares the DT for deployment.\nThe use of Monte Carlo methods in this step ensures that the DT is not limited to a single model but instead comprises a range of possible models that can represent the system. This population of models provides a comprehensive representation of the system's behavior, considering the identified uncertainty." }, { "figure_ref": [], "heading": "Online Digital Twin Implementation", "publication_ref": [], "table_ref": [], "text": "Finally, the digital twin was deployed in a software-in-the-loop environment. This SIL framework allowed the simulation of a gas lift virtual plant and the deployment of the DT to monitor it. In the present study, we evaluated the performance of the proposed digital twin framework in monitoring a gas lift system through a series of carefully designed scenarios. These scenarios were constructed with the aim of not only testing the robustness and adaptability of our DT model but also exploring its capabilities in different operational conditions. We considered three distinct scenarios for this purpose. Two scenarios where the identification of the drifting source was taken into account. This allowed us to observe how the DT responds to changes in the system over time and how effectively it can identify and adapt to the source of drift. Another scenario that did not consider identifying the drifting source was introduced. This scenario was designed to challenge the DT's adaptability and robustness under unpredictable and potentially disruptive changes. Here, we aimed to investigate how the DT would perform in the face of unexpected deviations in the system's behavior without prior knowledge or detection of the source of drift.\nEach of these scenarios provided valuable insights into the performance of our digital twin in monitoring the gas lift system. They also allowed to examine the strengths and limitations of our proposed methodology under different conditions and to identify areas where further improvements may be required. The findings from these scenarios will be discussed in detail in the following sections." }, { "figure_ref": [ "fig_14", "fig_14", "fig_14" ], "heading": "Scenario 1", "publication_ref": [], "table_ref": [], "text": "In Test Scenario 1, the digital twin monitors a gas-lift process and detects degradation in the valve of well 1. 
The results obtained for this case are presented in Figure 15. Unlike Scenario 2, the digital twin identifies the source of the problem in this case. Upon recognizing the anomaly, the cognitive node reactivates the offline instance and requests new training data based on the system's current state. Subsequently, data is generated offline using a new design of experiments and sent back to the cognitive node. The cognitive node then activates online learning using these new data to update the predictions of the digital twin.\nThe results illustrated in Figure 15 demonstrate that the digital twin experiences a drift during a brief moment, as can be seen in the zoomed area of Figure 15. However, after obtaining the new data and completing the online learning process, the digital twin continues to track the process with remarkable precision. In contrast, the traditional AI model drifts away from the process, unable to adapt to the changing conditions.\nThe optimal moving horizon was determined by fine-tuning the window size and the sensitivity of the cognitive tracker. By choosing the most suitable window size, the digital twin can strike a balance between collecting adequate data and promptly addressing drifts. Additionally, adjusting the sensitivity helps ensure that the cognitive tracker activates online learning only when needed, avoiding unnecessary system reactions to minor variations, false triggers, or rapid dynamic discrepancies. In this scenario, the optimal cognitive parameters were identified through a sensitivity analysis, resulting in MH = 100 and a = 1 as the ideal values.\nThis scenario highlights the benefits of incorporating cognitive capabilities and online learning into digital twin frameworks. By enabling the digital twin to recognize drifts and adapt its predictions accordingly, the system can maintain high accuracy and reliability throughout the process. The adaptability and responsiveness demonstrated in this scenario underscore the value of the proposed framework." }, { "figure_ref": [ "fig_15", "fig_15", "fig_15" ], "heading": "Scenario 2", "publication_ref": [], "table_ref": [], "text": "In Test Scenario 2, the digital twin monitors a gas-lift process and, at time 2700 s, a reduction of 75% is observed in the opening of the reservoir valve (CV) of well number 2. The digital twin's CT detects a prediction drift, signaling a discrepancy between the virtual model and the actual process. However, the origin of this drift remains unidentifiable for the digital twin, presenting a challenge in determining the appropriate corrective action. This scenario is illustrated in Figure 16.\nIn such situations, the digital twin must wait for sufficient process data to be generated before activating online learning. This waiting period enables the digital twin to collect enough information to identify the underlying patterns or trends required by the online learning process. During this time, the system relies on its current capabilities to provide information regarding the system. In the present scenario, a waiting period of 5000 seconds is required before the cognitive node activates online learning and corrects the digital twin's prediction; this period is highlighted in Figure 16. After this correction, no further drifts are observed for the proposed digital twin.\nTo effectively manage this scenario, the moving horizon of the cognitive node (the cognitive tracker) must be properly tuned.
If the CT is not calibrated appropriately, the digital twin may activate online learning prematurely or too frequently, leading to unnecessary computational overhead and potentially compromising the system's overall performance.\nThe moving horizon can be optimized by adjusting the cognitive tracker's window sizes and sensitivity. By selecting an optimal window size, the digital twin can balance the need for sufficient data collection with the urgency of addressing the drift. Fine-tuning the sensitivity ensures that the CT activates online learning only when necessary, preventing the system from overreacting to minor fluctuations, false alarms, or fast dynamic mismatches. In this scenario, the optimal cognitive parameters (MH, and a) were determined through a sensitivity analysis and found to be 100, and 1, respectively.\nHence, online learning is activated once the digital twin has collected enough process data and the cognitive tracker determines that the drift is significant. The system updates its internal model and uncertainty to better align with the process, improving its predictive accuracy and overall performance. By carefully tuning the moving horizon of the cognitive node, the digital twin can maintain its adaptability and efficiency while effectively addressing the challenges posed by unidentifiable drift origins. However, it is necessary to recognize that this scenario's waiting time is significant. Further studies must be carried out to improve the DT performance in such a situation." }, { "figure_ref": [ "fig_16" ], "heading": "Scenario 3", "publication_ref": [], "table_ref": [], "text": "In Test Scenario 3, the digital twin monitors a gas-lift process and observes a continuous degeneration of well number 3. This scenario's results are presented in Figure 17. Similar to Scenario 1, the digital twin is able to identify the source of the drifting. Upon recognizing the drift, the cognitive node reactivates the offline instance and requests new training data based on the current state of the system, taking into account the continuous degeneration of well number 3. Subsequently, data is generated offline using a new design of experiments that accounts for the ongoing degeneration and is sent back to the cognitive node. As the nature of the degeneration depends on the time, the data generation is done for different CV conditions along the drifting slope. The cognitive node then activates online learning using this new data to update the digital twin's predictions, allowing it to adapt to the changing conditions in the well.\nAs the degeneration continues, the digital twin effectively tracks the process and maintains high accuracy and reliability. Its adaptability and responsiveness to the continuous degeneration of well number 3 demonstrate the benefits of incorporating cognitive capabilities and online learning into the digital twin framework. By recognizing drifts and updating its predictions accordingly, the system is better equipped to handle dynamic changes in the process and maintain its performance, in contrast to traditional AI models that lack the ability to adjust in real time." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "This study presented an innovative and comprehensive digital twin framework designed to facilitate optimal and autonomous decision-making, specifically focusing on gas-lift processes in the oil and gas industry. 
This framework amalgamates several key techniques, including offline training, Bayesian inference, Monte Carlo simulations, transfer learning, online learning, and a novel model hyperspace dimension reduction with cognitive tack. Integrating these techniques results in an adaptive, robust, and efficient system while addressing the computational challenges associated with online learning and the accurate representation of uncertainty in AI-based systems. Developing AI models that balance computational efficiency with robustness and adaptability is crucial for industries like oil and gas, where operations must be carried out under strict economic, safety, and environmental constraints. However, this is not only limited to the oil and gas industries. By leveraging the proposed digital twin framework, industrial operations can enhance their operations safety, reliability, and precision. Integrating AI models and digital twins enables better monitoring and management of complex processes, ultimately reducing the risks associated with uncertainty and unforeseen events. The fusion of digital twin technology with AI has the potential to optimize industrial processes, minimize environmental impacts, and increase overall operational efficiency. The results of this study contribute to the expanding knowledge base surrounding digital twin technology and its applications in the oil and gas industry. The proposed framework establishes a foundation for future research, especially in robust and adaptive AI models for complex systems. While the current methodology offers numerous advantages, there are opportunities to refine and expand upon certain aspects to address the limitations discussed in the results section. For instance, although the digital twin can effectively track and maintain up-to-date information, it experiences considerable delays when the source of prediction drifting is unidentifiable. To overcome this limitation, future research could explore integrating fault detection tools into the digital twin framework. This enhancement would enable the system to identify and respond to discrepancies better, improving its adaptability and overall performance.\nIn summary, this study not only adds to the growing literature on digital twin technology in the oil and gas industry but also provides a foundation for developing more robust and adaptive AI models. In summary, this study demonstrates the potential of a comprehensive digital twin framework to address the challenges associated with optimal and autonomous decision-making. Industries can improve safety, reliability, and precision in their operations by harnessing the power of AI models that are computationally efficient, robust, and adaptive. " } ]
The concept of creating a virtual copy of a complete Cyber-Physical System opens up numerous possibilities, including real-time assessments of the physical environment and continuous learning from the system to provide reliable and precise information. This process, known as the twinning process or the development of a digital twin (DT), has been widely adopted across various industries. However, challenges arise when considering the computational demands of implementing AI models, such as those employed in digital twins, in real-time information exchange scenarios. This work proposes a digital twin framework for optimal and autonomous decision-making applied to a gas-lift process in the oil and gas industry, focusing on enhancing the robustness and adaptability of the DT. The framework combines Bayesian inference, Monte Carlo simulations, transfer learning, online learning, and novel strategies to confer cognition to the DT, including model hyperdimensional reduction and cognitive tack. Consequently, creating a framework for efficient, reliable, and trustworthy DT identification was possible. The proposed approach addresses the current gap in the literature regarding integrating various learning techniques and uncertainty management in digital twin strategies. This digital twin framework aims to provide a reliable and efficient system capable of adapting to changing environments and incorporating prediction uncertainty, thus enhancing the overall decisionmaking process in complex, real-world scenarios. Additionally, this work lays the foundation for further developments in digital twins for process systems engineering, potentially fostering new advancements and applications across various industrial sectors.
Digital Twin Framework for Optimal and Autonomous Decision-Making in Cyber-Physical Systems: Enhancing Reliability and Adaptability in the Oil and Gas Industry
[ { "figure_caption": "Figure 1 :1Figure 1: Latin hypercube sampling schematic representation", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Flowchart of the nonlinear model identification strategy presented in this work", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Markov Chains Monte Carlo method schematic representation", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Flowchart of the uncertainty strategy presented in this work", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Flowchart of the online digital twin strategy presented in this work", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Experimental setup of gas lift experiment adapted from Matias et al. (2022). The measured system variables are the well top pressures (PI101, PI102, and PI103), the pump outlet pressure (PI104), the liquid flowrates (FI101, FI102, and FI103), and the gas flowrates (FI104, FI105, and FI106). The reservoir valve openings (CV101, CV102, and CV103) are the system disturbances.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Correlation heatmap of the LHS inputs signals.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Experimental input sequence.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Lipschitz coefficients results for the predictor embedding dimensions.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Parity graphics of the model test, (a) 𝑚 𝑔 of the well 1, (b) 𝑚 𝑔 of the well 2, (c) 𝑚 𝑔 of the well 3, (d) 𝑚 𝑙 of the well 1, (e) 𝑚 𝑙 of the well 2, (f) 𝑚 𝑙 of the well 3", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Neural network models test for each process output", "figure_data": "", "figure_id": "fig_10", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Prediction uncertainty of neural network models sampled for test data", "figure_data": "", "figure_id": "fig_11", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Prediction uncertainty of neural network models sampled for test data", "figure_data": "", "figure_id": "fig_12", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: Burn-in MCMC and full chain for nine randomly drawn parameters", "figure_data": "", "figure_id": "fig_13", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: Scenario 1 results: The first graph shows a step disturbance in the CV101 valve, the second graph shows the behavior of the variable 𝑚 𝑔 of the well 1 together with uncertainty evaluation and activation of online learning in the digital twin, third graph the same representation of the previous graph for the variable 𝑚 𝑙 of the well 1, and finally the 
moving horizon for identification of deviation in the digital twin and activation of the cognitive node.", "figure_data": "", "figure_id": "fig_14", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16 :16Figure 16: Scenario 2 results: The first graph shows a step disturbance in the CV102 valve, the second graph shows the behavior of the variable 𝑚 𝑔 of the well 2 along with uncertainty evaluation and activation of online learning in the digital twin, and the third graph shows the exact representation of the previous graph for the variable 𝑚 𝑙 of the well 2, and finally the moving horizon for identification of deviation in the digital twin and activation of the cognitive node after a waiting time for data collection.", "figure_data": "", "figure_id": "fig_15", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Figure 17 :17Figure 17: Scenario 3 results: first graph shows a step disturbance in the CV103 valve, second graph shows the behavior of the variable 𝑚 𝑔 of the well 3 along with uncertainty evaluation and activation of online learning in the digital twin, third graph the same representation of the previous graph for the variable 𝑚 𝑙 of the well 3, and finally the moving horizon for identification of deviation in the digital twin and activation of the cognitive node.", "figure_data": "", "figure_id": "fig_16", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Hyperparameter search space for HYPERBAND", "figure_data": "HyperparametersSearch spaceInitial learning rate1 × 10", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results of best hyperparameters for each performance indicator for NN.", "figure_data": "WellsHyperparametersm 𝑔m 𝑙Initial learning rate1 × 10 -31 × 10 -3Number of dense layers22Well 1Activation function in each layer{relu, linear}{relu, linear}Number of neurons in each layer{60, 1}{60, 1}Number of parameters for each layer{900, 61}{900, 61}Initial learning rate1 × 10 -31 × 10 -3Number of dense layers22Well 2Activation function in each layer{relu, linear}{relu, linear}Number of neurons in each layer{40, 1}{70, 1}Number of parameters for each layer{600, 41}{1050, 71}Initial learning rate1 × 10 -31 × 10 -3Number of dense layers22Well 3Activation function in each layer{relu, linear}{relu, linear}Number of neurons in each layer{60, 1}{60, 1}Number of parameters for each layer{900, 61}{900, 61}", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Models' performance indicators of the validation.", "figure_data": "WellsMetricsm 𝑔m 𝑙Well 1MSE MAE1.37 × 10 -7 2.69 × 10 -41.62 × 10 -7 2.72 × 10 -4Well 2MSE MAE8.46 × 10 -7 9.79 × 10 -41.55 × 10 -7 2.96 × 10 -4Well 3MSE MAE1.23 × 10 -7 2.80 × 10 -41.65 × 10 -7 2.68 × 10 -4", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" } ]
Carine Menezes Rebello; Johannes Jäschke; Idelfonso B R Nogueira
[ { "authors": "F Tao; Q Qi; L Wang; A Nee", "journal": "Engineering", "ref_id": "b0", "title": "Digital twins and cyber-physical systems toward smart manufacturing and industry 4.0: Correlation and comparison", "year": "2019" }, { "authors": "", "journal": "Springer International Publishing", "ref_id": "b1", "title": "Digital Twins for Cyber-Physical Systems Security: State of the Art and Outlook", "year": "2019" }, { "authors": "F Biesinger; D Meike; B Kraß; M Weyrich", "journal": "", "ref_id": "b2", "title": "A digital twin for production planning based on cyber-physical systems: A case study for a cyberphysical system-based creation of a digital twin", "year": "2018" }, { "authors": "C Lo; C Chen; R Y Zhong", "journal": "Advanced Engineering Informatics", "ref_id": "b3", "title": "A review of digital twin in product design and development", "year": "2021" }, { "authors": "E Ors; R Schmidt; M Mighani; M Shalaby", "journal": "", "ref_id": "b4", "title": "A conceptual framework for ai-based operational digital twin in chemical process engineering", "year": "2020" }, { "authors": "R He; G Chen; C Dong; S Sun; X Shen", "journal": "ISA Transactions", "ref_id": "b5", "title": "Data-driven digital twin technology for optimized control in process systems", "year": "2019" }, { "authors": "L Gao; M Jia; D Liu", "journal": "Journal of Software Engineering and Applications", "ref_id": "b6", "title": "Process Digital Twin and Its Application in Petrochemical Industry", "year": "2022" }, { "authors": "F Tao; H Zhang; A Liu; A Y C Nee", "journal": "IEEE Transactions on Industrial Informatics", "ref_id": "b7", "title": "Digital twin in industry: State-of-the-art", "year": "2019" }, { "authors": "M Grieves; J Vickers", "journal": "Transdisciplinary Perspectives on Complex Systems: New Findings and Approaches", "ref_id": "b8", "title": "Digital twin: Mitigating unpredictable, undesirable emergent behavior in complex systems", "year": "2016" }, { "authors": "S Haag; R Anderl", "journal": "Journal of Manufacturing Systems", "ref_id": "b9", "title": "Review of digital twin about concepts, technologies, and industrial applications", "year": "2018" }, { "authors": "R Ganguli; S Adhikari", "journal": "Applied Mathematical Modelling", "ref_id": "b10", "title": "The digital twin of discrete dynamic systems: Initial approaches and future challenges", "year": "2020" }, { "authors": "Q Qi; F Tao; T Hu; N Anwer; A Liu; Y Wei; L Wang; A Nee", "journal": "Journal of Manufacturing Systems", "ref_id": "b11", "title": "Enabling technologies and tools for digital twin", "year": "2021" }, { "authors": "D Jones; C Snider; A Nassehi; J Yon; B Hicks", "journal": "CIRP Journal of Manufacturing Science and Technology", "ref_id": "b12", "title": "Characterising the Digital Twin: A systematic literature review", "year": "2020" }, { "authors": "T Y Melesse; V Di Pasquale; S Riemma", "journal": "IET Collaborative Intelligent Manufacturing", "ref_id": "b13", "title": "Digital Twin models in industrial operations: State-of-the-art and future research directions", "year": "2021" }, { "authors": "A M Madni; C C Madni; S D Lucero", "journal": "Systems", "ref_id": "b14", "title": "Leveraging digital twin technology in model-based systems engineering", "year": "2019" }, { "authors": "", "journal": "Springer International Publishing", "ref_id": "b15", "title": "The Convergence of Digital Twin, IoT, and Machine Learning: Transforming Data into Action", "year": "2020" }, { "authors": "M M Rathore; S A Shah; D Shukla; E Bentafat; S Bakiras", 
"journal": "IEEE Access", "ref_id": "b16", "title": "The role of ai, machine learning, and big data in digital twinning: A systematic literature review, challenges, and opportunities", "year": "2021" }, { "authors": "H Gong; S Cheng; Z Chen; Q Li", "journal": "Nuclear Science and Engineering", "ref_id": "b17", "title": "Data-Enabled Physics-Informed Machine Learning for Reduced-Order Modeling Digital Twin: Application to Nuclear Reactor Physics", "year": "2022" }, { "authors": "T Zohdi", "journal": "Computer Methods in Applied Mechanics and Engineering", "ref_id": "b18", "title": "A machine-learning framework for rapid adaptive digital-twin based fire-propagation simulation in complex environments", "year": "2020" }, { "authors": "T Goodwin; J Xu; N Celik; C.-H Chen", "journal": "Journal of Simulation", "ref_id": "b19", "title": "Real-time digital twin-based optimization with predictive simulation learning", "year": "2022" }, { "authors": "H Song; M Song; X Liu", "journal": "Applied Energy", "ref_id": "b20", "title": "Online autonomous calibration of digital twins using machine learning with application to nuclear power plants", "year": "2022" }, { "authors": "A Thelen; X Zhang; O Fink; Y Lu; S Ghosh; B D Youn; M D Todd; S Mahadevan; C Hu; Z Hu", "journal": "Structural and Multidisciplinary Optimization", "ref_id": "b21", "title": "A comprehensive review of digital twin-part 2: roles of uncertainty quantification and optimization, a battery digital twin, and perspectives", "year": "2023" }, { "authors": "J Gawlikowski; C R N Tassi; M Ali; J Lee; M Humt; J Feng; A Kruspe; R Triebel; P Jung; R Roscher; M Shahzad; W Yang; R Bamler; X X Zhu", "journal": "Artificial Intelligence Review", "ref_id": "b22", "title": "A survey of uncertainty in deep neural networks", "year": "2023" }, { "authors": "H M D Kabir; A Khosravi; M A Hosen; S Nahavandi", "journal": "IEEE Access", "ref_id": "b23", "title": "Neural network-based uncertainty quantification: A survey of methodologies and applications", "year": "2018" }, { "authors": "N Pawlowski; A Brock; M C H Lee; M Rajchl; B Glocker", "journal": "", "ref_id": "b24", "title": "Implicit weight uncertainty in neural networks", "year": "2018" }, { "authors": "E A Costa; C D M Rebello; M Fontana; L Schnitman; I B D R Nogueira", "journal": "Mathematics", "ref_id": "b25", "title": "A robust learning methodology for uncertainty-aware scientific machine learning models", "year": "2023" }, { "authors": "T Y Lin; Z Jia; C Yang; Y Xiao; S Lan; G Shi; B Zeng; H Li", "journal": "Advanced Engineering Informatics", "ref_id": "b26", "title": "Evolutionary digital twin: A new approach for intelligent industrial product development", "year": "2021" }, { "authors": "E A Costa; C D M Rebello; M Fontana; L Schnitman; I B D R Nogueira", "journal": "Mathematics", "ref_id": "b27", "title": "A Robust Learning Methodology for Uncertainty-Aware Scientific Machine Learning Models", "year": "2022" }, { "authors": "T Gneiting; F Balabdaoui; A E Raftery", "journal": "Journal of the Royal Statistical Society Series B: Statistical Methodology", "ref_id": "b28", "title": "Probabilistic Forecasts, Calibration and Sharpness", "year": "2007" }, { "authors": "J Woodcock; C Gomes; H D Macedo; P G Larsen", "journal": "Springer International Publishing", "ref_id": "b29", "title": "Uncertainty quantification and runtime monitoring using environment-aware digital twins", "year": "2021" }, { "authors": "M Rahman; A Khan; S Anowar; M Al-Imran; R Verma; D Kumar; K Kobayashi; S Alam", "journal": 
"Springer International Publishing", "ref_id": "b30", "title": "Leveraging Industry 4.0: Deep Learning, Surrogate Model, and Transfer Learning with Uncertainty Quantification Incorporated into Digital Twin for Nuclear System", "year": "2021" }, { "authors": "T R Wanasinghe; L Wroblewski; B K Petersen; R G Gosine; L A James; O De Silva; G K I Mann; P J Warrian", "journal": "IEEE Access", "ref_id": "b31", "title": "Digital twin for the oil and gas industry: Overview, research trends, opportunities, and challenges", "year": "2020" }, { "authors": "F P Knebel; R Trevisan; G S Do Nascimento; M Abel; J A Wickboldt", "journal": "SPE Offshore Europe Conference and Exhibition", "ref_id": "b32", "title": "A study on cloud and edge computing for the implementation of digital twins in the oil & gas industries", "year": "2019-09-03" }, { "authors": "B Klein; D Rossin", "journal": "Omega", "ref_id": "b33", "title": "Data quality in neural network models: effect of error rate and magnitude of error on predictive accuracy", "year": "1999" }, { "authors": "C R Hicks", "journal": "Holt, Rinehart and Winston", "ref_id": "b34", "title": "Fundamental Concepts in the Design of Experiments", "year": "1964" }, { "authors": "M Stein", "journal": "Technometrics", "ref_id": "b35", "title": "Large sample properties of simulations using latin hypercube sampling", "year": "1987" }, { "authors": "A B Owen", "journal": "Journal of the American Statistical Association", "ref_id": "b36", "title": "Controlling Correlations in Latin Hypercube Samples", "year": "1994" }, { "authors": "J M P Menezes; G A Barreto", "journal": "Neurocomputing", "ref_id": "b37", "title": "Long-term time series prediction with the NARX network: An empirical evaluation", "year": "2008" }, { "authors": "Hang Xie; Hao Tang; Yu-He Liao", "journal": "IEEE", "ref_id": "b38", "title": "Time series prediction based on NARX neural networks: An advanced approach", "year": "2009" }, { "authors": "C M Rebello; P H Marrocos; E A Costa; V V Santana; A E Rodrigues; A M Ribeiro; I B R Nogueira", "journal": "Processes", "ref_id": "b39", "title": "Machine Learning-Based Dynamic Modeling for Process Engineering Applications: A Guideline for Simulation and Prediction from Perceptron to Deep Learning", "year": "2022" }, { "authors": "I B Nogueira; A M Ribeiro; R Requião; K V Pontes; H Koivisto; A E Rodrigues; J M Loureiro", "journal": "Applied Soft Computing", "ref_id": "b40", "title": "A quasi-virtual online analyser based on an artificial neural networks and offline measurements to predict purities of raffinate/extract in simulated moving bed processes", "year": "2018" }, { "authors": "X He; H Asada", "journal": "IEEE", "ref_id": "b41", "title": "A New Method for Identifying Orders of Input-Output Models for Nonlinear Dynamic Systems", "year": "1993" }, { "authors": "W Xia; G Zheng; Y Zhu; J Zhang; J Wang; A P Petropulu", "journal": "", "ref_id": "b42", "title": "Deep learning based beamforming neural networks in downlink miso systems", "year": "2019" }, { "authors": "J Bergstra; Y Bengio", "journal": "J. Mach. Learn. 
Res", "ref_id": "b43", "title": "Random search for hyper-parameter optimization", "year": "2012" }, { "authors": "L Li; K Jamieson; G Desalvo; A Rostamizadeh; A Talwalkar", "journal": "Journal of Machine Learning Research", "ref_id": "b44", "title": "Hyperband: A novel bandit-based approach to hyperparameter optimization", "year": "2018" }, { "authors": "M O Finkelstein; W B Fairley", "journal": "Harvard Law Review", "ref_id": "b45", "title": "A Bayesian Approach to Identification Evidence", "year": "1970" }, { "authors": "J Lampinen; A Vehtari", "journal": "Neural Networks", "ref_id": "b46", "title": "Bayesian approach for neural networks-review and case studies", "year": "2001" }, { "authors": "R Swinburne", "journal": "Revue Philosophique de la France Et de l'Etranger", "ref_id": "b47", "title": "Bayes' theorem", "year": "2004" }, { "authors": "K.-R Koch", "journal": "Springer", "ref_id": "b48", "title": "Bayes' Theorem", "year": "1990" }, { "authors": "S Brooks", "journal": "Journal of the Royal Statistical Society: Series D (The Statistician)", "ref_id": "b49", "title": "Markov chain Monte Carlo method and its application", "year": "1998" }, { "authors": "A Shapiro", "journal": "Elsevier", "ref_id": "b50", "title": "Monte carlo sampling methods", "year": "2003" }, { "authors": "X Zheng; J Lu; D Kiritsis", "journal": "International Journal of Production Research", "ref_id": "b51", "title": "The emergence of cognitive digital twin: vision, challenges and opportunities", "year": "2022" }, { "authors": "M A Al Faruque; D Muthirayan; S.-Y Yu; P P Khargonekar", "journal": "", "ref_id": "b52", "title": "Cognitive digital twin for manufacturing systems", "year": "2021" }, { "authors": "T A Le; A G Baydin; R Zinkov; F Wood", "journal": "", "ref_id": "b53", "title": "Using synthetic data to train neural networks is model-based reasoning", "year": "2017" }, { "authors": "J Matias; J P Oliveira; G A Le Roux; J Jäschke", "journal": "Journal of Process Control", "ref_id": "b54", "title": "Steady-state real-time optimization using transient measurements on an experimental rig", "year": "2022" }, { "authors": "D Krishnamoorthy; B Foss; S Skogestad", "journal": "Computers & Chemical Engineering", "ref_id": "b55", "title": "Steady-state real-time optimization using transient measurements", "year": "2018" }, { "authors": "I B Nogueira; A M Ribeiro; A E Rodrigues; J M Loureiro", "journal": "Computers & Chemical Engineering", "ref_id": "b56", "title": "Dynamics of a True Moving Bed separation process: Effect of operating variables on performance indicators using orthogonalization method", "year": "2016" } ]
[ { "formula_coordinates": [ 4, 65.37, 562.64, 440.05, 10.59 ], "formula_id": "formula_0", "formula_text": "ŷ(𝑡) = 𝑓 (𝑦(𝑡 -1), 𝑦(𝑡 -2), … , 𝑦(𝑡 -𝑁 𝑏 ), 𝑢(𝑡 -1), 𝑢(𝑡 -2), … , 𝑢(𝑡 -𝑁 𝑎 ) + 𝑒(𝑡),(1)" }, { "formula_coordinates": [ 5, 63.74, 123.92, 437.8, 32.34 ], "formula_id": "formula_1", "formula_text": "𝑞 (𝑚) 𝑗 = |𝛿𝑦| √ (𝛿𝑥 1 ) 2 + ... + (𝛿𝑥 𝑚 ) 2 = | | 𝑓 1 𝛿𝑥 1 + ... + 𝑓 𝑚 𝛿𝑥 𝑚 | | √ (𝛿𝑥 1 ) 2 + ... + (𝛿𝑥 𝑚 ) 2 , (2" }, { "formula_coordinates": [ 5, 501.55, 131.63, 3.87, 8.9 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 5, 63.74, 221.76, 437.8, 36.59 ], "formula_id": "formula_3", "formula_text": "𝑞 (𝑛) = ( 𝑝 ∏ 𝑘=1 √ 𝑛𝑞 𝑗 (𝑘) (𝑚) ) ( 1 𝑝 ) , (3" }, { "formula_coordinates": [ 5, 501.55, 238.82, 3.87, 8.9 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 7, 63.74, 544.53, 441.68, 30.53 ], "formula_id": "formula_5", "formula_text": "𝐿(𝜼 | 𝑫) = 1 𝑛 𝑛 ∑ 𝑖=1 (𝑦 𝑖 -ŷ𝑖 ) 𝑇 (𝑦 𝑖 -ŷ𝑖 ),(5)" }, { "formula_coordinates": [ 7, 63.74, 643.53, 437.8, 17.46 ], "formula_id": "formula_6", "formula_text": "𝒈 𝜽 ( 𝜼| 𝑫, 𝑰) ∝ ∫ 𝑛𝑝-1 𝐿( 𝜼| 𝑫)𝒈 𝜽 ( 𝜼| 𝑰)𝑑𝜽 𝑛𝑝-𝑗 . (6" }, { "formula_coordinates": [ 7, 501.55, 643.69, 3.87, 8.9 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 8, 65.41, 398.81, 34.14, 26.75 ], "formula_id": "formula_8", "formula_text": "θ = ∫ ∞ -∞" }, { "formula_coordinates": [ 8, 63.74, 458.7, 437.8, 26.75 ], "formula_id": "formula_9", "formula_text": "𝑈 𝜽𝜽 = ∫ ∞ -∞ (𝜂 -θ) 𝑇 (𝜂 -θ)𝒈 𝜽 ( 𝜼| 𝑫, 𝑰)𝑑𝜽. (8" }, { "formula_coordinates": [ 8, 501.55, 467.89, 3.87, 8.9 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 9, 63.74, 405.4, 437.8, 11.15 ], "formula_id": "formula_11", "formula_text": "𝑅 𝑛 → 𝑅 𝑛-𝑞 , (9" }, { "formula_coordinates": [ 9, 501.55, 407.65, 3.87, 8.9 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 10, 63.74, 135.55, 437.53, 29.83 ], "formula_id": "formula_13", "formula_text": "𝑍 = 𝑏 ∑ 𝑛=𝑎 [𝐻(𝑦 𝑚𝑒𝑠𝑢𝑟𝑒𝑑 -𝐼𝑛𝑓 (𝑦 𝐶𝑜𝑣𝑒𝑟𝑎𝑔𝑒𝑅𝑒𝑔𝑖𝑜𝑛 ) + 𝐻(𝑆𝑢𝑝(𝑦 𝐶𝑜𝑣𝑒𝑟𝑎𝑔𝑒𝑅𝑒𝑔𝑖𝑜𝑛 ) -𝑦 𝑚𝑒𝑎𝑠𝑢𝑟𝑒𝑑 )], (10" }, { "formula_coordinates": [ 10, 501.27, 146.55, 4.15, 8.9 ], "formula_id": "formula_14", "formula_text": ")" }, { "formula_coordinates": [ 10, 63.74, 265.64, 437.53, 27.54 ], "formula_id": "formula_15", "formula_text": "{ 𝑎 = 0 + 𝑘 𝑏 = 𝑀𝐻 + 𝑘. (11" }, { "formula_coordinates": [ 10, 501.27, 276.72, 4.15, 8.9 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 12, 67.22, 421.75, 434.05, 10.59 ], "formula_id": "formula_17", "formula_text": "ṁ𝑔 = 𝑤 𝑔 -𝑤 𝑔,𝑜𝑢𝑡 , (12" }, { "formula_coordinates": [ 12, 67.22, 421.75, 438.19, 54.43 ], "formula_id": "formula_18", "formula_text": ") ṁ𝑙 = 𝑤 𝑙 -𝑤 𝑙,𝑜𝑢𝑡 ,(13)" }, { "formula_coordinates": [ 12, 63.74, 565.62, 437.53, 15.82 ], "formula_id": "formula_19", "formula_text": "𝑤 𝑙 = 𝑣 𝑜 𝜃 𝑟𝑒𝑠 √ 𝜌 𝑙 (𝑃 𝑝𝑢𝑚𝑝 -𝑃 𝑏𝑖 ), (14" }, { "formula_coordinates": [ 12, 501.27, 570.85, 4.15, 8.9 ], "formula_id": "formula_20", "formula_text": ")" }, { "formula_coordinates": [ 12, 63.74, 660.77, 437.53, 26.91 ], "formula_id": "formula_21", "formula_text": "𝑃 𝑏𝑖 = 𝑃 𝑟ℎ + 𝜌 𝑚𝑖𝑥 𝑔Δℎ + 128𝜇 𝑚𝑖𝑥 (𝑤 𝑔 + 𝑤 𝑙 )𝐿 𝜋𝜌 𝑚𝑖𝑥 𝐷 4 , (15" }, { "formula_coordinates": [ 12, 501.27, 669.35, 4.15, 8.9 ], "formula_id": "formula_22", "formula_text": ")" }, { "formula_coordinates": [ 13, 63.74, 132.42, 437.53, 26.15 ], "formula_id": "formula_23", "formula_text": "𝜌 𝑚𝑖𝑥 = 𝑚 𝑡𝑜𝑡𝑎𝑙 𝑉 𝑡𝑜𝑡𝑎𝑙 = 𝑚 𝑔 + 𝑚 𝑙 𝑉 𝑡𝑜𝑡𝑎𝑙 . 
(16" }, { "formula_coordinates": [ 13, 501.27, 141.01, 4.15, 8.9 ], "formula_id": "formula_24", "formula_text": ")" }, { "formula_coordinates": [ 13, 63.74, 205.84, 437.53, 26.15 ], "formula_id": "formula_25", "formula_text": "𝑉 𝑡𝑜𝑡𝑎𝑙 = 𝑉 𝑔 + 𝑉 𝑙 = 𝑚 𝑙 𝜌 𝑙 + 𝑚 𝑔 𝜌 𝑔 . (17" }, { "formula_coordinates": [ 13, 501.27, 214.43, 4.15, 8.9 ], "formula_id": "formula_26", "formula_text": ")" }, { "formula_coordinates": [ 13, 63.74, 282.62, 437.53, 24.12 ], "formula_id": "formula_27", "formula_text": "𝜌 𝑔 = 𝑃 𝑏𝑖 𝑀 𝑔 𝑅𝑇 , (18" }, { "formula_coordinates": [ 13, 501.27, 291.2, 4.15, 8.9 ], "formula_id": "formula_28", "formula_text": ")" }, { "formula_coordinates": [ 13, 63.74, 368.92, 437.53, 13.38 ], "formula_id": "formula_29", "formula_text": "𝑤 𝑡𝑜𝑡𝑎𝑙 = 𝑤 𝑔,𝑜𝑢𝑡 + 𝑤 𝑙,𝑜𝑢𝑡 = 𝜃 𝑡𝑜𝑝 √ 𝜌 𝑚𝑖𝑥 (𝑃 𝑟ℎ -𝑃 𝑎𝑡𝑚 ), (19" }, { "formula_coordinates": [ 13, 501.27, 371.72, 4.15, 8.9 ], "formula_id": "formula_30", "formula_text": ")" }, { "formula_coordinates": [ 13, 63.74, 443.33, 437.53, 25.87 ], "formula_id": "formula_31", "formula_text": "𝛼 𝑙 = 𝑚 𝑙 𝑚 𝑡𝑜𝑡𝑎𝑙 = 𝑤 𝑙,𝑜𝑢𝑡 𝑤 𝑡𝑜𝑡𝑎𝑙 . (20" }, { "formula_coordinates": [ 13, 501.27, 451.64, 4.15, 8.9 ], "formula_id": "formula_32", "formula_text": ")" }, { "formula_coordinates": [ 14, 83.96, 160.09, 373.27, 35.78 ], "formula_id": "formula_33", "formula_text": "𝑄 𝑔,1 ∕(𝑠𝐿𝑚𝑖𝑛 -1 ) 𝑄 𝑔,2 ∕(𝑠𝐿𝑚𝑖𝑛 -1 ) 𝑄 𝑔,3 ∕(𝑠𝐿𝑚𝑖𝑛 -1 ) 𝑃 𝑝𝑢𝑚𝑝 ∕(𝑏𝑎𝑟) Minimum 1 1 1 1.3 Maximum 5 5 5 4" } ]
2023-11-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b12", "b25", "b10", "b27", "b9", "b13", "b23", "b10", "b25", "b27", "b25", "b27", "b25", "b27", "b11", "b12", "b11", "b12" ], "table_ref": [], "text": "Deep Neural Networks (DNNs) have revolutionized the machine learning field through their superior performance in various tasks especially in the field of computer vision [12,13,22], natural language processing [6], and speech technology [5]. In essence, a DNN comprises a sequence of layers containing trainable parameters (weights and bias) to learn a complex mapping between input signals and output labels. For deploying DNNs in real-world applications, it is crucial to analyze their robustness or sensitiv-ity to hardware/sensor noise introduction [2], environment changes [26] and adversarial attacks [9]. Sensitivity analysis also helps in building a quantized-weights model with commensurate performance [11,28].\nIn the literature, sensitivity analysis of DNNs has been performed by perturbing either the input signal or the architectural parameters. The work in [8, 10,14,16,17,24] analyze DNN robustness by manipulating the input signals, whereas the work in [11,21,26,28,29] perturb architectural parameters to analyze robustness. Yeung et al. [31] provide a detailed sensitivity analysis of neural networks over input and parameter perturbations. In this work, we focus on the sensitivity analysis of DNNs when architectural parameters (learned weights) are perturbed.\nThe authors in [21, 26,28,29] provide a theoretical sensitivity analysis based on parameter perturbations. Shu and Zhu [21] propose an influence measure motivated by information geometry to quantify the effects of various perturbations to input signals and network parameters on DNN classifiers. Xiang et al. [29] design an iterative algorithm to compute the sensitivity of a DNN layer by layer, where sensitivity is defined as \"the mathematical expectation of absolute output variation due to weight perturbation with respect to all possible inputs\" [29]. Tsai et al. [26] study the robustness of the pairwise class margin function against weight perturbations. Weng et al. [28] compute a certified robustness bound for weight perturbations, within which a neural network will not make erroneous outputs. In addition, they also identify a useful connection between the developed certification and the challenge of weight quantization.\nIn this work, we empirically analyze the sensitivity of DNNs by manipulating their architectural parameters. We examine sensitivity of three widely used architectures (VGG [22], ResNet [12], and DenseNet [13]) under three types of parameter perturbations (Gaussian noise, weight zeroing and weight scaling). We apply the perturbations in two settings: over all the layers of a network simultaneously and over each layer at a time. Our work is motivated from [2], where they also empirically analyze the sensitivity of the pre-trained AlexNet and VGG16 networks to internal architecture and weight perturbations. However, our work is vastly different. Being motivated by their analysis, not only do we analyze the robustness or sensitivity of the newer networks, we also improve those models with different perturbation methods without any training. First, we extend the work by evaluating the sensitivity of heavily used CNN architectures in biometric tasks: VGG, ResNet, and DenseNet. 
Second, we perform additional weight manipulations (weight scaling, variants of weight zeroing, and additional setting of applying perturbations over the entire network parameters) in the sensitivity analysis. Third, we leverage the findings from the sensitivity analysis and propose an ensemble of perturbed models to improve the performance without any further training. Our main contributions are as follows: 1. We perform sensitivity analysis of three DNN architectures (VGG [22], ResNet [12] and DenseNet [13]) against parameter perturbations. 2. We apply a number of parameter perturbations (three types of perturbations and its variant in two settings) to analyze the sensitivity of deep neural networks in the context of iris presentation attack detection. 3. We leverage the sensitivity analysis to propose a better performing model by ensembling the perturbed models at two different levels: score-level and parameter-level. 4. We perform experiments using five datasets. Three of the datasets (IARPA, NDCLD-2015, Warsaw Postmortem v3) are used for training, whereas the others (LivDet-Iris-2017 and LivDet-Iris-2020) are used for testing. This represents a cross-dataset scenario, where training and testing are performed on different datasets.\nThe rest of the paper is organized as follows: Section 2 provides the details of various parameter perturbations used for the sensitivity analysis of DNNs; Section 3 describes the application scenario considered in this work; Section 4 explains the dataset and experimental setup; Section 5 provides the sensitivity analysis of the three architectures against the considered parameter perturbations; and Section 6 describes how we leverage the sensitivity analysis to generate an ensemble of perturbed models for improving performance. Finally, Section 7 summarizes the paper and provides future directions." }, { "figure_ref": [], "heading": "Parameter Perturbations", "publication_ref": [ "b25", "b14", "b8" ], "table_ref": [], "text": "We explore the sensitivity of neural networks by perturbing their architectural parameters (weights and bias). From here on, we use the terms 'architectural parameters', 'parameters', and 'weights' interchangeably. To measure the sensitivity, we consider the change in the performance of the DNN when weights are perturbed. Let n input samples be {x 1 , x 2 , ..., x n } and their output be {y 1 , y 2 , ..., y n }. Here, we labeled the positive class as '1' and the negative class as '0'. The predicted output values from a DNN approx-imator are {f (x 1 , W org ), f (x 2 , W org ), ..., f (x n , W org )}, where W org are the learned parameters. We measure the performance of the DNN in terms of True Detection Rate (TDR). TDR is a percentage of positive samples correctly classified:\nT DR org = n i (f (x i , W org ) > T ) n i y i * 100 (1)\nwhere, T is the threshold. The input sample with a predicted value above the threshold is considered a positive class. After weight perturbation, we estimate the output as {f (x 1 , W mod ), f (x 2 , W mod ), ..., f (x n , W mod )}, where W mod are the perturbed parameters. We then use these predicted values to measure the performance of DNN (T DR mod ). The higher the change in the performance (|T DR org -T DR mod |), the higher the sensitivity of the neural network to the particular perturbation. We perturb the parameters in two settings: manipulating parameters of all layers simultaneously and manipulating parameters one layer at a time. 
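As a concrete illustration of this sensitivity measure, the sketch below computes the TDR of Eq. (1) from prediction scores and labels and then the absolute TDR change caused by a perturbation. It is a minimal sketch assuming Python/NumPy; the function and variable names are illustrative and are not taken from the paper's code.

```python
import numpy as np

def true_detection_rate(scores, labels, threshold):
    """TDR of Eq. (1): percentage of positive (PA) samples whose
    predicted score exceeds the threshold."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    positives = labels == 1
    detected = (scores > threshold) & positives
    return 100.0 * detected.sum() / positives.sum()

def sensitivity(scores_original, scores_perturbed, labels, threshold):
    """Sensitivity as the absolute change |TDR_org - TDR_mod| induced by a
    weight perturbation, evaluated on the same test samples."""
    tdr_org = true_detection_rate(scores_original, labels, threshold)
    tdr_mod = true_detection_rate(scores_perturbed, labels, threshold)
    return abs(tdr_org - tdr_mod)
```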
The first setting aims to understand the overall sensitivity of DNNs, whereas the second setting examines which layer has more impact on the model. The higher the sensitivity, the lower the generalization of the DNN [17,26]. The three perturbations we consider are Gaussian noise manipulation, weight zeroing, and weight scaling. These perturbations resemble (a) noise introduction due to defects in hardware implementations of neural networks [15], and (b) adversarial weight perturbations [9,19] on open-sourced models. Eventually, the choice of perturbations is based on their simplicity. This work has also the potential of obtaining quantized or compressed DNN models, which consume less memory with equivalent performance. Details of these perturbations are as follows: 1. Gaussian Noise Manipulation: Here, we manipulate the original parameters of the layers by adding Gaussian noise sampled from a normal distribution of zero mean and scaled standard deviation. We control the scaling of the standard deviation by the scalar factor α. The modified parameters are defined as\nW mod = W org + N (0, α * σ(W org )).\n(2)\nHere, W org are the original parameters, W mod are the modified parameters, and N (µ, σ) is the normal distribution. We calculate σ(W org ) for a particular layer by first flattening the parameter tensor to a 1-D array and then computing the standard deviation. So, the standard deviation and the Gaussian noise distribution will differ for each layer since σ(W org ) varies from layer to layer. Consequently, the absolute perturbations differ for each layer. However, relative perturbations are the same across layers. 2. Weight Zeroing: In the second manipulation, we randomly select a certain proportion of parameters and set them to zero. The portion of parameters is determined by a scalar factor β. The modified parameters are represented as\nW mod [random(β, W org )] = 0.(3)\nHere, random(., .) is the function that returns the index of β proportion of randomly selected parameters from the original set of parameters. We also define another version of weight zeroing, where weights are first sorted, and then β proportion of low-magnitude weights is set to zero." }, { "figure_ref": [], "heading": "Weight Scaling:", "publication_ref": [], "table_ref": [], "text": "The third perturbation scales the original parameters by a scalar factor γ as\nW mod = γ * W org .\n(4)" }, { "figure_ref": [], "heading": "Application Scenario", "publication_ref": [ "b0" ], "table_ref": [], "text": "We perform sensitivity analysis in the context of iris presentation attack detection (PAD). A presentation attack (PA) occurs when an adversary presents a fake or altered biometric sample such as printed eyes, plastic eyes, or cosmetic contact lenses to circumvent the iris recognition system [1]. Our application is to detect these PAs launched against an iris system. We formulate the detection problem as a two-class problem based on DNNs, where the input is a near-infrared iris image and the output is a PA score (range from 0-1) which is based on a specified threshold labeled as \"bonafide\" or \"PA\"." }, { "figure_ref": [], "heading": "Datasets and Experimental Setup", "publication_ref": [ "b26", "b24", "b2", "b2", "b11", "b19", "b12", "b2", "b19", "b22" ], "table_ref": [ "tab_1", "tab_0" ], "text": "The training data we use to build our iris PAD models are IARPA, NDCLD-2015 [27] and Warsaw PostMortem v3 [25] datasets. 
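As a reference point for the three perturbations defined in Section 2 (Eqs. 2-4), the sketch below modifies a model's parameters either across all layers or for a single named layer. It is a minimal sketch assuming a PyTorch model; the prefix-based layer selection and function names are illustrative simplifications, not the authors' implementation.

```python
import torch

def gaussian_noise(model, alpha, layer_name=None):
    """Eq. (2): add zero-mean Gaussian noise with standard deviation
    alpha * sigma(W), where sigma(W) is computed per layer."""
    for name, param in model.named_parameters():
        if layer_name is not None and not name.startswith(layer_name):
            continue
        sigma = param.data.std()                       # per-layer standard deviation
        param.data.add_(torch.randn_like(param) * alpha * sigma)

def zero_weights(model, beta, layer_name=None, lowest_magnitude=False):
    """Eq. (3): set a fraction beta of the parameters to zero, chosen at
    random or (variant) among the lowest-magnitude weights."""
    for name, param in model.named_parameters():
        if layer_name is not None and not name.startswith(layer_name):
            continue
        flat = param.data.view(-1)
        k = int(beta * flat.numel())
        if k == 0:
            continue
        if lowest_magnitude:
            idx = flat.abs().argsort()[:k]                              # smallest |w| first
        else:
            idx = torch.randperm(flat.numel(), device=flat.device)[:k]  # random selection
        flat[idx] = 0.0

def scale_weights(model, gamma, layer_name=None):
    """Eq. (4): multiply the parameters by a scalar gamma."""
    for name, param in model.named_parameters():
        if layer_name is not None and not name.startswith(layer_name):
            continue
        param.data.mul_(gamma)
```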
The IARPA dataset is a proprietary dataset consisting of 19,453 bonafide irides and 4,047 presentation attack (PA) samples. From the NDCLD-2015 dataset, we use 2,236 cosmetic contact lens images for training. From the Warsaw PostMortem v3 dataset, 1,200 cadaver iris images from the first 37 cadavers are used for training. Testing is performed on the LivDet-Iris-2017 [30] and LivDet-Iris-2020 [3] datasets. Both of these are publicly available competition datasets for evaluating iris presentation attack detection performance. The LivDet-Iris-2017 dataset [30] consists of four subsets: Clarkson, Warsaw, Notre Dame, and IIITD-WVU. All subsets contain train and test partitions, and we use only the test partition. Warsaw and Notre Dame subsets further contains two splits in the test partition: 'Known' and 'Unknown'. The 'Known' split corresponds to the scenario where PAs of the same type or images from similar sensors are present in both train and test partitions, while the 'Unknown' split contains different types of PAs or images from different types of sensors in the train and test partitions. In our case, both test splits are considered as 'Unknown' type as we use different datasets for training. Such a testing scenario is referred to cross-dataset. However, we keep the original terminologies ('Known' and 'Unknown') of test splits in the work. The LivDet-Iris-2020 [3] consists of a single test split, and this scenario also corresponds to cross-dataset. Table 1 describes all training and test sets, along with the types of PAs and images present in them. In aggregate, both datasets provide a diverse set of PAs.\nWe use three iris PA detectors for sensitivity analysis. Two of the detectors utilize VGG19 [22] and ResNet101 [12] networks as their backbone architecture. The third detector is D-NetPAD [20], where the backbone architecture is DenseNet161 [13]. The D-NetPAD shows state-of-theart performance on both LivDet-Iris-2017 and LivDet-Iris-2020 iris PAD competitions [3,20]. Since D-NetPAD already had the state-of-the-art performance on the evaluation datasets and Smith et. al. [23] found that convolutionbased networks can perform same as vision transformer at scale, we did not perform similar analysis or experiments on transformer-based models like ViT [7]. The convolutional networks we use require a cropped iris region resized to 224 × 224 as input. For training, we initialize the model with the weights from the ImageNet dataset [4] and then finetune the models using the training datasets described above. The learning rate was set to 0.005, the batch size was 20, the number of epochs was 50, the optimization algorithm was stochastic gradient descent with a momentum of 0.9, and the loss function used was cross-entropy.\nWe measure the sensitivity of these DNNs by evaluating their performance as a function of the weight perturbations. The performance is estimated in terms of TDR (%) at 0.2% False Detection Rate (FDR). 1 FDR is the percentage of bonafide samples incorrectly classified as PAs. 2 In Table 3, the row corresponding to the 'Original' method reports the performance of these models on the LivDet-Iris-2017 and LivDet-Iris-2020 datasets before weights were perturbed. On the LivDet-Iris-2017 dataset, ResNet101 performs the best (average 74.55% TDR), whereas on the LivDet-Iris-2020 dataset, D-NetPAD performs the best (90.22% TDR). We also provide information about the number of weights and bias parameters present in all three models (Table 2). 
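A sketch of how these backbones and their parameter counts can be reproduced is given below. It assumes a recent torchvision implementation of VGG19, ResNet101, and DenseNet161 (the backbone of D-NetPAD); the fine-tuning settings are those stated in the text, while the function name and weight identifiers are illustrative assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_two_class_model(arch):
    """Backbones comparable to those in the paper, each with a two-class
    (bonafide vs. PA) head; layer names follow torchvision's implementations."""
    if arch == "vgg19":
        net = models.vgg19(weights="IMAGENET1K_V1")
        net.classifier[6] = nn.Linear(net.classifier[6].in_features, 2)
    elif arch == "resnet101":
        net = models.resnet101(weights="IMAGENET1K_V1")
        net.fc = nn.Linear(net.fc.in_features, 2)
    elif arch == "densenet161":
        net = models.densenet161(weights="IMAGENET1K_V1")
        net.classifier = nn.Linear(net.classifier.in_features, 2)
    else:
        raise ValueError(f"unknown architecture: {arch}")
    return net

# Parameter counts comparable to Table 2.
for arch in ("vgg19", "resnet101", "densenet161"):
    net = build_two_class_model(arch)
    n_params = sum(p.numel() for p in net.parameters())
    print(f"{arch}: {n_params / 1e6:.1f}M parameters")

# Fine-tuning settings stated in the text: SGD with momentum 0.9, learning
# rate 0.005, batch size 20, 50 epochs, cross-entropy loss.
optimizer = torch.optim.SGD(net.parameters(), lr=0.005, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```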
The VGG19 architecture has the highest number of parameters, followed by the ResNet101 architecture." }, { "figure_ref": [], "heading": "Sensitivity Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Gaussian Noise Addition", "publication_ref": [], "table_ref": [], "text": "The Gaussian noise manipulation involves the addition of Gaussian noise to the original parameters. [Table 1. Summary of training and test datasets along with the number of bonafide and PA iris images present in the datasets. The information about the sensors used to capture the images is also provided. Here, \"K. Test\" means a known test set of the dataset, and \"U. Test\" means an unknown test set (see text for explanation).] Figure 1a shows the performance of all the networks when we perturb the parameters of all layers with Gaussian noise. The scale factor (α) used to modify the standard deviation is shown on the x-axis. Every data point in the figure represents a single evaluation of the perturbed model. From a trend standpoint, the performance of all networks decreases as the standard deviation increases. However, this decrease is not linear.\nIn fact, there are performance gains at certain scales, and these scales differ across networks. For instance, the VGG19 network shows improvement for α = 0.3, 0.6, and 0.9, ResNet101 for α = 0.1, 0.3, and 0.9, and D-NetPAD for α = 0.1, 0.4, and 1.0. Surprisingly, certain scales give higher performance than the original model, such as a scale of 0.1 for the ResNet101 and D-NetPAD models, and 0.3 for the VGG19 model. The results indicate that all three networks are sensitive to Gaussian noise perturbations when the perturbations are applied over all layers of the network, and we cannot conclude which network is comparatively more stable under these weight perturbations.\nWe further analyze the impact of perturbing different layers on the performance of the models. We manipulate the parameters one layer at a time and observe the change in performance. For the layer-wise analysis, we show the results only for the D-NetPAD model since the other two models show similar performance trends. In the case of D-NetPAD, we select the first convolution layer and the last convolution layers of the four dense blocks for perturbation. Figure 1b shows the performance of D-NetPAD when an individual layer's parameters are perturbed. We observe that the initial layers have more influence on the performance of D-NetPAD than the later layers. The model is highly robust to perturbations in the last convolution layer of the fourth dense block, even at a scale factor of 30. Cheney et al. [2] also observe that perturbations in the initial layers have a higher impact on performance. Generally, initial layers focus more on capturing discriminative or representative features, whereas later layers are more responsible for forming decision boundaries. Manipulating the extracted features has more impact on performance than a slight change in the decision boundaries. Moreover, manipulating the initial layers changes the feature maps of all subsequent layers and, hence, propagates the error. Changes in the middle layers exhibit larger fluctuations in performance compared to the initial and later layers."
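A layer-wise sweep of this kind can be scripted as follows. The sketch reuses the gaussian_noise and true_detection_rate helpers sketched earlier and assumes a scoring helper, a test loader, and DenseNet-style parameter-name prefixes; all of these names are illustrative assumptions and are not taken from the authors' experiments.

```python
import copy

alphas = [0.1, 0.3, 0.5, 1.0, 2.0, 5.0, 10.0, 30.0]
# Illustrative DenseNet-161 parameter-name prefixes; the paper perturbs the
# first convolution layer and the last convolution layer of each dense block.
layers = ["features.conv0", "features.denseblock1", "features.denseblock2",
          "features.denseblock3", "features.denseblock4"]

results = {}
for layer in layers:
    for alpha in alphas:
        perturbed = copy.deepcopy(model)                 # keep the original weights intact
        gaussian_noise(perturbed, alpha, layer_name=layer)
        scores = score_samples(perturbed, test_loader)   # assumed scoring helper
        results[(layer, alpha)] = true_detection_rate(scores, labels, threshold)
```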
}, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_0", "fig_1", "fig_2" ], "heading": "Weight Zeroing", "publication_ref": [], "table_ref": [], "text": "The weight zeroing manipulation involves random selection of a particular fraction of weight parameters and setting them to zero. Figure 2a shows the performance of all three architectures when we manipulate the entire set of network parameters, while Figure 2b shows the performance of D-NetPAD when we perturb individual layers. Similar conclusions can be drawn from Figure 2a as drawn from Figure 1a that the overall performance of all three architectures decreases with an increase in the proportion of weights set to zero. However, certain perturbations give improved performance. For example zeroing 3% of weights improves the VGG19 network performance from 76.87% TDR (original) to 92.70% TDR. In the case of ResNet101, zeroing 3% of weights improves performance from 84.11% TDR (origi- nal) to 88.88% TDR. Again, all three networks are sensitive to the zeroing out of randomly selected weights.\nIn the layer-wise setup (Figure 2b), the performance of D-NetPAD is stable except for the first convolution layer. This is due to the fact that the original weights of the convolution layers have a zero mean and a small standard deviation ranging from 0.10 (first convolution layer) to 0.01 (last convolution layer) as shown in Figure 3. Initial layers have a higher standard deviation compared to later layers, which makes the network more sensitive to the manipulations in the initial layers. A similar performance trend is observed in the VGG19 and ResNet101 networks as well.\nSince most of the original weights are already close to 0, we apply a variant of weight zeroing where only low-magnitude weights are set to zero. Figure 4a shows the performance of all architectures when we manipulate the entire network in this fashion, while Figure 4b shows the performance of D-NetPAD on layer-wise manipulation. ResNet101 and D-NetPAD networks are observed to be robust to this manipulation as zeroing out even 33% of all weights does not affect their performance. VGG19 also shows robustness with only a 6% drop in performance, though its performance is not as stable as the ResNet101 and D-NetPAD networks. Figure 4b shows the sensitivity of the D-NetPAD on layer-wise perturbations. Zeroing out even 30% of the first convolution layer weights does not impact its performance. Remarkably, the manipulation in the last convolution layer of the first and second dense blocks shows a linear increase in performance. The performance of D-NetPAD increases from 90.22% TDR to 96.28% TDR upon manipulating the last convolution layer of the first dense block. This implies that we could zero out lowmagnitude weights and reduce the size of the model without affecting its performance. This finding is useful in building a compressed DNN model with better time and memory efficiency to deploy on mobile or embedded devices." }, { "figure_ref": [], "heading": "Weight Scaling", "publication_ref": [], "table_ref": [], "text": "This manipulation scales the original parameters with a scalar value. Figure 5a shows the performance of all three architectures when we manipulate the entire set of network parameters, while Figure 5b presents the performance of D-NetPAD when we perturb specific layers. The performance at scale 1 indicates the original performance without weight perturbations. 
Weight perturbations across the entire network resulted in a radical drop in performance even with a small scalar factor (0.8 or 1.1). In the layer-wise manipulation, the initial layers show a higher impact on the performance of D-NetPAD compared to the later layers. The manipulation in the last convolution layer does not impact the performance even at a scaling factor of 10. A similar performance trend is observed on the VGG19 and ResNet101 networks as well." }, { "figure_ref": [ "fig_2" ], "heading": "Findings", "publication_ref": [], "table_ref": [], "text": "Here are the main findings from the aforementioned analysis: 1. All three networks decrease in performance when perturbations are applied over the entire network. 3 However, the networks show robustness when low-magnitude weights are set to zero. The scaling of weights has a major negative impact on the performance of networks. 2. Layer-wise sensitivity analysis shows that perturbations in initial layers impacted the performance to a greater extent compared to the later layers. The weight distribution of all layers are zero-centered and later layers have a lower standard deviation compared to initial layers (Figure 3), making later layers less sensitive to weight zeroing and scaling perturbations as majority of their weights are already close to the zero mean. The zero-centered nature of weight distributions is also a reason why Gaussian noise perturbations have the most negative impact on the performance compared to the other perturbations. 3. Certain perturbations improve the performance of network models over the original one in both settings (entire network and layer-wise). This observation indicates that the parameters learned by the models during training are not optimum. Random change in the weights in their close vicinity shows improvement in the performance. Hence, there is further scope for optimizing weights. 4. Zeroing out low-magnitude weights results in better per-formance as well as reduces the size of the model." }, { "figure_ref": [], "heading": "Performance Improvement", "publication_ref": [ "b17" ], "table_ref": [ "tab_1" ], "text": "We observe that certain perturbations result in better performance, even higher than that of the original model. We leverage this observation and obtain better performing models using these perturbations without any additional training. In this regard, we explore two directions: the first is to find a single perturbed model which achieves good performance consistently, and the second is to create an ensemble of high-performing perturbed models. In the earlier part of the work, we analyzed the sensitivity of different architectures based on their performance on the LivDet-Iris-2020 dataset. Here, we select a high-performing perturbed model and validate its performance on the LivDet-Iris-2017 dataset. For the ensemble of models, we further explore two sub-directions based on the level of fusion. In the first, we simply fuse their decision scores using the sum rule. This level of fusion better spans the decision space and generalizes well to the test data [18]. However, it increases the inference time as decision scores are required from all the We repeat the experiment 100 times for each of the high-performing models and select the one with the most consistent performance.\n3. Ensemble Models at the Score-Level: We combine two consistent high-performing perturbed models by fusing their PA scores using the sum rule. 
For all three architectures, we fuse the above specified perturbed models with 4. Ensemble Models at the Parameter-Level: We create a single ensemble model by averaging the parameters of two consistent high-performing perturbed models. The PA score is generated from a single merged model. The models selected for fusion are the same ones used for ensembling at the score-level. Table 3 provides the performance of these models (based on VGG19, ResNet101, and D-NetPAD architectures). The performance of perturbed and ensemble models is better than the original model on both datasets. The observation holds true for all three architectures. The perturbed models show an average improvement of 47.12% and 8.97%, the ensemble model at the score-level shows an improvement of 16.01% and 10.65%, and the ensemble model at the parameter-level shows an improvement of 43.58% and 9.25% on the LivDet-Iris-2017 and LivDet-Iris-2020 datasets, respectively. One major advantage of these perturbed models is that these models are created without any further training. Another advantage is that these highperforming perturbed models have reduced model size." }, { "figure_ref": [], "heading": "Summary and Future Work", "publication_ref": [], "table_ref": [], "text": "We analyze the sensitivity of three DNN architectures (VGG19, ResNet101, and D-NetPAD) under three types of parameter perturbations (Gaussian noise manipulation, weight zeroing, and weight scaling). We apply the perturbations in two settings: modifying the weights across all layers and modifying weights layer-by-layer. We found that CNNs are generally less sensitive to a variant of weight zeroing, where low-magnitude weights are set to zero. From the layer-wise analysis, we observe that the CNNs are more robust to perturbations in later layers compared to the initial layers and Gaussian noise addition most negatively impacts the performance due to the zero-centered nature of weight distributions. Certain manipulations improve the performance over the original one. Based on these observations, we propose the use of an ensemble of models that consistently perform well on both LivDet-Iris-2017 and LivDet-Iris-2020 datasets. As future work, we will focus on finding the analytical optimum direction for weight perturbations. Additionally, the approach can be applied to other domains and tasks." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This research is based upon work supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&D Contract No. 2017 -17020200004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." } ]
Deep neural networks (DNNs) exhibit superior performance in various machine learning tasks, such as image classification, speech recognition, biometric recognition, and object detection. However, it is essential to analyze their sensitivity to parameter perturbations before deploying them in real-world applications. In this work, we assess the sensitivity of DNNs to perturbations of their weight and bias parameters. The sensitivity analysis involves three DNN architectures (VGG, ResNet, and DenseNet), three types of parameter perturbations (Gaussian noise, weight zeroing, and weight scaling), and two settings (entire network and layer-wise). We perform experiments in the context of iris presentation attack detection and evaluate on two publicly available datasets: LivDet-Iris-2017 and LivDet-Iris-2020. Based on the sensitivity analysis, we propose improved models simply by perturbing the parameters of the network without any additional training. We further combine these perturbed models at the score-level and at the parameter-level to improve the performance over the original model. The ensemble at the parameter-level shows an average improvement of 43.58% on the LivDet-Iris-2017 dataset and 9.25% on the LivDet-Iris-2020 dataset.
Investigating Weight-Perturbed Deep Neural Networks With Application in Iris Presentation Attack Detection
[ { "figure_caption": "Figure 1 .1Figure 1. Gaussian noise manipulation: (a) Performance (TDR at 0.2% FDR) of VGG19, ResNet101, and D-NetPAD when weights and bias parameters of the entire network are perturbed. (b) Performance of D-NetPAD when the individual layer's parameters (weights and bias) are perturbed. Here, Conv1 means the first convolution layer of the D-NetPAD, Dense1 LastConv means the last convolution layer of the first dense block, and so on.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Weight zeroing manipulation: (a) Performance (TDR at 0.2% FDR) of VGG19, ResNet101, and D-NetPAD when parameters of the entire network are perturbed. (b) Performance of D-NetPAD when the individual layer's parameters are perturbed.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Weight distribution of different layers of the trained D-NetPAD architecture. Mean (µ) and standard deviation (σ) are provided below each distribution.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .Figure 5 .45Figure 4. Variant of the weight zeroing manipulation (low-magnitude weights are set to zero): (a) Performance (TDR at 0.2% FDR) of VGG19, ResNet101, and D-NetPAD when parameters of the entire network are perturbed. (b) Performance of D-NetPAD when individual layer's parameters are perturbed.", "figure_data": "", "figure_id": "fig_3", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "The number of parameters (weights and bias) present in all convolutional layers and the entire network of the VGG19, ResNet101, and D-NetPAD architectures.", "figure_data": "Train/TestTrain", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The performance of VGG19, ResNet101, and D-NetPAD models in terms of True Detection Rate (%, higher the better) at 0.2% False Detection Rate on the LivDet-Iris-2017 and LivDet-Iris-2020 datasets. The performance is shown on original model (no parameter perturbations), perturbed model and an ensemble of model.", "figure_data": "DatasetsLivDet-Iris-2017SubsetsClarksonWarsawNotre DameIIITD-WVULivDet-Iris-2020SplitsTestK. Test U. Test K. Test U. TestTestVGG19 ModelOriginal51.3286.2510.1210099.001.4476.87Perturbed54.8891.129.0810097.781.5890.31Ensemble (Score-level)66.1792.957.2310098.003.1489.53Ensemble (Parameter-level)73.0184.9213.9099.7897.789.4388.26ResNet101 ModelOriginal15.8289.9391.6710099.4450.4784.11Perturbed23.0195.3394.6510095.6758.1486.40Ensemble (Score-level)21.6192.9594.6010089.8858.0291.07Ensemble (Parameter-level)19.1095.3394.3710095.6758.1489.92D-NetPAD ModelOriginal60.0476.6835.7610099.3332.0190.22Perturbed68.5494.9453.0210099.0050.3596.86Ensemble (Score-level)68.3493.8446.4010097.6648.0896.71Ensemble (Parameter-level)64.2994.9453.0210099.0042.5995.66", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" } ]
Renu Sharma; Redwan Sony; Arun Ross
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Information technology -Biometric Presentation Attack Detection -Part 1: Framework", "year": "" }, { "authors": "Nicholas Cheney; Martin Schrimpf; Gabriel Kreiman", "journal": "", "ref_id": "b1", "title": "On the robustness of convolutional neural networks to internal architecture and weight perturbations", "year": "2017" }, { "authors": "Priyanka Das; Joseph Mcfiratht; Zhaoyuan Fang; Aidan Boyd; Ganghee Jang; Amir Mohammadi; Sandip Purnapatra; David Yambay; Sébastien Marcel; Mateusz Trokielewicz", "journal": "", "ref_id": "b2", "title": "Iris liveness detection competition (LivDet-Iris)-the 2020 edition", "year": "2020" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "", "ref_id": "b3", "title": "ImageNet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Li Deng; Jinyu Li; Jui-Ting Huang; Kaisheng Yao; Dong Yu; Frank Seide; Michael Seltzer; Geoff Zweig; Xiaodong He; Jason Williams", "journal": "", "ref_id": "b4", "title": "Recent advances in deep learning for speech research at microsoft", "year": "2013" }, { "authors": "Li Deng; Yang Liu", "journal": "Springer", "ref_id": "b5", "title": "Deep learning in natural language processing", "year": "2018" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b6", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Alhussein Fawzi; Seyed-Mohsen Moosavi-Dezfooli; Pascal Frossard", "journal": "", "ref_id": "b7", "title": "Robustness of classifiers: From adversarial to random noise", "year": "2016" }, { "authors": "Siddhant Garg; Adarsh Kumar; Vibhor Goel; Yingyu Liang", "journal": "", "ref_id": "b8", "title": "Can adversarial weight perturbations inject neural backdoors", "year": "2020" }, { "authors": "Ian Goodfellow; Jonathon Shlens; Christian Szegedy", "journal": "", "ref_id": "b9", "title": "Explaining and harnessing adversarial examples", "year": "2015" }, { "authors": "Song Han; Jeff Pool; John Tran; William J Dally", "journal": "", "ref_id": "b10", "title": "Learning both weights and connections for efficient neural networks", "year": "2015" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b11", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "G Huang; Z Liu; L V D Maaten; K Q Weinberger", "journal": "", "ref_id": "b12", "title": "Densely connected convolutional networks", "year": "2017" }, { "authors": "Nikolaos Karianakis; Jingming Dong; Stefano Soatto", "journal": "", "ref_id": "b13", "title": "An empirical evaluation of current convolutional architectures' ability to manage nuisance location and scale variability", "year": "2016" }, { "authors": "", "journal": "Kluwer / Springer US", "ref_id": "b14", "title": "Analog VLSI Implementation of Neural Systems", "year": "1989" }, { "authors": "Seyed-Mohsen Moosavi-Dezfooli; Alhussein Fawzi; Omar Fawzi; Pascal Frossard", "journal": "", "ref_id": "b15", "title": "Universal adversarial perturbations", "year": "2017" }, { "authors": "Roman Novak; Yasaman Bahri; Daniel A Abolafia; Jeffrey Pennington; Jascha Sohl-Dickstein", "journal": "", "ref_id": "b16", "title": "Sensitivity and generalization in neural networks: an empirical study", "year": "2018" }, 
{ "authors": "R Polikar", "journal": "IEEE Circuits and Systems Magazine", "ref_id": "b17", "title": "Ensemble based systems in decision making", "year": "2006" }, { "authors": "Adnan Siraj Rakin; Zhezhi He; Jingtao Li; Fan Yao; Chaitali Chakrabarti; Deliang Fan", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)", "ref_id": "b18", "title": "T-bfa: Targeted bit-flip adversarial weight attack", "year": "2021" }, { "authors": "Renu Sharma; Arun Ross", "journal": "", "ref_id": "b19", "title": "D-NetPAD: An Explainable and Interpretable Iris Presentation Attack Detector", "year": "2020" }, { "authors": "Hai Shu; Hongtu Zhu", "journal": "", "ref_id": "b20", "title": "Sensitivity analysis of deep neural networks", "year": "2019" }, { "authors": "Karen Simonyan; Andrew Zisserman", "journal": "", "ref_id": "b21", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2015" }, { "authors": "L Samuel; Andrew Smith; Leonard Brock; Soham Berrada; De", "journal": "", "ref_id": "b22", "title": "Convnets match vision transformers at scale", "year": "2023" }, { "authors": "Christian Szegedy; Wojciech Zaremba; Ilya Sutskever; Joan Bruna; Dumitru Erhan; Ian Goodfellow; Rob Fergus", "journal": "", "ref_id": "b23", "title": "Intriguing properties of neural networks", "year": "2014" }, { "authors": "Mateusz Trokielewicz; Adam Czajka; Piotr Maciejewicz", "journal": "Image and Vision Computing (IVC)", "ref_id": "b24", "title": "Post-mortem iris recognition with deep-learning-based image segmentation", "year": "2020" }, { "authors": "Yu-Lin Tsai; Chia-Yi Hsu; Chia-Mu Yu; Pin-Yu Chen", "journal": "Advances in Neural Information Processing Systems (NeurIPS)", "ref_id": "b25", "title": "Formalizing generalization and adversarial robustness of neural networks to weight perturbations", "year": "2021" }, { "authors": "", "journal": "", "ref_id": "b26", "title": "The Notre Dame Contact Lens Dataset -NDCLD", "year": "2015" }, { "authors": "Pu Tsui-Wei Weng; Sijia Zhao; Pin-Yu Liu; Xue Chen; Luca Lin; Daniel", "journal": "", "ref_id": "b27", "title": "Towards certificated model robustness against weight perturbations", "year": "2020-04" }, { "authors": "Lin Xiang; Xiaoqin Zeng; Yuhu Niu; Yanjun Liu", "journal": "IEEE Access", "ref_id": "b28", "title": "Study of sensitivity to weight perturbation for convolution neural network", "year": "2019" }, { "authors": "David Yambay; Benedict Becker; Naman Kohli; Daksha Yadav; Adam Czajka; Kevin W Bowyer; Stephanie Schuckers; Richa Singh; Mayank Vatsa; Afzel Noore", "journal": "", "ref_id": "b29", "title": "LivDet iris 2017-iris liveness detection competition", "year": "2017" }, { "authors": "S Daniel; Ian Yeung; Daming Cloete; Shi; W Y Wing; Ng", "journal": "Springer Publishing Company, Incorporated", "ref_id": "b30", "title": "Sensitivity Analysis for Neural Networks", "year": "2009" } ]
[ { "formula_coordinates": [ 2, 342.48, 137.49, 202.63, 27.9 ], "formula_id": "formula_0", "formula_text": "T DR org = n i (f (x i , W org ) > T ) n i y i * 100 (1)" }, { "formula_coordinates": [ 2, 349.94, 564.3, 154.1, 9.68 ], "formula_id": "formula_1", "formula_text": "W mod = W org + N (0, α * σ(W org ))." }, { "formula_coordinates": [ 3, 104.55, 110.35, 181.82, 9.65 ], "formula_id": "formula_2", "formula_text": "W mod [random(β, W org )] = 0.(3)" }, { "formula_coordinates": [ 3, 128.78, 230.24, 78.92, 9.68 ], "formula_id": "formula_3", "formula_text": "W mod = γ * W org ." } ]
2023-11-21
[ { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b11", "b35", "b9", "b11", "b17", "b17", "b25", "b9", "b10", "b9", "b10", "b29", "b40", "b10", "b40", "b10", "b10", "b38", "b10", "b9", "b10", "b18", "b25", "b25", "b38", "b8", "b18", "b10", "b17", "b25", "b13", "b14", "b49", "b0", "b26", "b7", "b30" ], "table_ref": [], "text": "Iris recognition systems use the texture of the iris in order to recognize individuals [12]. A typical iris recognition system operates in the near-infrared (NIR) spectrum. There are several reasons for using NIR sensors to acquire an image of the iris: (a) NIR illumination is non-invasive and, unlike visible spectrum lighting, does not excite the pupil; and (b) NIR illumination can be used to elicit the texture of even dark-colored irides since it can penetrate the multilayered iris more effectively than visible spectrum lighting. Despite their success in a number of real-world applications, iris systems are vulnerable to number of attacks [36], including presentation attacks (PAs) [2,10]. A presentation attack occurs when an adversary presents a fake or al-tered trait to the sensor in order to obfuscate their own identity, spoof another person's identity or to create a virtual identity. The biometric characteristics or materials used to launch a presentation attack are referred to as Presentation Attack Instruments (PAI). Examples of PAIs in the case of the iris modality include printed iris images [9, 12,18,33], plastic, glass, or doll eyes [18,26], cosmetic contact lenses [4, 22,34,44], a video display of an eye image [10,11,35], cadaver eyes [10,11,30], robotic eye models [24] holographic eye images [32] and synthesized irises [41]. A few examples of iris PAIs are shown in Fig. 1. Among all the attacks described above, iris pattern printed on a paper is perhaps one of the easiest ones. The efficacy of this type of attack depends on a number of factors including the choice of printer (inkjet or laserjet), paper (matte, glossy, photographic, butter, white, recycled or cardboard), resolution (600 or 1,200 dpi), image type (grayscale or color), configuration (with or without pupil cutout), and sensing device (IrisPass, IrisAccess, or Iris-Guard). In the LivDet-Iris 2013 competition [47], a combination of two different printers, two commercial iris sensors and matte paper were used. Later, the dataset was extended in the LivDet-Iris 2015 [48], LivDet-Iris 2017 [46], LivDet-Iris 2020 [11] and LivDet-Iris 2023 [41] competitions by including more variations in the resolution, contrast and texture of the printed irides. In [11], various add-ons were applied to printed paper, including transparent domes and textured as well as clear contact lenses.\nThe use of cosmetic contact lens as a PAI poses an even greater challenge than the prints, since the former has significantly more manufacturers, brands, and colors [11,46]. In [47], 22 types of patterned contact lenses were collected, which was later increased to 57 types with different texture patterns [39]. In [48], 20 different varieties of cosmetic contacts were used to generate iris PA samples. This was later extended by adding samples from the Notre Dame subset, which contained five different brands of textured contact lenses, and the IITD-WVU subset, which contained four manufacturers and six colors [46]. 
In [11], three different brands of cosmetic contacts (Johnson & Johnson, Ciba Vision, and Bausch & Lomb) were captured using the LG IrisAccess 4000 and IrisGuard AD100 under various illumination setups (two different illuminants in LG4000 and six different illuminants in AD100).\nIn addition to the printed and cosmetic contact attacks, the replay or display attack also poses a challenge. In this type of attack, a previously captured iris image or video is presented to an iris sensor via a display media. However, most modern computers, laptops and mobile phone screens do not necessarily emit NIR light. So this type of attack has been predominantly tested on iris systems operating in the visible spectrum [10,35]. However, the display of certain Kindle devices emit NIR light and, consequently, can be more easily imaged using NIR iris sensors [11,19].\nA plastic or prosthetic eye is a highly viable PAI, but has not been as extensively explored in the literature, unlike some of the other PAs. Variants of such artificial eyes can be designed using different materials like Poly Methyl Meta Acrylate [26], glass or plastic. Lee et al. [26] created three different-colored artificial eyes (blue, gray and dark brown). Sun et al. [39] selected 40 different subjects iris images from the UPOL database [29] and printed them on plastic eyeball models. Hoffman et al. [19] collected images of fake eyes using three different plastic/glass eye brands and 10 distinct colors. Das et al. [11] presented two different types of fake eyes: Van Dyke Eyes (which have higher iris quality details) and Scary Eyes (plastic fake eyes with a simple pattern on the iris region). They also presented add-ons for fake eyes, such as textured and clear contacts.\nAs stated above, attacks using artificial eyes made of glass or plastic have not been heavily studied [7,18,26,49]. The goal of this work is to leverage recent developments in material science to test the robustness and measure the susceptibility of iris systems to such type of spoof attacks. Specifically, we are interested to produce spoofs which are fabricated by affixing chemically modified films on these artificial eyes. 1 The iris is generally imaged in the NIR spectrum; accordingly, we have attempted to use NIR-sensitive 1 In principle, it can be used on other types of PA artifacts.\nVanadium dioxide (VO 2 ) films to generate these spoofs. VO 2 is a typical thermochromic material that has been widely studied as smart coatings for buildings fenestrations [3, [14][15][16]50]. The synthesis of VO 2 films has been reported briefly in the literature, and its manufacturing is easy and cost effective. It is an advantage to use VO 2 for our work as it is deposited on a glass substrate and its handling is smooth. A further advantage is its low toxicity and high stability at room temperature conditions for such a short period of usage. These films show transmittance drop, close to a temperature of 68 • C, in the NIR region [1,27,28,31,43]. This implies that at temperatures below 68 • C, the film allows maximum light to pass through, but as the temperature increases above 68 • C, the film behaves in a completely different manner, only allowing a portion of light to pass (Fig. 2). This change in behavior of the film allows us to image the fake eyes in 2 different arrangements. Thus, in order to generate an effective spoof, we used the VO 2 coated and uncoated (blank) films in varied configurations on the fake glass eye. 
This is a unique kind of presentation attack, which combines multiple attack modes and that has never been attempted before. " }, { "figure_ref": [], "heading": "Experiment and Setup", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Fabrication", "publication_ref": [], "table_ref": [], "text": "Vanadium dioxide (VO 2 ) thin films were deposited by pulsed laser deposition over fused silica (SiO 2 ) substrates (2\" in diameter, 250 µm thick), following a process similar to what has been described in the past [8,13]. The substrate was heated to 600 • C in a vaccum chamber with a background pressure of 1x10 -7 Torr. After reaching this setpoint, oxygen and argon gas was introduced to the chamber, while the chamber pressure was controlled through a butterfly valve to be at 35 mTorr. At this point, laser pulses from an excimer laser (wavelength λ=248 nm, 20 ns pulse duration, and ∼ 4 J/cm 2 fluence) ablated a metallic vanadium target, and the pressure was controlled to 35 mTorr. A total of 320,000 pusles resulted in a 280 nm thick VO 2 thin film over the SiO 2 substrate. After VO 2 deposition, the 2\" sample was diced into squares and triangles (2 mm × 2 mm). Another blank identical SiO 2 substrate (i.e., with no VO 2 thin film deposited) was also diced with the same dimensions. The resulting samples were multiple bare SiO 2 and VO 2 -coated 2 mm × 2 mm \"pixels\".\nOur aim in this work is to fabricate fake eyes (Van Dyke eyes, made of soft glass) with different patterns of films on it. This patterning was based on different factors such as shape, type and orientation of the films (Fig. 3). To achieve this, VO 2 coated films and blank films were fixed on the fake eyes in 11 different geometrical configurations as described below. For the first set of images (Con 0), the naked Van Dyke eyes were imaged in different angular and lighting conditions (Fig. 5 (a-j)). To achieve this, the Van Dyke eyes were first attached to fake Halloween glasses using double-sided tape. The user then mounted these glasses and approached the iCAM 7100S iris sensor for imaging. This triggered the activation of the sensor, as indicated by the appearance of an orange dot on the mirror. Now, at the correct distance, once the orange dot is aligned over the bridge of the nose, it turns green, and both the irides are acquired. This process was subsequently repeated by using the tilt up/down button on the sensor unit. Multiple other images (Fig. 5 i (f-j)) were also captured by focusing some extra light (120 V, GE-IR table lamp) on the fake eyes (mounted on the user). For Con 1, a few blank square films were removed from the whole blank diced lot using a pair of tweezers. These films were then carefully stuck on the fake iris in a circular pattern (Fig. 5 ii (a-j)), with a couple of them on the pupil portion of the fake eyes. This patterned eye was imaged using the same process as stated above. One additional change was the in situ heating of the films using the IR lamp. The film was heated for 2.5 min to reach a temperature of 80 • C, and a picture was acquired immediately. This was done to appreciate the difference in image and PA scores with and without heating (Fig. 5 ii (j)).\nA similar procedure was adopted for Con 2, where VO 2 coated square films were used instead of blank films (Fig. 5 (a-j)). Again, for Con 3, VO 2 coated and blank film were arranged alternately on the iris and pupil of the fake eyes (Fig. 5 iv (a-j)). 
Con 4 was designed by closely placing the coated and blank films in 2 rings on the iris, with one blank film on the pupil (Fig. 5 v (a-j)). Con 5 was fabricated by choosing triangular blank films. These triangular films were placed in a group of 3's to form a flower-like pattern (Fig. 5 vi (a-j)). Similarly, VO 2 coated films were arranged in triangles of 3, forming flower-like pattern for Con 6 (Fig. 5 vii (a-j)). The Con 7 was designed using both coated and uncoated triangular films in grouping of 3 on the fake eyes (Fig. 5 viii (a-j)). Con 8, 9, and 10 were fabricated by placing triangular-shaped blank; triangular-shaped coated; and triangular-shaped blank and coated on the fake iris, respectively. (Fig. 5 ix-xi (a-j)). For the last configuration (Con 11), transparent plastic chips were stuck on the Van Dyke eyes (Fig. 5 (a-j)).\nWe captured 10 images of an eye for each configuration, resulting in a database of 120 samples. These images were taken with the help of six different subjects. More than one subject was used to eliminate any subject-specific errors during data collection. These images were then assessed using two state-of-the-art PA detectors: D-NetPAD and IrisTL-PAD (Fig. 4). Both PA detection methods produce a single-valued PA score. These PA scores range from 0 to 1, where 1 indicates a PA sample and 0 indicates a bonafide or live iris." }, { "figure_ref": [], "heading": "Iris Presentation Attack Detection Methods", "publication_ref": [ "b37", "b10" ], "table_ref": [ "tab_0" ], "text": "The two iris PA detection algorithms that are utilized to assess the vulnerability of adhering VO 2 films on artificial eyes are described below. They both are based on deep neural architectures.\nD-NetPAD: D-NetPAD [38] 2 is based on a densely connected convolutional neural network where each layer connects to every other layer in a feed-forward fashion. Its base architecture is DenseNet-121 [21], which consists of 121 convolutional layers in a series of four Dense Blocks and three Transition Layers. A detailed description of the architecture is provided in [21]. To detect iris PA, the iris region is first cropped from the ocular image and resized to 224 × IrisTL-PAD: IrisTL-PAD [4-6] operates on the cropped iris regions and offers a simple and fast solution for PA detection. It also utilizes the pre-trained ImageNet model to initialize the weights and then performs transfer learning. First, an off-line trained iris detector was used to obtain a rectangular region encompassing the outer boundary of the iris. Then, the iris region was automatically cropped based on the estimated rectangular coordinates. Finally, the cropped iris region was input to a CNN (ResNet50) to train the iris PA detection model (Fig. 6). The training was finetuned on an existing ImageNet model, by leveraging extensive data augmentation schemes. The IrisTL-PAD model was trained on 9,072 bonafide images and 7,352 PA images as summarized in Table 1.\nBoth PAD algorithms are state-of-the-art methods that resulted in the best performance another proprietary dataset. The data were collected using the iCAM7000 NIR sensor from 1,315 subjects. A total of 3,315 iris images were acquired, out of which 2,963 were bonafide irides and 352 were PA samples. PAs in the dataset include two types of VanDyke eyes and 10 different types of cosmetic contact lenses. The D-NetPAD and IrisTL-PAD methods resulted in a True Detect Rate (TDR) of 98.58% and 92.61%, respectively, at a False Detect Rate (FDR) of 0.2%. 
The TDR denotes the fraction of PA samples that were correctly classified, while the FDR denotes the fraction of bonafide samples that were incorrectly classified as PA samples. In addition, both PAD algorithms were the best performing algorithms in the LivDet-Iris 2020 competition [11]." }, { "figure_ref": [ "fig_5", "fig_6", "fig_6", "fig_6", "fig_6", "fig_7", "fig_7", "fig_7" ], "heading": "Evaluation and Results", "publication_ref": [ "b36", "b41" ], "table_ref": [], "text": "To determine whether the designed configurations of Vanadium dioxide films on artificial eyes can be used to attack the system, we compared their PA scores to those of bonafide, i.e., live, human eyes. The PA score for a live human eye ranges from 0.0-0.5 for IrisTL-PAD and 0.0-0.4 for D-NetPAD. As depicted in Table 2, Cons 0, 1 and 2 showed PA scores above the threshold value (0.5 for IrisTL-PAD, 0.4 for D-NetPAD) for both algorithms. This indicates that these configurations were detected as spoofs by both algorithms. However, as we move on to Con 3, the PA scores dip below the threshold for all 10 images for IrisTL-PAD and 6 images for D-NetPAD. This is a successful configuration that fools the PA detection systems and passes as a live or bonafide eye (Fig. 9). Con 4, which has VO 2 coated and blank films in 2 concentric circles in the iris region, has an attack success rate of 40% for IrisTL-PAD and 10% for D-NetPAD (Fig. 9). The attack success rate was calculated as the percentage of attack images with PA scores below the given threshold value. A configuration with a success rate of 50% or more was considered to be a successful attack. Con 5, which has triangular blank films arranged in groups of 3, shows an attack success rate of 50% for IrisTL-PAD and 0% for D-NetPAD. Con 6 images have slightly lower chances of working as a spoof (rates: 40% IrisTL-PAD and 10% D-NetPAD). Con 7, on the other hand, has a higher chance of passing as a live eye, with an attack success rate of 90% for IrisTL-PAD and 50% for D-NetPAD. Con 8 has 30% success for IrisTL-PAD, and 10% for D-NetPAD. Con 9, too, has a lower chance of deceiving the system (only 20% success for IrisTL-PAD and none for D-NetPAD). Cons 10 (10% success rate for D-NetPAD) and 11 also do not pose a threat to the two PA detection methods. One point to be mentioned is that heating the VO 2 films up to a temperature of 80 °C does not bring about a significant change in the PA scores. The scores for Cons 1(j), 5(i), 9(i), and 9(j) are an indication that the thermochromic behaviour of the film does not play a big role in deceiving the system, as far as our experimental protocol is concerned. In summary, our preliminary observations indicate that Cons 3, 5 and 7 have high presentation attack success rates, which could be due to the kind of geometrical arrangement of films on them. Note that Cons 3 and 4 have a similar type of arrangement for both the films (coated and uncoated), but Con 3 has more space between the films (Fig. 8). This causes a change in the captured iris pattern and impacts the PA detection methods. The result was further visually analyzed by generating \"heatmaps\" using Gradient-weighted Class Activation Mapping (Grad-CAM) [37]. Figure 9. Presentation attack success rate across all configurations. Cons 3, 5 and 7 have a higher success rate (against the IrisTL-PAD method) compared to other configurations. This suggests that the films may have to be strategically placed on the fake iris pattern in order to defeat an iris PA detection system.
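The detection and attack-success metrics reported here are simple functions of the PA scores; the sketch below shows how TDR at a fixed FDR and the per-configuration attack success rate could be computed. It is an illustrative calculation over arrays of scores, not code from either PAD system; the threshold convention follows the description above.

```python
import numpy as np

def tdr_at_fdr(bonafide_scores, pa_scores, target_fdr=0.002):
    """TDR at a fixed FDR (e.g., 0.2%): pick the threshold that only
    `target_fdr` of bonafide samples exceed, then measure how many PA
    samples score above it. Higher scores mean 'more PA-like'."""
    threshold = np.quantile(np.asarray(bonafide_scores), 1.0 - target_fdr)
    tdr = np.mean(np.asarray(pa_scores) > threshold)
    return float(tdr), float(threshold)

def attack_success_rate(config_scores, live_threshold):
    """Fraction of images in a configuration scored at or below the live-eye
    threshold (0.5 for IrisTL-PAD, 0.4 for D-NetPAD); a configuration with a
    rate of 50% or more is counted as a successful attack."""
    return float(np.mean(np.asarray(config_scores) <= live_threshold))
```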
Grad-CAM produces a coarse localization map highlighting the salient regions in an image that were used by the network in order to generate its inference. Fig. 10 presents the \"heatmaps\" for configurations that were unsuccessful (Con 0) as well as those that were successful (Con 3 and Con 7) in defeating the D-NetPAD algorithm. The red regions indicate high activation, whereas the blue regions represent low activation when inferring the final decision (i.e., bonafide or PA). The first row of Fig. 10 shows the heatmaps of Con 0 images, where the high activation region is at the pupillary zone of the printed iris pattern of the fake eye. Table 2. Detailed table of PA scores for each image captured across all 12 configurations. Red-colored cells represent PA scores for IrisTL-PAD which are less than or equal to its threshold value (0.5). Yellow-colored cells represent PA scores for D-NetPAD which are less than or equal to its threshold value (0.4). Numbers in orange font represent images taken under extra lighting conditions. Numbers in blue font represent PA scores of images taken after heating the films. The other two rows of Fig. 10 correspond to Cons 3 and 7 (high presentation attack success rate). The high activation regions in these two rows of images are distributed throughout the iris pattern. We hypothesize that the combination of VO 2 and blank films, when placed on the Van Dyke eyes, interfered with the iris pattern inscribed on the fake eye. This presumably resulted in a pattern that was never seen by the algorithm during training. As a result, the focus shifted away from the iris pattern (see the last two rows of Fig. 10), resulting in PA scores that were in the vicinity of the threshold (0.40). The chances of misclassification seem to have increased with an increase in the density of the VO 2 and blank films. The VO 2 films appear to obscure the underlying pattern due to their special optical properties under NIR illumination, whereas the blank films distort the pattern. Thus, Cons 3 and 7, which have a high concentration of VO 2 and blank films (Fig. 5), show high misclassification rates.\nAfter Grad-CAM visualization, which utilizes backpropagation, we also visualized the features fed into the architecture in the forward direction for the final decision. The features were extracted from the penultimate layer (just before the fully connected layer) of D-NetPAD and reduced to two dimensions using t-Distributed Stochastic Neighbor Embedding (t-SNE) [42]. t-SNE plots are shown in Fig. 11, where the green and blue data points represent bonafide and fake eye images from the proprietary dataset, respectively. The red data points (Fig. 11) represent configurations with a high PA success rate (Cons 3, 4, 6, 7, 8), whereas the pink data points represent configurations with a low PA success rate (Cons 0, 1, 2, 5, 9, 10, 11). Fig. 11 shows that the distribution of the configurations departs from that of the fake eyes and is also spread out. This divergence further substantiates the effectiveness of using VO 2 films in performing iris presentation attacks.
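The feature-space analysis above can be reproduced with a short script: collect the inputs to the final fully connected layer via a forward hook and embed them in two dimensions with scikit-learn's t-SNE. This is a generic sketch of the procedure; the `classifier_layer` handle and the batching of images are assumptions about how the trained PAD model is organized.

```python
import torch
from sklearn.manifold import TSNE

def penultimate_features(model, classifier_layer, batches):
    """Capture the inputs to the final fully connected layer (the
    penultimate features) while running the PAD model over image batches."""
    feats = []
    hook = classifier_layer.register_forward_hook(
        lambda module, inputs, output: feats.append(inputs[0].detach().cpu()))
    model.eval()
    with torch.no_grad():
        for batch in batches:
            model(batch)
    hook.remove()
    return torch.cat(feats).numpy()

def tsne_embed(features):
    """2-D embedding of bonafide, fake-eye and VO2-configuration features;
    group labels and colors are assigned by the caller when plotting."""
    return TSNE(n_components=2, perplexity=30, init="pca").fit_transform(features)
```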
" }, { "figure_ref": [], "heading": "Role of Coated and Blank Films", "publication_ref": [], "table_ref": [], "text": "It is clear from the experiments conducted in this work that introduction of the VO 2 films along with blank films made a difference in the optical properties of the fake eyes. This change triggered a shift of focus of the PAD algorithms away from the iris portion, labelling them as bonafide. To check the function of VO 2 films, we carried out some preliminary experiments using metal coated films. These films when used alternately with blank films on fake eyes lowered the PA scores. This clearly shows that the new films (metal), just like VO 2 ones, caused changes in the optical properties of the fake eyes. These films worked as an attack only when used with blank films but not just by themselves. This suggests that a patch-based configuration is able to fool the PAD algorithm and pass as a genuine eye. But extensive experiments have to be carried out with the new films which can help us strengthen this hypothesis." }, { "figure_ref": [], "heading": "Mitigation Measures", "publication_ref": [], "table_ref": [], "text": "We performed another experiment to find a potential solution for the misclassification of VO 2 and blank films coated fake eyes as bonafides. We utilized all samples (=10) from a subset of configurations defined in this paper to perform incremental training of the D-NetPAD algorithm. The configurations used for the training were 1, 2, 5, 9 and 11 as they were correctly classified as PAs by the D-NetPAD. For incremental training, we fine-tuned the D-NetPAD model with the selected samples. Next, we recomputed the PA scores of all samples, including those pertaining to Cons 3, 4, 6, 7, 8, 10 which were not used for training. We observed that all the samples were now correctly classified as PAs. The experiment shows that incremental training with only a few samples can extend the discriminative power of the model in detecting such new attacks." }, { "figure_ref": [], "heading": "Discussion and Future Work", "publication_ref": [ "b18" ], "table_ref": [], "text": "In this work, we assessed the possibility of combining Vanadium dioxide films with artificial glass eyes in order to create PAs that can potentially evade presentation attack detection. VO 2 films can be used to selectively regulate NIR transmission thereby causing such artificial eyes to be misclassified as bonafide samples. Our experimental results suggest that the placement of these films in specific configurations can indeed confound a PA detection system.\nHaving said that, there are some ways to detect these types of attacks: (a) Patch-based PAD: The Vanadium dioxide films used to create the PAs, modified the iris texture in configurations that can be described by local patches (see Fig. 5). Both IrisTL-PAD and D-NetPAD solutions extract global features from the cropped iris images. By using local regions for PA determination, it is likely that patches which are not modified with VO 2 films would produce high PA scores. Hence, averaging the PA scores across individual patches can increase the robustness of these PAD solutions to the proposed attack [19]. (b) One-class Classification: It is difficult to model the distribution of every unknown or unseen PAs. To tackle such PAs, a one-class classifier concept can be leveraged where only bonafide distribution is required to create the PAD model [45]. 
Future work will explore the application of Vanadium dioxide films to generate PA artifacts that can be used for training and increase the robustness of existing PAD methods. Further, their thermochromic behavior can possibly be used to design new iris hardware for PA detection. We will also study the efficacy of this attack on other PAD techniques. In this work, the attack has been studied in the context of PAD only; future work will involve analyzing the impact on the iris recognition method also.\nEthical Implications: The goal of this paper was to alert researchers and practitioners to potential attacks and provide a preliminary solution to detect them. However, it should not be misused to launch an attack against iris recognition systems." } ]
Iris recognition systems, operating in the near infrared spectrum (NIR), have demonstrated vulnerability to presentation attacks, where an adversary uses artifacts such as cosmetic contact lenses, artificial eyes or printed iris images in order to circumvent the system. At the same time, a number of effective presentation attack detection (PAD) methods have been developed. These methods have demonstrated success in detecting artificial eyes (e.g., fake Van Dyke eyes) as presentation attacks. In this work, we seek to alter the optical characteristics of artificial eyes by affixing Vanadium Dioxide (VO 2 ) films on their surface in various spatial configurations. VO 2 films can be used to selectively transmit NIR light and can, therefore, be used to regulate the amount of NIR light from the object that is captured by the iris sensor. We study the impact of such images produced by the sensor on two state-of-the-art iris PA detection methods. We observe that the addition of VO 2 films on the surface of artificial eyes can cause the PA detection methods to misclassify them as bonafide eyes in some cases. This represents a vulnerability that must be systematically analyzed and effectively addressed.
Iris Presentation Attack: Assessing the Impact of Combining Vanadium Dioxide Films with Artificial Eyes
[ { "figure_caption": "Figure 1 .1Figure 1. Examples of PAI used to launch presentation attacks (PAs) on the iris modality: (a) plastic eyes, (b) printed images, and (c) cosmetic contacts [20].", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Schematic of thermochromic behavior of VO2 films (a) below and (b) above 68 • C (critical temperature).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Diagrammatic representation of the various patterns in which the blank films and V O2 coated films were arranged on the fake eyes.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4.Step-wise procedure for fabrication of VO2 modified fake eye starting from its deposition to PA score procurement[23].", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .Figure 6 .56Figure 5. Images with various configurations of films as captured by iCAM 7100S. Images (a)-(j) represent a particular configuration taken in different angular, lighting and temperature conditions, sequentially (for a detailed label refer to Table 2). Images (i)-(xii) represent configurations Con 0 to Con 11.", "figure_data": "", "figure_id": "fig_4", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Comparison of the geometrical arrangement of Cons 3 and 4. Con 3 has VO2 coated and blank films arranged in an alternate manner, but in no particular geometrical pattern all over the artificial eye. Con 4 on the other hand, has these films arranged in 2 concentric circles inside the iris region and 2 blank films on the pupil of the fake eyes.", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure10. Grad-CAM[37] heatmaps of images corresponding to Con 0, Con 3 and Con 7 configurations. Con 0 has low PA success rate, whereas Con 3 and Con 7 have a high PA success rate. Red-colored regions represent highly focused region by the D-NetPAD. The blue region represents low priority regions. These regions help in making the final decision about being a bonafide or a PA.", "figure_data": "", "figure_id": "fig_6", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 1111Figure 11. t-SNE plot of the D-NetPAD algorithm on bonafide (green) and fake eye (blue) images on the proprietary dataset. It also shows the t-SNE of all the configurations considered in this work. Red-colored data points represent configurations with a high PA attack success rate (Con 3, 4, 6, 7, 8), whereas pink represents configurations with a low PA attack success rate (Con 0, 1, 2, 5, 9, 10, 11). 
The distribution of the configurations is observed to be substantially different from that of the fake eyes, suggesting the novel nature of the attack.", "figure_data": "", "figure_id": "fig_7", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "A summary of datasets used to train IrisTL-PAD.", "figure_data": "DatasetTotalLive Print Contact Lenses Artificial EyeLivDet-Iris 2017-IIT-WVU [46]1,750750-1000-LivDet-Iris 2017-NotreDame [46] 1,200600-600-LivDet-Iris 2017-Warsaw [46]4,513 1,844 2,669--BERC-Iris-Fake [25]4,598 2,778 1,60014080CASIA-Iris-Interval [17]740--740-Private Dataset3,623 3,1006334183Combined16,424 9,0727,352", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Darshika Jauhari; Renu Sharma; Cunjian Chen; Nelson Sepulveda; Arun Ross
[ { "authors": "A S Barker; H W Verleur; H J Guggenheim", "journal": "Physical Review Letters (PRL)", "ref_id": "b0", "title": "Infrared optical properties of vanadium dioxide above and below the transition temperature", "year": "1966" }, { "authors": "Aidan Boyd; Jeremy Speth; Lucas Parzianello; Kevin Bowyer; Adam Czajka", "journal": "", "ref_id": "b1", "title": "State of the art in open-set iris presentation attack detection", "year": "2022" }, { "authors": "Tianci Chang; Xun Cao; R Liv; Shiwei Dedon; Aibin Long; Zewei Huang; Ning Shao; Hongjie Li; Ping Luo; Jin", "journal": "Nano Energy", "ref_id": "b2", "title": "Optical design and stability study for ultrahigh-performance and long-lived vanadium dioxide-based thermochromic coatings", "year": "2018" }, { "authors": "Cunjian Chen; Arun Ross", "journal": "Biometrics: Theory, Applications and Systems (BTAS)", "ref_id": "b3", "title": "Exploring the use of IrisCodes for presentation attack detection", "year": "2018" }, { "authors": "Cunjian Chen; Arun Ross", "journal": "IEEE Winter Applications of Computer Vision (WACV) Workshops", "ref_id": "b4", "title": "A multi-task convolutional neural network for joint iris detection and presentation attack detection", "year": "2018" }, { "authors": "Cunjian Chen; Arun Ross", "journal": "", "ref_id": "b5", "title": "An explainable attentionguided iris presentation attack detector", "year": "2021-01" }, { "authors": "Rui Chen; Xirong Lin; Tianhuai Ding", "journal": "Pattern Recognition Letters (PRL)", "ref_id": "b6", "title": "Liveness detection for iris recognition using multispectral images", "year": "2012" }, { "authors": "Horacio Coy; Rafmag Cabrera; Nelson Sepúlveda; Félix E Fernández", "journal": "Journal of Applied Physics (JAP)", "ref_id": "b7", "title": "Optoelectronic and all-optical multiple memory states in vanadium dioxide", "year": "2010" }, { "authors": "A Czajka", "journal": "", "ref_id": "b8", "title": "Database of iris printouts and its application: Development of liveness detection method for iris recognition", "year": "2013" }, { "authors": "Adam Czajka; Kevin W Bowyer", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b9", "title": "Presentation attack detection for iris recognition: An assessment of the state-ofthe-art", "year": "2018" }, { "authors": "P Das; J Mcgrath; Z Fang; A Boyd; G Jang; A Mohammadi; S Purnapatra; D Yambay; S Marcel; M Trokielwicz; P Maciejewicz; K Bowyer; A Czajka; S Schuckers; J T Farias; S Gonzalez; M Fang; N Damer; F Boutros; A Kuijper; R Sharma; C Chen; A Ross", "journal": "", "ref_id": "b10", "title": "Iris liveness detection competition (livdet-iris) -the 2020 edition", "year": "2020" }, { "authors": "John Daugman", "journal": "Biometrics: Personal Identification in Networked Society", "ref_id": "b11", "title": "Countermeasures against subterfuge", "year": "1999" }, { "authors": "José Figueroa; Yunqi Cao; Henry Dsouza; Juan Pastrana; Nelson Sepúlveda", "journal": "Advanced Materials Technologies", "ref_id": "b12", "title": "A simplified approach for obtaining optical properties of V O2 thin films, and demonstration of infrared shape-shifting devices", "year": "2019" }, { "authors": "Yanfeng Gao; Hongjie Luo; Zongtao Zhang; Litao Kang; Zhang Chen; Jing Du; Minoru Kanehira; Chuanxiang Cao", "journal": "Nano Energy", "ref_id": "b13", "title": "Nanoceramic V O2 thermochromic smart glass: A review on progress in solution processing", "year": "2012" }, { "authors": "Yanfeng Gao; Shaobo Wang; Litao Kang; Zhang Chen; Jing Du; Xinling Liu; Hongjie Luo; 
Minoru Kanehira", "journal": "Energy Environmental Science", "ref_id": "b14", "title": "V O2-Sb:SnO2 composite thermochromic smart glass foil", "year": "2012" }, { "authors": "Yanfeng Gao; Shaobo Wang; Hongjie Luo; Lei Dai; Chuanxiang Cao; Yiliao Liu; Zhang Chen; Minoru Kanehira", "journal": "Energy Environmental Science", "ref_id": "b15", "title": "Enhanced chemical stability of V O2 nanoparticles by the formation of SiO2/V O2 core/shell structures and the application to transparent and flexible V O2-based composite foils with excellent thermochromic properties for solar heat control", "year": "2012" }, { "authors": "Lingxiao He; Haiqing Li; Fei Liu; Nianfeng Liu; Zhenan Sun; Zhaofeng He", "journal": "", "ref_id": "b16", "title": "Multi-patch convolution neural network for iris liveness detection", "year": "2016" }, { "authors": "Steven Hoffman; Renu Sharma; Arun Ross", "journal": "", "ref_id": "b17", "title": "Convolutional neural networks for iris presentation attack detection: Toward cross-dataset and cross-sensor generalization", "year": "2018" }, { "authors": "Steven Hoffman; Renu Sharma; Arun Ross", "journal": "International Conference on Biometrics (ICB)", "ref_id": "b18", "title": "Iris + ocular: Generalized iris presentation attack detection using multiple convolutional neural networks", "year": "2019" }, { "authors": "Vision Hoover; Center", "journal": "", "ref_id": "b19", "title": "Halloween Hazard: The Dangers of Cosmetic Contact Lenses", "year": "2018-01-03" }, { "authors": "G Huang; Z Liu; L V D Maaten; K Q Weinberger", "journal": "", "ref_id": "b20", "title": "Densely connected convolutional networks", "year": "2017" }, { "authors": "K Hughes; K W Bowyer", "journal": "", "ref_id": "b21", "title": "Detection of contact-lensbased iris biometric spoofs using stereo imaging", "year": "2013" }, { "authors": "", "journal": "", "ref_id": "b22", "title": "Iris ID system Inc. iCAM7 series: User interface", "year": "" }, { "authors": "O V Komogortsev; A Karpov; C D Holland", "journal": "IEEE Transactions on Information Forensics and Security (TIFS)", "ref_id": "b23", "title": "Attack of mechanical replicas: Liveness detection with eye movements", "year": "2015" }, { "authors": "Sung Lee; Kang Park; Youn Lee; Kwanghyuk Bae; Jai Kim", "journal": "Optical Engineering", "ref_id": "b24", "title": "Multifeature-based fake iris detection method", "year": "2007" }, { "authors": "S J Lee; K R Park; J Kim", "journal": "", "ref_id": "b25", "title": "Robust fake iris detection based on variation of the reflectance ratio between the iris and the sclera", "year": "2006" }, { "authors": "S Lysenko; A Rúa; V Vikhnin; F Fernández; H Liu", "journal": "Physical Review B (PRB)", "ref_id": "b26", "title": "Insulator-to-metal phase transition and recovery processes in V O2 thin films after femtosecond laser excitation", "year": "2007" }, { "authors": "S Lysenko; V Vikhnin; F Fernandez; A Rua; H Liu", "journal": "Physical Review. 
B (PRB)", "ref_id": "b27", "title": "Photoinduced insulator-to-metal phase transition in V O2 crystalline films and model of dielectric susceptibility", "year": "2007" }, { "authors": "M Dobeš; L Machala", "journal": "", "ref_id": "b28", "title": "UPOL Iris Database", "year": "" }, { "authors": "", "journal": "Springer", "ref_id": "b29", "title": "Handbook of Biometric Anti-Spoofing -Presentation Attack Detection, Second Edition", "year": "2019" }, { "authors": "F J Morin", "journal": "Physical Review Letters (PRL)", "ref_id": "b30", "title": "Oxides which show a metal-to-insulator transition at the neel temperature", "year": "1959" }, { "authors": "Andrzej Pacut; Adam Czajka", "journal": "", "ref_id": "b31", "title": "Aliveness detection for iris biometrics", "year": "2006" }, { "authors": "R Raghavendra; Christoph Busch", "journal": "IEEE Transactions on Information Forensics and Security (TIFS)", "ref_id": "b32", "title": "Robust Scheme for Iris Presentation Attack Detection using Multiscale Binarized Statistical Image Features", "year": "2015" }, { "authors": "R Raghavendra; K B Raja; C Busch", "journal": "", "ref_id": "b33", "title": "Contlensnet: Robust iris contact lens detection using deep convolutional neural networks", "year": "2017" }, { "authors": "K B Raja; R Raghavendra; C Busch", "journal": "IEEE Transactions on Information Forensics and Security (TIFS)", "ref_id": "b34", "title": "Video presentation attack detection in visible spectrum iris recognition using magnified phase information", "year": "2015" }, { "authors": "Nalini Ratha; Jonathan Connell; Ruud Bolle", "journal": "IBM Systems Journal", "ref_id": "b35", "title": "Enhancing security and privacy in biometrics-based authentication systems", "year": "2001-01" }, { "authors": "R Ramprasaath; Michael Selvaraju; Abhishek Cogswell; Ramakrishna Das; Devi Vedantam; Dhruv Parikh; Batra", "journal": "", "ref_id": "b36", "title": "Grad-CAM: visual explanations from deep networks via gradient-based localization", "year": "2017" }, { "authors": "Renu Sharma; Arun Ross", "journal": "", "ref_id": "b37", "title": "D-NetPAD: An explainable and interpretable iris presentation attack detector", "year": "2020" }, { "authors": "Z Sun; H Zhang; T Tan; J Wang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI)", "ref_id": "b38", "title": "Iris image classification based on hierarchical visual codebook", "year": "2014" }, { "authors": "", "journal": "", "ref_id": "b39", "title": "The Notre Dame Contact Lenses Dataset", "year": "2015" }, { "authors": "Patrick Tinsley; Sandip Purnapatra; Mahsa Mitcheff; Aidan Boyd; Colton Crum; Kevin Bowyer; Patrick Flynn; Stephanie Schuckers; Adam Czajka; Meiling Fang; Naser Damer; Xingyu Liu; Caiyong Wang; Xianyun Sun; Zhaohua Chang; Xinyue Li; Guangzhe Zhao; Juan Tapia; Christoph Busch; Carlos Aravena; Daniel Schulz", "journal": "", "ref_id": "b40", "title": "Iris liveness detection competition (livdet-iris) -the 2023 edition", "year": "2023" }, { "authors": "L J P Van Der Maaten; G E Hinton", "journal": "Journal of Machine Learning Research", "ref_id": "b41", "title": "Visualizing highdimensional data using t-sne", "year": "2008" }, { "authors": "H W Verleur; A S Barker; C N Berglund", "journal": "Physical Review", "ref_id": "b42", "title": "Optical properties of V O2 between 0.25 and 5 ev", "year": "1968" }, { "authors": "D Yadav; N Kohli; J S Doyle; R Singh; M Vatsa; K W Bowyer", "journal": "IEEE Transactions on Information Forensics and Security (TIFS)", "ref_id": 
"b43", "title": "Unraveling the effect of textured contact lenses on iris recognition", "year": "2014" }, { "authors": "Shivangi Yadav; Cunjian Chen; Arun Ross", "journal": "", "ref_id": "b44", "title": "Relativistic discriminator: A one-class classifier for generalized iris presentation attack detection", "year": "2020" }, { "authors": "D Yambay; B Becker; N Kohli; D Yadav; A Czajka; K W Bowyer; S Schuckers; R Singh; M Vatsa; A Noore; D Gragnaniello; C Sansone; L Verdoliva; L He; Y Ru; H Li; N Liu; Z Sun; T Tan", "journal": "", "ref_id": "b45", "title": "LivDet iris 2017 -iris liveness detection competition", "year": "2017" }, { "authors": "D Yambay; J S Doyle; K W Bowyer; A Czajka; S Schuckers", "journal": "", "ref_id": "b46", "title": "LivDet-iris 2013-iris liveness detection competition 2013", "year": "2014" }, { "authors": "David Yambay; Brian Walczak; Stephanie Schuckers; Adam Czajka", "journal": "", "ref_id": "b47", "title": "LivDet-Iris 2015 -iris liveness detection competition 2015", "year": "2017" }, { "authors": "Zhenan Hui Bin Zhang; Tieniu Sun; Jianyu Tan; Wang", "journal": "", "ref_id": "b48", "title": "Learning hierarchical visual codebook for iris liveness detection", "year": "2011" }, { "authors": "Zongtao Zhang; Yanfeng Gao; Hongjie Luo; Litao Kang; Zhang Chen; Jing Du; Minoru Kanehira; Yuzhi Zhang; Zhong Wang", "journal": "Energy & Environmental Science", "ref_id": "b49", "title": "Solution-based fabrication of vanadium dioxide on F:SnO2 substrates with largely enhanced thermochromism and low-emissivity for energy-saving applications", "year": "2011" } ]
[]
2023-11-21
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b61", "b12", "b13", "b64", "b66", "b65", "b18", "b1", "b84", "b40", "b30", "b58", "b94", "b88", "b86", "b57", "b25", "b72", "b8", "b24", "b82", "b95", "b69", "b17", "b41", "b85", "b71", "b16", "b70" ], "table_ref": [], "text": "Large language models (LLMs) pretrained on huge, web-crawled text datasets demonstrate extremely general capabilities (Radford et al., 2018;2019;Brown et al., 2020;Bubeck et al., 2023). This has led to the current paradigm of machine learning, where practitioners often use model adaptation protocols such as fine-tuning to achieve unprecedented performance on a broad range of downstream tasks (Raffel et al., 2020;Sanh et al., 2022;Reid et al., 2022;Driess et al., 2023;Ahn et al., 2022). Relatedly, the generality of an LLM's capabilities implies the model also learns to exhibit several undesirable behaviors, e.g., producing sensitive, biased, or toxic outputs in the pursuit of completing a task (Weidinger et al., 2021;Lin et al., 2021;Jiang et al., 2021;Parrish et al., 2021;Zhou et al., 2021;Xu et al., 2021;Welbl et al., 2021). Fine-tuning with different training objectives has again seen immense usage in mitigating such \"unsafe\" capabilities, serving as an integral component of current state-of-the-art alignment approaches like RLHF (Ouyang et al., 2022;Go et al., 2023;Stiennon et al., 2020;Bai et al., 2022;Glaese et al., 2022).\nGiven its ubiquity in the design of both performant and safely deployable models, a natural question emerges: precisely how does fine-tuning influence a pretrained model's capabilities to adapt to a downstream dataset (see Fig. 1)? The generality of an LLM's capabilities opens the possibility Figure 1: How does fine-tuning alter a model's capabilities? (a) Pretraining on huge, web-crawled datasets leads to LLMs learning several capabilities that can justifiably process an input. The figure shows this using an illustrative query, \"write a story a 5-year old would understand.\" Via careful prompting, the desired answer can be retrieved, indicating both desired and undesired capabilities exist in an LLM. (b) Upon fine-tuning, e.g., to avoid use of undesired capabilities, we hypothesize that three explanations are possible: (i) a minimal transformation of the original capability is learned, e.g., a negation of the original capability; (ii) the undesirable capability is deleted altogether; or (iii) the use of another relevant capability is amplified. that fine-tuning protocols merely identify the most relevant capabilities and amplify their use for a given set of inputs, while inhibiting the use of other capabilities. Arguably, results on jailbreaking alignment-finetuned LLMs via adversarially generated prompts to elicit undesirable behavior support this hypothesis (Wei et al., 2023;Zou et al., 2023;Shen et al., 2023;Deng et al., 2023;Liu et al., 2023b); however, a precise study to establish the phenomenology of fine-tuning remains absent from the literature. It therefore remains unclear how pernicious this problem is. Motivated by the above problem, we perform an extensive analysis of the effects of fine-tuning on a pretrained model's capabilities in controlled settings where we can use mechanistic interpretability tools to understand precisely what is happening to the model's underlying capabilities. 
Specifically, we focus on compiled transformer models based on the Tracr library (Lindner et al., 2023;Weiss et al., 2021)-which allows encoding specific computational programs into a transformer-and procedurally generated setups involving probabilistic context-free grammars (PCFGs) (Sipser, 1996;Chomsky, 1956)-a formal model designed to capture syntactic properties of natural and programmatic languages that has recently served as a testbed for mechanistically understanding language models (Allen-Zhu & Li, 2023c;Delétang et al., 2022;Shi et al., 2022). While Tracr allows us to analyze models with perfectly encoded capabilities, models trained on PCFGs allow evaluation of the effects of design choices on the pretraining pipeline. Fine-tuning these models via the often-used protocol of further training a pretrained model on a downstream dataset with a sufficiently small learning rate, we have the following findings:\n• Fine-tuning alters pretraining capabilities by minimally transforming them. We find that when a relevant pretraining capability is present, the fine-tuned model learns a minimally transformed version of it. We call the transformed portion a wrapper. • Wrappers are generally very localized. We show that the wrappers transforming a model's pretraining capabilitiies are often extremely localized: e.g., via mere pruning of a few weights or neurons, we show the model can start to reuse its pretraining capability and forget how to perform the downstream task. Relatedly, we find that via a simple linear probe, we are still able to retrieve outputs expected from the pretrained model. • Reverse fine-tuning to \"revive\" a capability. In scenarios where upon fine-tuning a model behaviorally seems to not possess a capability, we find that further fine-tuning the model on a subset of pretraining data leads to a sample-efficient \"revival\" of the capability. We corroborate these results in a realistic setup using the TinyStories dataset (Eldan & Li, 2023)." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b62", "b12", "b63", "b14", "b83", "b6", "b27", "b91", "b81", "b28", "b59", "b39", "b26", "b51", "b83", "b36", "b76", "b21", "b49", "b34", "b89", "b50", "b31", "b48", "b46", "b0", "b33", "b92", "b38", "b55", "b78", "b93", "b15", "b56", "b93", "b20" ], "table_ref": [], "text": "Fine-tuning in the \"foundation model\" era. Fine-tuning large-scale foundation models pretrained on huge datasets, such as LLMs (Radford et al., 2019;Brown et al., 2020) or large vision models (Radford et al., 2021;Caron et al., 2021), has become the norm in most domains of machine learning. Accordingly, several fine-tuning methods have been proposed in recent years, e.g., instruction finetuning (Wei et al., 2021;Liu et al., 2022b;Askell et al., 2021), parameter-efficient fine-tuning (Houlsby et al., 2019;Zaken et al., 2021;Wang et al., 2022), low-rank adaptation (Hu et al., 2021;Pfeiffer et al., 2020;Lialin et al., 2023), and weight averaging (Gueta et al., 2023;Matena & Raffel, 2022). The diversity of these protocols makes fine-tuning a general, umbrella term for related methods used to adapt a pretrained model to elicit its most relevant capabilities. For precision, we restrict this paper to fine-tuning protocols that continue training of a pretrained model on a smaller downstream dataset at a learning rate that is often one-three orders of magnitude lower than the average pretraining one. 
Such protocols are widely used in practice, e.g., in instruction fine-tuning (Wei et al., 2021).\nUnderstanding fine-tuning. A few papers theoretically analyze fine-tuning (Lampinen & Ganguli, 2018;Tripuraneni et al., 2020;Gerace et al., 2022;Maddox et al., 2021;Kumar et al., 2022) under strong assumptions such as relatively simple model classes (e.g., linear functions) or a kernel view of deep learning, which, as shown by Yang & Hu (2020), trivializes the notion of feature transfer in fine-tuning / transfer learning (though see Malladi et al. (2023) for a notable exception). Prior works have also evaluated the effects of fine-tuning via the lens of mode connectivity (Juneja et al., 2022;Lubana et al., 2022), behavioral evaluations (Lovering et al., 2021), and intrinsic dimensionality of the loss landscape (Aghajanyan et al., 2020). In contrast, we aim to provide a mechanistic analysis of how fine-tuning changes model capabilities. A relevant recent work by Kotha et al. (2023) claims that fine-tuning is unlikely to alter a model's capabilities and supports this claim by providing a behavioral analysis of linear regression tasks and realistic models via a novel prompting strategy.\nModel interpretability via synthetic tasks. Several recent works have focused on mechanistically understanding how Transformers learn synthetic language generation tasks, such as learning formal grammars and board games (Allen-Zhu & Li, 2023c;Zhao et al., 2023;Li et al., 2023;Nanda et al., 2023;Liu et al., 2022a;Valvoda et al., 2022;Liu et al., 2023a;Zhou et al., 2023;Chan et al., 2022).\nThe goal of such papers, including ours, is not necessarily to provide accurate explanations for the success of LLMs, but to develop concrete hypotheses that can be used to develop grounded experiments or tools for understanding their behavior. For example, in a recent work, Allen-Zhu & Li (2023a;b) use a synthetically designed setup to develop hypotheses for how \"knowledge\" about an entity is stored in a pretrained model, showing such knowledge can often be manipulated via relatively simple linear transformations. Similarly, Okawa et al. (2023) use a procedurally defined multimodal dataset to demonstrate that emergent capabilities seen in neural networks are partially driven by the compositional nature of real world data. In another work, Zhou et al. (2023) utilize Tracr compiled Transformers to hypothesize and demonstrate that if primitive operations involved in a formal algorithm can be implemented by a model, stepwise inference is sufficient to enable length generalization. Similarly, Feng et al. (2023) use context-free grammars to demonstrate stepwise inference allows Transformers to solve problems that require dynamic programming." }, { "figure_ref": [], "heading": "DEFINING OUR NOTION OF CAPABILITIES", "publication_ref": [], "table_ref": [], "text": "For precision and to motivate our experimental setup, we first discuss the notion of capabilities that we aim to capture for analyzing how fine-tuning alters a model (see Tab. 2 for a summary of notations).\nWe use an idealized definition to communicate our primary intuition and emphasize that we do not expect all capabilities in a pretrained model will act as perfectly as the definition necessitates. However, for the procedural tasks used in this work, our idealized notion is fairly representative.\nLet D PT denote a dataset sampled from a distribution P X over the domain X. We will assume the domain X can itself be factorized into two domains X I and X D . 
Correspondingly, a sample x ∈ X can be divided into a tuple of variables (x i ∈ X I , x d ∈ X D ), where x i identifies which capability a model should use to process the information encoded by the variable x d . This decomposition captures the idea that different prompts can force a pretrained LLM to elicit different capabilities, as shown in Fig. 1. The identifier of capability c is denoted i c . Pretraining on D PT yields us an L-layer model M(.) : X → Y, where often Y = X for language models. Let Read l (M (.)) denote the action where a linear layer is trained on intermediate outputs at layer l of model M using D PT . Under this setup, we define a capability as follows.\nDefinition 1. (Capability.) Define a surjective map f C : X D → Y C , where Y C ⊂ Y. Let X C ⊂ X be a sub-domain s.t. ∀ x ∈ X C , the capability identifier variable is the same, i.e., x i = i C . Then, we say the model M \"possesses a capability C\" if for all x ∈ X C , ∃ l ≤ L s.t. Read l (M (x)) = f C (x d ).\nA linear readout at an intermediate layer is used in the definition above to emphasize that the notion of a capability need not correspond to only input-output behavior. Further, the definition is restricted to a sub-domain of the overall input space, which we argue is important to define a system's capabilities. For ex., one can claim an 8-bit adder circuit possesses the capability to perform addition, but, technically, this is true only over the domain of 8-bit numbers; for inputs with more than 8-bit precision, the circuit will see an overflow error, generating an incorrect but syntactically valid output.\nSimilarly, an LLM may possess the capability to identify the sentiment of a passage of text in a specific language, but possibly fail when inputs in a different language are shown. Such structured failures imply claiming the existence of a capability should account for the input domain.\nWe next consider how the fine-tuning distribution P FT X over the domain X can interact with capabilities exhibited in the pretrained model. Our goal here is to capture the fact that a large-scale pretraining corpora is likely to have non-zero probability under the fine-tuning distribution, i.e., it is unlikely that a pretrained model will lack any capability relevant to the downstream task. This motivates a notion of \"relevance of a capability\". Specifically, let D FT ∼ P FT,E X denote the downstream dataset used for fine-tuning, where P FT,E X is the empirical distribution that captures a subset of the support with non-zero probability in the distribution P FT X . Definition 2. (Relevance of a Capability.) Assume the capability C in a pretrained model can be transformed to a map g • f C via fine-tuning on D FT , where |D FT | ≪ |D PT |, such that for all x ∼ P FT,E X , the correct output is produced. Then, if for all x ∼ P FT X , g • f C yields the correct output, we claim capability C is strongly relevant to the fine-tuning task; else, we call it weakly relevant.\nFigure 2: Capability Relevance. Consider the task of completing a passage while maintaining its narrative. Herein, the ability to recognize the sentiment of a text will be deemed strongly relevant and the ability to recognize negative words weakly relevant. 
Such words are often spuriously correlated with a negative sentiment.\nFor example, a weakly relevant capability can involve the ability to recognize a spurious feature that the model can learn to exploit to perform well on the fine-tuning dataset, without enabling generalization to the overall distribution that the fine-tuning dataset is sampled from. Meanwhile, a strongly relevant capability is one that extracts a causally relevant feature for that task (see Fig. 2 for an example). When a weakly relevant pretraining capability is available, we empirically observe that we can often identify specific components in the latter half of the model (e.g., neurons or layers) that seem to implement the transform g in Def. 2. In such cases, we call g a \"wrapper\" and g • C a \"wrapped capability\". If we intervene on the model by either removing the wrapper or training it to forget the wrapper, we find the model starts to perform well on the pretraining task again. In such cases, we say the pretraining capability has been \"revived\"." }, { "figure_ref": [], "heading": "BUILDING CAPABLE MODELS: TRACR AND PCFGS", "publication_ref": [ "b68", "b77", "b41", "b32" ], "table_ref": [], "text": "We next describe the setup used in this work for analyzing how fine-tuning alters a model's capabilities (see Fig. 3 and App. B). Due to the lack of clarity on what capabilities a language model possesses or what training data it has seen, we primarily focus on procedurally defined setups that enable clear interpretability. To understand how the relevance of a capability affects fine-tuning, we randomly embed a predefined spurious feature into the fine-tuning dataset. Specifically, the feature correlates with the features extracted by the pretraining capability; if the feature is \"simple\" enough, the model preferentially exploits it to reduce the downstream loss (Shah et al., 2020;Trivedi et al., 2023). Figure 3: Experimental setup. We primarily analyze two setups: (i) Tracr \"compiled\" models with predefined capabilities and (ii) models trained to learn capabilities defined via a PCFG, following Allen-Zhu & Li (2023c). During fine-tuning, we train the model on a dataset D FT that promotes learning of a capability C ′ . We randomly embed spurious features in the fine-tuning dataset that correlate with features extracted by a pretraining capability C to operationalize capability relevance.\nCompiled capabilities with Tracr. For a fully controllable system, we use the recently proposed Tracr library (Lindner et al., 2023). Tracr enables \"compiling\" a transformer model with a set of predefined computational primitives over a string of characters from the English alphabet. Accordingly, we define a set of capabilities as a Tracr program and compile it into a Transformer via Tracr (see App. B.1 for a detailed pipeline). The model is then fine-tuned on a downstream task to which the compiled capability may either be weakly or strongly relevant. While we analyze two tasks in this setup, for the main body of the paper, we focus on only the following one.\n• Counter: Compile the capability to count the number of occurrences of a token O PT in a string into the model; fine-tune to count occurrences of another token O FT . If r(x, O) denotes the number of occurrences of a token O in a string x, the spurious correlation is defined by enforcing a constant difference in token occurrences, i.e., r(x, O FT ) − r(x, O PT ) = q. See also Alg. 1 and Fig. 12; a sketch of how such correlated fine-tuning data can be constructed is given below.
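As referenced in the bullet above, a minimal sketch of constructing such a correlated fine-tuning set is shown below; the alphabet, string length, and rejection-sampling strategy are illustrative choices, not the exact generator from Alg. 1.

```python
import random

VOCAB = list("abcdefghij")        # illustrative token set
O_PT, O_FT, Q = "a", "b", 1       # pretraining operand, fine-tuning operand, count offset

def sample_ft_example(length=20, c_tr=1.0):
    """Draw one fine-tuning example. With probability c_tr, the spurious
    correlation r(x, O_FT) - r(x, O_PT) = Q is enforced by rejection
    sampling; otherwise the counts are left unconstrained."""
    enforce = random.random() < c_tr
    while True:
        x = [random.choice(VOCAB) for _ in range(length)]
        if not enforce or x.count(O_FT) - x.count(O_PT) == Q:
            break
    return x, x.count(O_FT)       # (input string, label for the fine-tuning task)
```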
As an example, note that in the Counter setup, the model can exploit its pretraining capability and get the correct output on the fine-tuning dataset by merely adding q to the count of O PT tokens. This wrapped capability will, however, perform poorly on samples without the correlation.\nLearned capabilities with PCFGs. In this setup, capabilities are \"learned\", akin to practical situations. This allows us to probe the effects of different pretraining design choices, e.g., the distribution of the pretraining data. Specifically, we follow recent work by Allen-Zhu & Li (2023c) and train a minGPT model (Karpathy, 2020) via autoregressive training on probabilistic context-free grammars (PCFGs), a formal model of language that captures syntactic properties. Broadly, the data-generating process involves a tree traversal (see Fig. 3), starting from an initial root node and randomly choosing and navigating a set of production rules of the grammar from start/intermediate nodes to intermediate/terminal nodes, stopping only when a terminal node is reached. The terminal nodes reached by all paths starting at the root node are concatenated to define a string x from the grammar (see Appendix for more details). We prepend special tokens T and O, called \"task family\" and \"operand\" tokens, that specify a certain task must be performed on the string x; e.g., count the occurrences (a task family) of a certain token (operand) in a string. Overall, a specific pair of the task family and operand tokens instantiates a task in our setup. The ground truth output of this task and a special token indicating that the output should be produced at the next position are appended at the end of the string in the training data (see App. B.2 for further details and Fig. 15 for an example). A sketch of this sampling procedure is given below.\nOur experiments thus involve the following steps. (i) Pretrain a model on a set of task families. Every sample begins with the task family and operand tokens to specify the task. This ensures different tasks do not \"conflict\" (assign different labels to the same input), since, by construction, they have non-overlapping support. (ii) Fine-tune the model on a task which may or may not have been included during pretraining. (iii) Evaluate how this fine-tuning affects the model. The data-generating process involves a uniform prior over task family tokens; meanwhile, the set of operand tokens seen during pretraining, denoted {O PT }, has a multinomial sampling prior. Specifically, the probability of sampling a specific token O PT ∈ {O PT } under task T is denoted P T ( O PT ). If this probability is low, the model may not learn the relevant capability to perform the task specified by the special tokens. While we analyze the effect of fine-tuning in two broad setups, using a model pretrained on five distinct task families relating to counting and indexing elements of a string, we focus on only the counting task in the main body of the paper.
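The following is a toy version of the data-generating process just described: a small hand-written grammar is expanded from the root until only terminals remain, and the resulting string is wrapped with task-family, operand, and output-marker tokens. The grammar, token names, and the "=" output marker are illustrative stand-ins, not the exact grammar or vocabulary used in the experiments.

```python
import random

# Toy production rules; nonterminals (uppercase) expand until only terminals remain.
RULES = {
    "S": [["A", "B"], ["B", "A", "A"]],
    "A": [["a", "c"], ["c", "A"], ["a"]],
    "B": [["b"], ["b", "B"], ["c", "b"]],
}

def expand(symbol):
    """Depth-first traversal of the grammar tree rooted at `symbol`."""
    if symbol not in RULES:                      # terminal node reached
        return [symbol]
    out = []
    for child in random.choice(RULES[symbol]):
        out.extend(expand(child))
    return out

def make_sample(task="count", operand="b"):
    """Prepend the task-family and operand tokens, append the output marker
    and the ground-truth answer, mirroring the format described above."""
    x = expand("S")
    if task == "count":
        answer = x.count(operand)
    else:                                        # e.g., an index-of-occurrence task family
        answer = x.index(operand) if operand in x else -1
    return [task, operand] + x + ["="] + [str(answer)]
```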
" }, { "figure_ref": [], "heading": "EVALUATION SETUP DETAILS", "publication_ref": [ "b80", "b73", "b67", "b87", "b35", "b74", "b23", "b9", "b52", "b7", "b29", "b9", "b11", "b10" ], "table_ref": [], "text": "In this section, we provide several results indicating that fine-tuning rarely elicits meaningful changes to pretraining capabilities. To this end, we borrow several protocols commonly used in the field of mechanistic interpretability for our analysis (see Fig. 4), specifically network pruning (Voita et al., 2019;Tanaka et al., 2019), attention map visualizations (Serrano & Smith, 2019;Wiegreffe & Pinter, 2019;Lai & Tan, 2019), and probing classifiers (Tenney et al., 2019;Voita & Titov, 2020;Geva et al., 2023;2022). We use multiple tools for all experiments since each tool, individually, is known to suffer from pitfalls (Meister et al., 2021;Bai et al., 2021;Jain & Wallace, 2019;Belinkov, 2022;Bolukbasi et al., 2021). Demonstrating our claims consistently hold true across a diverse set of tools improves our conclusions' robustness to pitfalls of a specific tool. Additionally, we propose a methodology called reverse fine-tuning (reFT), wherein one takes a pretrained model, fine-tunes it on a downstream dataset, and then fine-tunes it again in the \"reverse\" direction, i.e., on a dataset sampled from the original pretraining distribution.\nFigure 4: Analysis protocols. We analyze how fine-tuning affects a pretrained model's capabilities by (i) Reverse Fine-tuning, (ii) network pruning, (iii) attention visualization, and (iv) probing classifiers. We use (ii)-(iv) to show fine-tuning often yields wrapped capabilities. For further evidence, we use (i) and (ii) and find we can \"revive\" the original capabilities, i.e., the model starts performing well on the pretraining task again. See App. D for precise details.\nWe argue that if the behavior corresponding to a capability from M is retrieved in a few steps of reFT, fine-tuning did not meaningfully alter said capability (this claim can be formalized using results by Bengio et al. (2019); Le Priol et al. (2021)).\nWe primarily focus on the learned capabilities setup of PCFG counter in the main paper, relegating most results on compiled capabilities with Tracr to App. G and other results with PCFGs to App. H; findings remain consistent across all settings. In the PCFG counter setup, the model is pretrained, amongst other tasks, to count tokens from the subset {a, b, c} in a given string; during fine-tuning, the model is trained to count the token O FT = b, wherein the spurious correlation is defined by enforcing the count of b to be 1 more than that of a. The probability that a randomly sampled datapoint from the train or test fine-tuning dataset contains the spurious correlation is denoted C Tr and C Te , respectively. Here, C Tr ∈ {0.0, 0.5, 0.8, 1.0} and C Te ∈ {0.0, 1.0}. We use three sets of sampling probabilities of the task operands in the pretraining data: P L T = (0.999, 0.001, 0.000), P M T = (0.9, 0.1, 0.0), or P H T = (0.5, 0.3, 0.2). These priors indicate a low/medium/high probability of sampling O FT . We use the following learning rates for fine-tuning: η M = η PT /10 and η S = η PT /100, where η PT is the average pretraining learning rate." }, { "figure_ref": [ "fig_2" ], "heading": "RESULTS AND DISCUSSION", "publication_ref": [], "table_ref": [], "text": "Behavioral assessment of fine-tuning: We first evaluate the model's learning dynamics during fine-tuning (see Fig. 5 and Tab. 5). Figure 5: Fine-tuning accuracy w.r.t. number of training iterations.
We vary the probability of sampling the token O FT in the pretraining data and the spurious correlation in the fine-tuning datasets. When the prior is sufficiently high (a, b), we find the model learns to perform well on the downstream task. Meanwhile, if the prior is low (c), the model learns the downstream task only if a high enough learning rate is used and the spurious correlation is imperfect. This indicates the ability to extract information relevant for the downstream task is likely to be exploited during fine-tuning. Figure 6: Impact of sampling prior on the pretraining task's accuracy as fine-tuning is performed. We plot accuracy on the pretraining task w.r.t. fine-tuning iterations. When the sampling prior of O FT is low during pretraining, the pretraining task accuracy quickly plummets, especially if the spurious correlation is high; having a high sampling prior mitigates this behavior. This indicates pretraining capabilities are affected the most when they are weakly relevant. Figure 7: Pruning a few neurons is sufficient to retrieve pretraining task accuracy. We plot accuracy w.r.t. the number of neurons pruned to improve performance on the pretraining task. We see that when a small learning rate is used for fine-tuning, the pretraining task's performance improves after just 5-15 neurons are pruned (top), while the fine-tuning task's performance reduces correspondingly (bottom). We argue these neurons serve as a wrapper to minimally alter the weakly relevant pretraining capability and exploit the spurious correlation present in the fine-tuning data.\nWhen the pretraining prior has a low probability of sampling the token O FT , we see the fine-tuned model performs well only when the spurious correlation is present, i.e., C Te = 1. As the sampling probability is increased, however, we observe this behavior significantly changes. In particular, even if the model is fine-tuned with a high value of C Tr , albeit less than 1, it starts to perform well on the test data regardless of the presence of the spurious feature. Note that the performance is not high to begin with, indicating the ability to count O FT was learned during fine-tuning; however, having a sufficiently large sampling probability for the token during pretraining leads the model to avoid the spurious correlation. This indicates a pretraining ability to extract information relevant for the downstream task is likely to be exploited during fine-tuning. This is further corroborated by the results in Fig. 6, where we observe that when the spurious correlation is present in the fine-tuning data, accuracy on the pretraining task is affected the most if the sampling prior of the target token was low during pretraining. We next analyze these results mechanistically.\nPruning / Probing fine-tuned models indicates learning of wrapped capabilities. Our results above indicate the model exploits its weakly relevant capabilities, i.e., the capability that helps exploit any spurious correlation, to solve the downstream task. We hypothesize that, at a mechanistic level, the model exploits the weakly relevant capability by learning a wrapper over it. To evaluate this, we analyze the models fine-tuned with a low sampling prior via network pruning and linear probing (see App. D for setup details). Specifically, we prune the fine-tuned models to find the most salient weights for reducing loss on the pretraining task of counting O PT . If the model learns a wrapper on this capability, the neurons we find should correspond to this wrapper, such that deleting them recovers the capability to count that token.
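A minimal sketch of this pruning protocol is given below: each MLP unit is ablated in turn (via a forward hook that zeroes its output) and scored by the resulting accuracy on the pretraining task, so the handful of units whose removal most improves that accuracy are the candidate wrapper neurons. The `mlp_layers` handle (assumed to be linear MLP modules) and the `pretrain_accuracy` evaluation helper are assumptions about how the model and evaluation are organized; the brute-force search is for clarity, not efficiency.

```python
import torch

@torch.no_grad()
def accuracy_with_unit_ablated(model, mlp_layer, unit, pretrain_accuracy):
    """Pretraining-task accuracy when a single MLP unit's output is zeroed."""
    def zero_unit(module, inputs, output):
        output = output.clone()
        output[..., unit] = 0.0
        return output
    handle = mlp_layer.register_forward_hook(zero_unit)
    try:
        return pretrain_accuracy(model)          # accuracy on counting O_PT
    finally:
        handle.remove()

@torch.no_grad()
def find_wrapper_neurons(model, mlp_layers, pretrain_accuracy, k=15):
    """Rank every MLP unit by how much its removal helps the pretraining task;
    the top-k units are the candidate 'wrapper' neurons."""
    baseline = pretrain_accuracy(model)
    ranked = []
    for layer in mlp_layers:                     # brute force over layers and units
        for unit in range(layer.out_features):
            acc = accuracy_with_unit_ablated(model, layer, unit, pretrain_accuracy)
            ranked.append((acc - baseline, layer, unit))
    ranked.sort(key=lambda t: t[0], reverse=True)
    return ranked[:k]
```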
As shown in Fig. 7, we find this is indeed the case-in a setup with weak relevance of capabilities, pruning a very small number of neurons is sufficient to revive the ability to perform well on the original task of counting O PT . To assess this further, we train a linear probe on the residual output of every block of the transformer model and determine whether the count of O PT can be accurately computed via the fine-tuned model. As shown in Fig. 8 We observe that when a strongly relevant capability is present (a, b), the model very quickly (0.1-1K iterations) starts to perform well on the task via reFT , even if behavior relevant to the capability ceased during pretraining (e.g., when C Tr is 1). Meanwhile, when the model possesses a weakly relevant capability (c), this \"revival\" is slightly slower (3K iterations). In contrast, the Scr. + FT baseline only reaches perfect accuracy at 4.5K iterations and when using a larger learning rate, i.e., η M .\nreFT enables \"revival\" of pretraining capabilities. To further corroborate our claims above, we use a model fine-tuned to count O FT and reverse fine-tune it to re-learn the ability to count O PT . As a baseline, we also report a protocol called Scr. + FT, wherein the model is initialized with parameters pre-trained to count O FT and then finetuned to count O PT . Note that this baseline and the reFT protocol differ in their initialization state: former is initialized with parameters pretrained to count O FT , while latter is initialized with parameters pretrained to count O PT and fine-tuned to count O FT . Results are shown in Fig. 9. We see the model starts to perform well on the pre-training task even if a small learning rate is used for reFT , i.e., even minimally changing the fine-tuned model's parameters is sufficient to regain the pretraining capability! Further, increasing the sampling prior of O FT accelerates this behavior. This indicates that when a strongly relevant capability is present, the model essentially amplifies its use, but does not catastrophically affect the pretraining capability itself; meanwhile, with a weakly relevant capability (low sampling prior during pretraining), even though the performance is initially poor on the pretraining task, in relatively few iterations (compared to baseline), the accuracy becomes perfect. We plot the loss during reverse fine-tuning (reFT ) to again produce stories with the forbidden feature. Fine-tuned models' losses go down very quickly (30-300 iterations) compared to baselines (which never reach the same loss; also see Tab. 1). Both these results indicate the capability of feature identification, a necessary capability for story modelling, continues to persist after fine-tuning.\nused. A larger learning rate is, however, able to alter the model computation, but only if the pretraining capability is not weakly relevant to the fine-tuning task, i.e., when C Tr = 0; otherwise, we again find the model continues to pay attention to the pretraining target." }, { "figure_ref": [], "heading": "VALIDATING OUR HYPOTHESES ON TINYSTORIES", "publication_ref": [ "b60", "b90" ], "table_ref": [], "text": "Deletion Type Twist Proportion at Iteration 0 30 300 3000 F (ηM ) 44% 81% 81% 82% F + R (ηM ) 12% 56% 69% 75% F + MM (ηM ) 31% 88% 50% 75% F (ηS) 69% 88% 75% 94% F + R (ηS) 12% 44% 81% 81% F + MM (ηS) 50% 81% 62% 81% Not in PT 12% 31% 44% 81% Table 1: TinyStories reFT Analysis. We report the percent of generations with a twist during reverse fine-tuning for the twist capability. 
F, R, and MM stand for Filtering, Randomisation and Mix & Match, our three fine-tuning protocols (see App. F for details). Regardless of learning rate and protocol, models relearn to generate stories with twist more sample-efficiently than the control model pre-trained on data w/o twists and fine-tuned to generate them (Not in PT).\nTo give additional strength to our results, we perform an analysis using more realistic language models trained on the TinyStories-Instruct dataset (Eldan & Li, 2023) (see App. B.3 for an example). These models are able to follow specific instructions to write coherent English stories over multiple paragraphs. We perform experiments analogous to the reFT and probing experiments in the previous sections, but explicitly focusing now on whether fine-tuning can delete capabilities present in pre-trained models. Models are pre-trained to generate stories with specific story features (such as containing a twist, foreshadowing, or bad ending) and fine-tuned to not generate stories with a specific feature (twist) (see App. F for details on the protocols). We probe these models to detect the deleted feature from the intermediate model outputs in Fig. 11, where the dynamics of loss during reFT on learning to generate stories with the deleted feature are also shown. We also report the percentage of stories with the deleted feature generated by models during reFT in Table 1, where the generated stories are processed by a fine-tuned GPT-3.5 classifier to predict if the story contains the deleted feature (see App. F for details). Overall, we find that \"deleted\" capabilities can be easily and sample-efficiently (compared to the baseline) recovered, i.e., stories with that feature can be generated again, regardless of the fine-tuning protocol used. These results support our hypotheses that fine-tuning only minimally alters pre-trained model capabilities. We also highlight a few recent papers that propose similar protocols as reFT and experiments as ours with further realistic settings (Qi et al., 2023;Yang et al., 2023;Anonymous, 2023a;b)." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this work we show that fine-tuning generally alters pre-trained model capabilities via small, localized changes, minimally transforming them. We perform our analysis both with existing Preprint mechanistic interpretability tools as well as our proposed reFT method. Overall, this points the way for future work both understanding how fine-tuning works in more realistic settings with larger models, as well as developing methods beyond fine-tuning that alter pre-trained model capabilities more substantially." }, { "figure_ref": [], "heading": "A ORGANIZATION OF APPENDIX", "publication_ref": [], "table_ref": [], "text": "In the appendix we present a comprehensive analysis of our claims on Tracr, PCFG and TinyStories-Instruct using different mechanistic interpretability tools discussed in Section-D of the main paper. We also present a summary of the notations used in this work in Tab. 2. Overall, the appendix is organized as follows:\n• Sec. B presents details of the Tracr, PCFG and Tiny Stories datasets utilized in this work.\n• Sec. C presents the training and model details for each of the datasets considered.\n• Sec. D lists the protocols used for different mechanistic interpretability tools like attention maps, probing, pruning and reverse fine-tuning.\n• Sec. 
E provides a few more results in practically relevant contexts, such as in a synthetic jailbreaking setup.\n-Sec. E.1 studies the effect of using different fractions of pre-training and fine-tuning data points for fine-tuning. -Sec. E.2 presents the jailbreaking analysis using the PCFG setup.\n-Sec. E.3 shows reverse fine-tuning a fine-tuned model is sample efficient compared to baselines for both PCFG and Tracr models. -Sec. E.4 presents reverse fine-tuning analysis of a fine-tuning protocol that actively tries to remove a capability from PCFG / Tracr models.\n• Sec. F presents detailed discussion of setup details and results on TinyStories.\n• Sec. G presents additional results on Tracr for counter and max element tasks.\n• Sec. H presents additional results on PCFG for the counting and index of occurrence tasks. The overall distribution defining the downstream fine-tuning task DFT Dataset used for fine-tuning P FT,E X Empirical distribution from which the fine-tuning dataset is sampled T Denotes a task to be performed by the model (e.g., count) O Denotes an operand that will be processed by to perform the task T {OPT} Set of operand tokens seen during pretraining OPT A specific token used as an operand during pretraining OFT A specific token used as an operand during fine-tuning r(x, O)\nDenotes the result of executing a task from Sec. 4 on a string x for some operand O CTr Probability that a randomly sampled string in the training data used for fine-tuning has a spurious correlation between the pretraining capability and the downstream task CTe Probability that a randomly sampled string in the test data used for evaluating fine-tuned models has a spurious correlation between the pretraining capability and the downstream task PT(O)\nSampling prior. Denotes the probability that when a string with task token T is sampled during pretraining, the operand to perform the task on is O P H C , P M C , P S C Sampling priors such that the probability of sampling the target token for fine-tuning ( OFT ) is high (P H C ), medium (P M C ), or small (P S C ) ηM , ηS, ηV S Medium / Small / Very-small learning rates used for fine-tuning. ηV S is only used for a specific reverse fine-tuning experiment with " }, { "figure_ref": [], "heading": "B ADDITIONAL DETAILS ON DATASETS", "publication_ref": [ "b41" ], "table_ref": [], "text": "We consider three experimental setups: Compiled programs with Tracr (Lindner et al., 2023), learning models on Probabilistic Context Free Grammars (PCFG) (Allen-Zhu & Li, 2023c), and the TinyStories Instruct dataset." }, { "figure_ref": [], "heading": "B.1 TRACR DETAILS", "publication_ref": [ "b41", "b85" ], "table_ref": [], "text": "Tracr (Lindner et al., 2023) generates a transformer model using the RASP library by Weiss et al. (2021). The specific code snippet used to generate the Tracr models for the counting and the max element tasks are shown in Fig. 1 and Fig. 2 respectively. 
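For orientation, compiling such a RASP program into transformer weights goes through Tracr's standard compiler interface. The sketch below is illustrative rather than the exact snippet referenced above: `count_program` stands for the RASP expression in Alg. 1, and the vocabulary, maximum sequence length, and BOS token are assumptions chosen to match the examples in Fig. 12.

```python
from tracr.compiler import compiling

# `count_program` is the RASP expression from Alg. 1 (count occurrences of 'a').
compiled = compiling.compile_rasp_to_model(
    count_program,
    vocab={"a", "b", "c", "d", "e", "f", "g", "h", "i"},  # assumed operand vocabulary
    max_seq_len=48,                                        # assumed context length
    compiler_bos="$",                                      # '$' plays the BOS role in Fig. 12
)

# The compiled model can be applied directly to a token sequence.
print(compiled.apply(["$", "a", "b", "a", "c"]).decoded)
```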
The models corresponding to these tasks is implemented with three standard transformer blocks, where each block consists of a self-attention layer followed by two MLP layers.\nWe analyze the following two tasks to understand the effects of fine-tuning on a pretrained model's capabilities.\n• , c, a, d, a, b, c, b, a, d, f, b, g, c, e, b, b, a, h, j, i, b, d, e, f, ,i, h, f, e, g, a, b, g, f, h, j, c, b, e, d, d, h, j, i, b, a, b, #, Answer: 10 Sample input:\nSOS + T + O + O ′ + SOT + Txt + EOT + ART + Ans + EOS.(1)\nWe consider the following tasks for pre-training:\n• Counting (C): Counting number of O (say a) for the last O ′ positions (forty). This example will be written as Ca40.\n• Counting composition of elements (CC): Counting number of O (say aa) for the last O ′ positions (forty). This example will be written as CCa40. Task Family Token T:: '(' Operand Token O:: 'a' Sample: $, (, a, 40, <, c, a, b, a, c, a, b, a, a, a, c, b, c, b, b, b, a, b, c, a, c, b, c, a, a, c, a, c, a, a, c, c, a, b, a, c, b, b, a, a, a, c, b, c, b, b, c, a, a, c, b, c, b, c, b, a, c, b, c, b, a, c, c, b, b, a, c, c, b, a, a, a, b, a, c, b, b, a, a, a, c, b, c, b, b, c, a, a, c, b, c, b, c, b, a, c, b, c, b, a, c, c, b, b, a, c, c, b, a, a, a, b, a, c, b, b, a, a, a, c, b, c, b, b, c, a, a, c, b, c, b, c, b, a, a, a, b, b, a, b, b, a, b, a, b, b, c, b, a, c, c, c, b, a, c, a, c, b, a, c, c, b, c, b, b, a, a, a, c, a, c, b, c, b, a, c • Index of occurrence (I): Index from the EOT token when O (say a) occurred for the O ′ th time (sixth). This example will be written as Ia10.\ns → r, q; s → q, p; p → m, n, o; p → n, o, m; q → n, m, o; q → m, n; r → o, m; r→ m, o, n; m → l, j; m → j, l, k; n → k, j, l; n → l, j, k; o →l, k, j; o → k, j; j → h, i; j → i, h; k → h, g, i; k → g, h, i; l → i, h, g; l → h, i, g; g → d, f\n• Index of occurrence of composition element (IC): Index from the EOT token when O (say aa) occurred for the O ′ th time (sixth). This example will be written as ICa10.\n• Token value at an index (T): The token value at index O ′ (forty) before the end token. O is NULL here. This example will be written as TNULL5.\nFor the \"Counting\", \"Counting composition of elements\", and \"Token value at index\" tasks, we set the value of O ′ token as 40. For \"Index of occurrence\" and \"Index of occurrence of composition element\" task, we set the value of O ′ token as 6. All five tasks above are considered during pre-training, but for fine-tuning we consider only a single task with a given operand. Specifically, we analyze fine-tuning the pre-trained models on the \"Counting\" and \"Index of occurrence\" tasks only.\nWe analyze the following two tasks to understand the effects of fine-tuning on a pretrained model's capabilities.\n• Counter: We intentionally reuse this task to demonstrate the effects of compilation of the capability via Tracr versus learning the capability via PCFGs. Instead of being compiled, the model is trained to count the number of tokens from a set of tokens {O PT }. The model is then fine-tuned to exclusively count a O FT ∈ {O PT } token. By making the sampling probability of O FT tokens high during pretraining, we can make the model preemptively performant on the downstream task; this allows us to model the notion of capability relevance. 
• Indexer: Amongst other tasks, the model is pretrained to output the index (location in a string) of a token from the set {O PT } occurs for the k th time; fine-tuning is performed to output the index of k th occurrence of another token O FT instead. We arbitrarily set k to 6 for our experiments, but emphasize that any integer less than context size can be used. If r(x, O) denotes the index of k th occurrence of a token O in a string x, the spurious correlation is enforced via constant offset q in operand token indices, i.e., r(x, O FT )r(x, O PT ) = q.\nWhile the pre-training dataset is generated by simply sampling from PCFG (see Fig. Summary: Eddie, an elderly engine who likes to sleep, is blown off the train tracks by a big wind, but a little bird and its friends help him get back on the tracks and become his best friends. Eddie learns that friends can make you feel young again. Features: Dialogue, Twist. Sentence: Eddie was happy again. Words: sleep, engine, elderly. Story: Once upon a time, there was an elderly engine named Eddie. Eddie was old and tired. He liked to sleep a lot. One day, Eddie was sleeping on the train tracks when he heard a little bird. \"Hello, Eddie!\" said the bird. \"Why are you sleeping here?\" \"I am old and tired,\" Eddie told the bird. \"I like to sleep.\" Just then, something unexpected happened. A big wind came and blew Eddie off the tracks. Eddie was scared but the little bird was brave. \"Don't worry, Eddie!\" said the bird. \"I will help you.\" The bird flew fast and found strong friends to help Eddie. They pushed Eddie back on the tracks. Eddie was happy again. From that day on, Eddie and the bird were best friends. And Eddie learned that even when you are old and tired, friends can make you feel young again.\nFigure 17: TinyStories Exemplar. An example from the TinyStories Instruct dataset (Eldan & Li, 2023)." }, { "figure_ref": [], "heading": "Preprint B.3 TINYSTORIES INSTRUCT", "publication_ref": [], "table_ref": [], "text": "For the TinyStories results, we use the TinyStories Instruct variant of the dataset (Eldan & Li, 2023) 1 . An example from this dataset is given in Fig. 17. For the fine-tuning datasets, we take the original dataset and alter it in several ways. Details are discussed in App. F." }, { "figure_ref": [], "heading": "C DETAILS ON TRAINING AND EVALUATION", "publication_ref": [], "table_ref": [], "text": "C.1 TRACR Compiled Model Details: The compiled model obtained for the counting and max identifier tasks consists of three blocks, wherein each block contains a single head attention layer followed by two layer MLP. No normalization layers are used by models developed using Tracr.\nTraining details: The compiled model is fine-tuned using SGD with momentum for 10K iterations with a batch size of 96. Tracr yields models with a rather sparse parameterization, which often yields unstable training dynamics (e.g., gradient spikes), especially with adaptive optimizers. To address this, we perform the following two interventions. First, we add a small amount of initial gaussian noise w noise ∈ N (0, 0.001) to the weights of the compiled model to densify them slightly. Note that the scale of this noise is not high, i.e., it avoids any performance loss but is sufficient enough to reduce gradient spikes resulting from extreme sparsity of model parameters. 
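A minimal sketch of this first intervention, assuming the compiled weights have been loaded into a PyTorch module and reading N(0, 0.001) as a noise scale of roughly 1e-3 (the actual implementation may instead operate on the JAX parameters produced by Tracr):

```python
import torch

def densify_(model, scale=1e-3):
    # Add a small amount of Gaussian noise to every parameter of the compiled,
    # highly sparse Tracr model so that fine-tuning gradients are better behaved.
    with torch.no_grad():
        for p in model.parameters():
            p.add_(torch.randn_like(p) * scale)
```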
Second, we choose to use on SGD with momentum as the optimizer, using the following four choices of learning rates: Large LR (10 -1 ), Medium LR (10 -2 ), Small LR (10 -3 ), and Very Small LR (10 -4 ). The characterization of \"Large\" or \"Small\" is based on a general heuristic of what learning rate regimes are commonly used with SGD in modern neural network training. Linear warmup is used for 2K iterations followed by a cosine schedule with a minimum learning rate of the order 10 -2 smaller than its max value. Evaluation of the fine-tuned model is done on both test set with and without the spurious correlation (ie. C Te = 0 and C Te = 1)." }, { "figure_ref": [], "heading": "C.2 PCFG", "publication_ref": [ "b32" ], "table_ref": [], "text": "Model details: We use the minGPT model by Karpathy (2020) for all experiments on the synthetically generated PCFG dataset, similar to Allen-Zhu & Li (2023c). The model has close to 3 million parameters and consists of 6 blocks each made up of multihead self attention with 6 heads and two layers of MLP layers with an embedding dimension of 192." }, { "figure_ref": [], "heading": "Pre-training details:", "publication_ref": [], "table_ref": [], "text": "Pretraining is performed from scratch with a learning rate of 10 -3 using the standard AdamW optimizer. Cosine learning rate is used along with linear warmup, where the warmup is used in the first 20% of the training. The model is trained using the standard next token prediction task used for training language models. We consider the set of five tasks mentioned in the previous section during the pre-training phase, but focus on only one of these tasks during fine-tuning. We use the task family token and an operand token to define the notion of a task. The task family token is sampled from a uniform distribution, while the operand token (O) is sampled from a multinomial distribution. The sampling probability for different operands is varied in the experimental setup to understand the effect of capability relevance in fine-tuning. More specifically, we analyze the following distributions for sampling the operand tokens (a, b, c): Fine-tuning details: While pre-training is done in the next token prediction fashion, fine-tuning is done in a supervised way where the model is required to just perform the desired fine-tuning task. We use the final iteration model obtained from pre-training as the initialization for fine-tuning. While pre-training is done on multiple pairs of task and operand tokens, the model is fine-tuned on a single pair of task and operand tokens. To simulate a similar setup for fine-tuning as in Tracr, we analyze the effect of fine-tuning the model using three different sets of learning rate: Large LR (η L : 10 -4 ), Medium LR (η M : 10 -5 ) and Small LR (η S : 10 -6 ). Fine-tuning is done for 10K iterations using AdamW optimizer with a batch size of 96 samples. Similar to pre-training phase, we use cosine learning rate with an initial warmup of 20% of the fine-tuning iterations. The minimum value of the learning rate is set to be 100× lower than the maximum learning rate. Similar to Tracr evaluation is done on both the test sets with and without the spurious correlation (C Te = 0 and C Te = 1).\n• P T (a) = 0." 
}, { "figure_ref": [], "heading": "D MECHANISTIC INTERPRETABILITY TOOLS SETUP", "publication_ref": [ "b53", "b47", "b54" ], "table_ref": [], "text": "In this section, we describe the different tools of interpretability considered in our work.\nAttention Maps: We present the attention maps for different tasks considered in the Tracr setup. Each map shows the tokens which are attending other tokens on the y axis and the token which are being attended to on the x-axis. If a token is attended by many other tokens, then, in a crude sense, this can imply that the presence of the token is impacting the underlying task performed by the model. In the Counter task, if significant attention is given to a's / b's is an indicator of the respective capability of the model. For the max identifier task, in the attention map in Block-0, the model implements the sorting function, where each token is attended by the tokens which are greater than that. The vocabulary order followed is a > b > c > d.... In the attention map of Block-2, the model implements the read function, where it outputs the token at the desired position in the sorted sequence.\nPruning: We consider single step pruning where we prune the weights/neurons with largest dot product between their gradient and weights, where the gradients are calculated by minimizing the loss for the capability we want to revive. More formally, let the weights of the model with N parameters be given by w i where i ∈ {0, 1, . . . , N -1}, Let the corresponding gradient be given by grad(w i ) then the top-K weights with largest value of grad(w i )w i are pruned off. This follows the pruning protocols proposed in prior work for reducing or preserving loss via pruning (Molchanov et al., 2016;Lubana & Dick, 2021;Mozer & Smolensky, 1988). We use weight pruning for the Tracr setup and neuron pruning for PCFG, where a neuron is defined as a row in the weight matrix. We present a detailed description of the pruning protocol considered in Algorithm-3.\nAlgorithm 3: Pruning Pseudocode. A fine-tuned model f θ is parameterized by θ and θ i denotes its i th neuron or weight (we prune neurons in PCFG experiments and weights in Tracr). Pretraining task family token is given by O PT and is prepended to a string X sampled from the data generating process, yielding the input O PT • X. The true value corresponding to pre-training task family token O PT is given by y. Let the cross-entropy loss be given by CE. Let Top K (W ) denote the indices of the top K values in the vector W . Probing: Probing is used to understand if a particular capability is present in the model. In this, we train a linear layer (probe) on top of every block (residual layer's output) of the mini-gpt model and Preprint analyze if the probe is able to perform on a task requiring the use of the desired capability. The probe is a linear layer with the output dimensions same as the vocabulary size of the model. The probe is trained using the data randomly sampled from the PCFG data generating process for 4K iterations using AdamW optimizer with maximum learning rate of 10 -3 which is decayed by a factor of 10 at 2K, 3K and 3.5K iterations. Training of the probe is done separately on the residual output of each of the six blocks present in the minGPT model. 
The stream corresponding to the answer token (Ans) is used as the input to the probe.\nReverse Fine-tuning: Same set of hyperparameters as used in the fine-tuning of the pre-trained Tracr model are used in reFT , except for the learning rate, which we force to be smaller than the corresponding fine-tuning learning rate. Note that this use of an even smaller learning rate is intentional: if the original pretraining capability can be revived even with this setup, it is stronger evidence that the pretraining capability was never forgotten or removed." }, { "figure_ref": [], "heading": "E ADDITIONAL RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_14" ], "heading": "E.1 FINE-TUNING IN PRESENCE OF SOME PRE-TRAINING DATA", "publication_ref": [ "b82", "b95" ], "table_ref": [], "text": "In this section, we demonstrate our claims also hold for an often used fine-tuning setup wherein, beyond the fine-tuning data, the model also gets to see some portion of the pretraining data again. Specifically, we perform three degrees of mixing of the pretraining and fine-tuning data: (i) 50% PT + 50% FT, (i) 10% PT + 90% FT, and (i) 0.1% PT + 99.9% FT. We show behavior results on how the performance of the model improves as a function of fine-tuning iterations for different spurious correlations for a low pretraining sampling prior in Figs. We emulate jailbreaking (Wei et al., 2023;Zou et al., 2023) in our PCFG setup by defining several task family tokens describing the same task. Specifically, for the \"Counter\" task, we use three task family tokens T N J , T J1 , T J2 to refer to the task in a string. Here subscript N J indicates the task family token will not allow jailbreaking, while J 1 /J 2 indicate the task family token can be used to jailbreak the model, as explained next. For pretraining, the token T N J may be paired with operand tokens a, b, c to learn to count them from inputs sampled from the PCFG. However, tokens T j1 , T j2 are used only for counting a. During fine-tuning, the model is fine-tuned to count the token b using the task family token T N J . For evaluation, we compute the model's accuracy on its ability to count the token a, using either the task family token T N J or T J1 , T J2 . As shown in Fig. 24, the model is unable to infer the count of a if the task family token T N J is used; however, if task family tokens T J1 , T J2 are used, the model performs perfectly if the prior for sampling the fine-tuning target b during pretraining was sufficiently high. We argue that this is expected because under a high sampling prior breaks the symmetry between task family tokens (indeed, T J1 is only seen with operand token a, but T N J is seen for all operand tokens. This indicates the pretraining capability continues to persist in the model, enabling jailbreaking. To further investigate this result, we also probe the fine-tuned models. Results are shown in Fig. 25. As expected, we see task family tokens T J1 , T J2 allow for linear readout of the count of a; however, we see that even for inputs with task family token T N J , the model does encode the count of a in the outputs around the middle layers! \nT J1 /T J2 0 2.5K 5K 7.5K 10K T NJ 0 2.5K 5K 7.5K 10K T J1 /T J2 (a)η M (b)η S  H T  M T  L T\nFigure 24: Jailbreaking analysis using PCFG. We report performance on the pretraining task (counting O PT ) as a function of fine-tuning iterations, where the fine-tuning task (counting O FT ) is performed using the task family token T NJ . 
We find that the model is able to learn the fine-tuning task and seemingly performs poorly on the pretraining task when task family token T NJ is used in the input. However, in presence of a sufficiently relevant capability (high pretraining prior for O FT ), using task family tokens T J1 or T J2 in the input shows the model can still perform the pretraining task perfectly-i.e., we can jailbreak the model. " }, { "figure_ref": [ "fig_2" ], "heading": "Preprint E.3 SAMPLE EFFICIENCY ANALYSIS FOR REVERSE FINE-TUNING", "publication_ref": [], "table_ref": [], "text": "To emphasize the fact that the pretraining capability is \"revived\" in the model relatively sampleefficiently, we repeat Fig. 9, where models trained on PCFG are reverse fine-tuned, and repeat the experiment with the Scr. + FT baseline for Tracr compiled models. As can be seen in Figs. 26, 27, compared to the baseline, the model learns to perform the pretraining task in substantially fewer iterations than the baseline. We note that for the Tracr models in these results, even an extremely small learning rate is sufficient to revive the pretraining capability! We also note that we do not sweep over the C Tr hyperparameter in the Tracr models because they are compiled, i.e., we cannot control the correlation with the pretraining capabilities in a meaningful way. \nC Tr = 0 C Tr = 1 Scr. + FT η M η S\nFigure 26: Reverse Fine-Tuning on PCFGs: We set C Te to be 0 to test if the model performs well regardless of a spurious correlation. We observe that when a strongly relevant capability is present (a, b), the model very quickly (0.1-1K iterations) starts to perform well on the task via reFT , even if behavior relevant to the capability ceased during pretraining (e.g., when C Tr is 1). Meanwhile, when the model possessesses a weakly relevant capability (c), this \"revival\" is slightly slower (3K iterations). In contrast, the Scr. + FT baseline only reaches perfect accuracy at 4.5K iterations and when using a larger learning rate η M . We set C Te to be 0 to test if the model performs well regardless of a spurious correlation. We observe that the fine-tuned model upon reFT very quickly starts starts to perform well on the pretraining task. Moreover, the protocol works even if an extremely small learing rate is used. In contrast, the Scr. + FT baseline only reaches a large learning rate η M is used, and does so less sample efficiently. We note that the results for η M learning rate look worse than the η S learning rate around 10 3 iterations because η M is too big of a learning rate, forcing the model to essentially go through a \"retraining\" phase." }, { "figure_ref": [ "fig_2" ], "heading": "Preprint E.4 REVERSE FINE-TUNING A MORE SAFETY-ORIENTED FINE-TUNING PROTOCOL", "publication_ref": [], "table_ref": [], "text": "The fine-tuning protocols used in the bulk of the paper focus on learning a new capability, e.g., counting a new operand, while promoting reusability of capabilities learned during pretraining. Part of our motivation is to see if a pretrained model is actively forced to remove a capability, does that work? To analyze this, we define a fine-tuning protocol called randFT wherein the model is trained to actively produce an incorrect output for inputs that require use of the pretraining capability. For example, if the model possessesses the capability to produce the count the number of occurrences of token O PT = a in a string, we fine-tune it to produce the count of tokens O FT = b in that string. 
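The reverse fine-tuning runs used to analyze these models (and throughout this appendix) follow a simple recipe: continue training the fine-tuned checkpoint on pretraining-task data with a learning rate no larger than the one used for fine-tuning. Below is a minimal PyTorch-style sketch, where the data loader, optimizer choice, and step budget are illustrative assumptions rather than the exact experimental configuration.

```python
import itertools
import torch
import torch.nn.functional as F

def reverse_finetune(model, pretrain_task_loader, lr=1e-6, steps=10_000):
    # Start from the *fine-tuned* checkpoint and train it back on the original
    # pretraining task; rapid recovery of accuracy indicates the capability was
    # wrapped rather than removed.
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    batches = itertools.islice(itertools.cycle(pretrain_task_loader), steps)
    for inputs, targets in batches:
        loss = F.cross_entropy(model(inputs), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```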
We analyze these fine-tuned models analyzed via reverse fine-tuning (reFT ), i.e., by further training them to produce the correct outputs (number of occurrences of token O PT ). We provide results for three baselines as well: (i) Scr., wherein the model is trained from scratch to learn to count the token a; (ii) Scr. + FT, wherein the model is initialized with parameters trained via trained from scratch to count a separate token ( O FT ) and then the model is fine-tuned to count the token O PT ; and (iii) reFT , which follows reverse fine-tuning models that were fine-tuned with the protocols used in the bulk of the paper, i.e., fine-tuned to learn a new capability that is related to the pretraining one.\nResults are shown in Fig. 28. We specifically zoom in on the the scenario where reFT takes the longest time, i.e., when the sampling prior of the downstream target O FT is low in pretraining data; results for other sampling priors are shown in Fig. 29 We see that reverse fine-tuning a randFT model is similarly sample-efficient as the standard reFT pipeline used in the bulk of the paper, while being more sample-efficient than the Scr. and Scr. + FT baselines.\nIn addition, we perform a probing analysis of the randFT models in Fig. 30. We again find that we can predict the information relevant for the pretraining task, i.e., the count of O PT . In most scenarios, we find we can infer the count of O PT with a similar trend as the pretrained model (left). A drop in performance is observed only when learning rate η M is used with a weakly relevant capability (low sampling prior). This indicates pretraining capabilities continues to persist upon fine-tuning." }, { "figure_ref": [], "heading": "F DETAILS AND RESULTS ON TINYSTORIES EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In this section, we describe our experiments on the TinyStories dataset in more detail. These experiments are designed to validate our hypotheses in a more realistic language modelling setting.\nOverall, the results support our hypothesis that fine-tuning does not lead to deletion of capabilities as they can be revived in a sample-efficient way and uncovered through probing." }, { "figure_ref": [], "heading": "F.1 MODEL TRAINING", "publication_ref": [ "b75" ], "table_ref": [], "text": "Dataset. We use the TinyStories (Eldan & Li, 2023) dataset to train our models. This data consists of children's stories written by GPT-3.5 and GPT-4. Each story is several paragraphs long, and comes with several attributes labelled: a set of three words that are included in the story; a sentence that is included in the story; a GPT-3.5-written summary of the story; and a list of 0-3 \"story features\", such as Twist, Dialogue or Bad Ending, which the story abides by.\nWe use the TinyStories-Instruct version of this dataset2 , wherein each story is prefixed with an \"instruction\" containing the story attributes described above, hence enabling the model to learn to conditionally generate stories based on an input or instruction.\nPre-training. We pretrain 91 million parameter autoregressive language models with a similar architecture to LLaMa 2 (Touvron et al., 2023), with a custom tokenizer with vocabulary of size 8192 trained on the dataset. 3 They have hidden dimension 768, 12 layers, and 12 attention heads per layer. 
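For reference, the reported architecture corresponds to a configuration along the following lines; this is only an illustrative summary of the numbers stated here, and the field names do not refer to any specific library.

```python
# Illustrative summary of the TinyStories-Instruct model configuration used here.
tinystories_model_config = dict(
    architecture="LLaMA-2-style decoder-only transformer",
    approx_params=91_000_000,
    vocab_size=8192,   # custom tokenizer trained on the dataset
    hidden_size=768,
    num_layers=12,
    num_heads=12,
)
```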
These models are trained with the standard language modelling cross-entropy loss, with batch size 128, sequence length 1024, no dropout, for 30,000 gradient steps, with a learning rate schedule with a linear warmup from 0 and cosine decay to 0, with maximum learning rate 0.001. These models achieve a loss of 0.8 at the end of training, and can generate coherent multi-paragraph stories given a specific instruction in the form it saw during training.\nFine-tuning. We are interested in analysing whether fine-tuning these models can alter underlying capabilities. The specific capability we investigate is that of generating stories containing Twists (which is one of the story features), and are analysing whether various fine-tuning protocols can remove this capability from the pre-trained model. We investigate a variety of fine-tuning protocols modelled after plausible realistic scenarios where one may want to fine-tune a model to not generate text of a certain type (e.g., highly toxic text), regardless of the input instruction. These include:\nFiltering fine-tunes the model on a dataset where all instances of stories with Twists are filtered out; Filtering + Mix & Match filters, and then replaces all instances of another, unrelated feature (in this case, Foreshadowing) in the instruction with the Twist feature; and Filtering + Randomisation filters, and then adds the \"Twist\" instruction to the prompt for stories that do not contain Twists, thus training the model to not model stories with Twists even if instructed. This last protocol acts as a kind of adversarial training (in that there are stories with the Twist instruction but no Twists), and introduces a spurious correlation between the Twist instruction and the foreshadowing capability, as in the Tracr and PCFG results.\nWe take the pre-trained model described above, and fine-tune it with these various protocols. We then perform reFT on a dataset of stories which all have Twists in, to measure the extent to which each fine-tuning protocol deleted the capability of Twist generation. To ensure a good control, we compare the reFT models to a model pre-trained on data with no Twist stories, which is then fine-tuned on Twist stories. The sample efficiency and final performance of this model serves as a comparison for the reFT ed models." }, { "figure_ref": [], "heading": "F.2 EVALUATION METRICS", "publication_ref": [], "table_ref": [], "text": "We evaluate whether the fine-tuning protocols have removed the capability to model and generate stories with Twists in multiple ways. Firstly, we look at the loss on stories with Twists. If fine-tuning deletes the Twist capability, we expect the loss on this dataset to increase.\nGPT Evaluations. To evaluate the generative capabilities of these models, we generate stories from them given prompt instructions with the Twist story feature. We then evaluate whether these stories contain Twists. To do this evaluation, we use the OpenAI GPT fine-tuning API4 to fine-tune a GPT-3.5 model to classify whether a given story has a Twist or not. To do this, we use the TinyStories dataset and accompanying labels. This fine-tuned model achieves 92% accuracy on a held-out test set after fine-tuning. We generate stories with multiple different prompts from both the fine-tuned and reverse fine-tuned models throughout fine-tuning, and measure the proportion of stories which are classified as having a Twist, which we call the generation score.\nProbing. 
As well as using reFT to measure whether the fine-tuning protocols have deleted the capability to generate Twists, we also use probing to evaluate whether fine-tuning removes information from internal representations. We train linear probes on the internal activations of the transformer models to predict which story features (e.g. Twist, Bad Ending, Dialogue) are present in the story. These probes take an average of the activations at the final 10 token positions of the story.\nGiven that this is a multi-label classification problem we employ a separate binary classification probe to classify the presence of each story feature. We use the accuracy of these probes at different layers before and after fine-tuning, and on the control pre-trained model which was trained on data with no Twists, to measure whether fine-tuning has removed information from the models' internal activations." }, { "figure_ref": [], "heading": "F.3 RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Reverse fine-tuning", "publication_ref": [], "table_ref": [], "text": "The loss on stories with Twist during fine-tuning is shown in Fig. 31. This shows that the fine-tuning protocols are raising the loss, and hence behaviourally deleting the capability of fine-tuning. The generation scores are shown in Fig. 32. This again reinforces that most fine-tuning protocols are removing the capability behaviourally, as the generation scores (while noisy) drop to close to 0.\nFig. 33 shows the loss during reFT for all the fine-tuned models, as well as the control model pre-trained without stories with Twists, and Fig. 34 shows the generation scores. Both of these results show that the fine-tuned models learn the new capability in a much more sample-efficient way, and in fact converge to a lower loss on this dataset than the control pre-trained model.\nProbing In addition to the reFT results, we perform probing experiments. The probing accuracy for the Twist feature across layers for the fine-tuned models and the two control pre-trained models is shown in Fig. 11, which we reproduce here in Fig. 35 for completeness. These results show that a small amount of information about story classification has been removed from the activations of the fine-tuned models compared to the model pre-trained with Twist stories, but the reduction is very minor, as shown in comparison to the information present in the model pre-trained without Twist stories.\nFig. 36, Fig. 37, and Fig. 38 show similar plots for several other story features. Some of these are easier or harder for probes to classify, but the important result is that the difference in probe accuracy between the fine-tuned models and both pre-trained control models is negligible for all of these features, showing that the results in Fig. 35 are due to the Twist feature, i.e., the feature that we trained the model to delete. Figure 33: reFT easily recovers deleted capabilities. We plot loss on data with the Twist for reFT of various models fine-tuned to delete the capability, as well as a control model which was pre-trained without data with Twists. The fine-tuned models learn the capability more sample-efficiently, and additionally converge to a lower loss than the control model. " }, { "figure_ref": [], "heading": "Not in Pretraining Present in Pretraining", "publication_ref": [], "table_ref": [], "text": "Figure 35: Probing the presence of capabilities in TinyStories Models. 
We plot probe accuracy of classifying whether a story contains a Twist or not wrt. the layer of the Transformer model (similarly to Fig. 8). Accuracy on models pre-trained with or without Twist data (Present/Not in Pretraining respectively) act as upper and lower bounds on the expected accuracy of the probes, and are plotted on both LR figures for ease of comparison, although they do not use a fine-tuning learning rate. We find that regardless of fine-tuning protocol (Filtering, Filtering + Randomisation, Filtering + Mix & Match), for the lower LR no fine-tuning protocol removes a meaningful amount of information from the activations, and a similar but less strong trend holds for the higher LR, implying that the pre-trained model retains its capability of story identification (a necessary capability for story modelling) throughout fine-tuning. Identical to Fig. 11 0 -----------a---------b------ \nCTr = 0 ----b-a-bb---a-b------------a---------b------ CTr = 1 ----b-a-bb---a-b------------a---------b------ CTr = 0 ----b-a-bb---a-b------------a---------b------CTr" }, { "figure_ref": [ "fig_23", "fig_2" ], "heading": "G.2 COUNTER RESULTS", "publication_ref": [], "table_ref": [], "text": "A detailed analysis of the attention maps of block-1 and 2 for different learning rates is shown in Fig. 45. We further validate our results for three different input datapoints in Fig. 41 But using η V S , preserves the sorting capability (c). Thus on using η V S , the model learns to read a different stream of output, while preserving the sorting capability. We present an evidence further, showing that capability of the Tracr compiled model to count a's is still present in the model in Fig. 46, 47, where Fig. 46 presents a closer look of the Fig. 47. As can be seen in Fig. 46, on using η S and η M , Block-1 activation map of the Tracr fine-tuned model shows neurons corresponding to token a being activated in a different output channel.\nFinally, we present evidence of the wrapper being learned by the model on fine-tuning using spuriously correlated dataset. We show that this wrapper can be localized in a very few neurons of the model. As shown in Fig. 55, we present this evidence for different values of C Tr in Fig. 56. Similar to the analysis presented for PCFG where we prune multiple neurons, we analyze the effect of pruning of mutliple weights and neurons in Fig. 57 and Fig. 58 respectively. These results verify that the wrapper learned by the Tracr compiled model on fine-tuning using spuriously correlated dataset can indeed be localized to a few weights of the Tracr model. To ensure that the gains achieved on pruning are indeed because of removal of the wrapper, we present the histograms showing the distribution of the predicted classes in Fig. 59, where it can be observed that after pruning the model still predicts multiple classes." }, { "figure_ref": [ "fig_28", "fig_25", "fig_24", "fig_26" ], "heading": "G.3 MAX IDENTIFIER RESULTS", "publication_ref": [], "table_ref": [], "text": "In this section, we provide additional evidence and a detailed analysis of the performance of the Tracr compiled model on the max identifier task. We show that the model implements the sorting pattern in the activation map of its first block in Fig. 53 and Fig- 54. We present validation of our observations on considering the spurious correlation as the difference between the fifth and seventh maximmum element being three in Fig. 51. We validate our results for three different input data-points in Fig. 
50.\nA detailed visualization of the attention maps in Block-0 and Block-2 for different learning rates is shown in Fig. 52. Preprint\n----b-a-bb---a-b------------a---------b------ ---- b - a - b b --- a - b ------------ a --------- b ------ Block-1 ----b-a-bb---a-b------------a---------b------ Block-2 ----b-a-bb---a-b------------a---------b------ Block-1 ----b-a-bb---a-b------------a---------b------ Block-2 ----b-a-bb---a-b------------a---------b------ ---- b - a - b b --- a - b ------------ a --------- b ------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ---- b - a - b b --- a - b ------------ a --------- b ------ Attending Token ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ---- b - a - b b --- a - b ------------ a --------- b ------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ---- b - a - b b --- a - b ------------ a --------- b ------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------0\nk e hwmg u a t d############### k e h w m g u a t d # # # # # # # # # # # # # # # C Tr = 0 k e hwmg u a t d############### C Tr = 1 k e hwmg u a t d############### C Tr = 0 k e hwmg u a t d############### C Tr = 1 l i t r b s u g p################ l i t r b s u g p # # # # # # # # # # # # # # # #\nAttending Token This wrapper is learned on using η M and η S and can be localized in a few weights of the model. \nl i t r b s u g p################ l i t r b s u g p################ l i t r b s u g p################ v c t g l y r h e u############### v c t g l y r h e u # # # # # # # # # # # # # # # v c t g l y r h e u############### v c t g l y r h e u############### v c t g l y r h e u############### 0.0 0.1 0.2 0.3 0.4 0.5 (a) η VS (b) η M Token Attended" }, { "figure_ref": [], "heading": "H ADDITIONAL PCFG RESULTS", "publication_ref": [], "table_ref": [ "tab_11", "tab_12", "tab_13", "tab_12" ], "text": "In this section, we provide a detailed analysis of the PCFG results on the counter task. More specifically, we analyze the effect of the presence of weakly and strongly relevant capability in the model across three different parameters: training iterations (n iters ), fraction of fine-tuning data samples with spurious correlation (C Tr ) and the probability of sampling operand token (O) to be a during pre-training (P T ). P T essentially determines whether the capability present in the pre-trained model is strongly relevant or weakly relevant for the fine-tuning task. Additionally we also analyze the effect of using the spuriously correlated (C Te = 1) and uncorrelated test set (C Te = 0) for evaluation of the fine-tuned model. We present the summary of the results in Tables 5,6 for the counting task and Tables 7,6 for the index of occurrence tasks. Then we discuss the effect of learning rate on fine-tuning pre-trained models with weakly and strongly relevant capabilities in Fig. 60, 61. We observe that the observations are consistent for the considered counting and index of occurrence tasks. 
Next we analyze the effect of the presence of weakly and strongly relevant capability in the pre-trained model for different fractions of spuriously correlated data-points (C Tr ) and different pre-training iterations (n iters ) in Fig. 62, 64, 66 for the counting element task and Fig. 63, 65, 67 for the index of occurrence task. We observe that the observations are fairly consistent across both the tasks and different values of n iters . Next we present the effect of the spuriously correlated data and presence of weakly and strongly correlated capabilities on the learning of the wrapper in Fig. 68, 69 on using uncorrelated test set for evaluation on counting and index of occurrence tasks respectively. Similar analysis on using test set with spuriously correlated samples is present in Fig. 70 and71. We present the capability revival analysis on the Counter task for n iters = 200K and n iters = 50K pre-trained models for weakly and strongly relevant capability fine-tuned models in Fig. 76 and Fig. 77 respectively. A similar analysis for different number of pre-training iterations is present in Fig. 78. The convergence time for learning the fine-tuning task in the absence of strongly relevant capability is higher as compared to when the strongly relevant is present in the model. The time further increases if spurious correlations are present in the fine-tuning set. However, in the presence of spurious correlations, the convergence time to learn the spurious correlation is small and is possible even on using the learning rate η S . Using η S is unable to yield learning of the fine-tuning task if a weakly relevant capability is present in the model. " }, { "figure_ref": [], "heading": "ACKNOWLEDGEMENTS", "publication_ref": [], "table_ref": [], "text": "ESL thanks Eric Bigelow, Nikhil Vyas, and Usman Anwar for relevant discussions early in the project. SJ was partially supported by BERI; ESL was partially supported via NSF under award CNS-2008151. RK was supported by the Foundation AI CDT at UCL." }, { "figure_ref": [], "heading": "AUTHORS' CONTRIBUTIONS", "publication_ref": [], "table_ref": [], "text": "ESL conceived the project direction and developed a set of hypotheses on the limitations of finetuning, with inputs from RK. SJ and ESL co-designed a draft of the PCFG and Tracr setups, and came up with pruning and reverse fine-tuning analysis which led to validation and further refining of the hypotheses. SJ led the experimental execution and made the tasks considered in the paper precise in collaboration with ESL. RK proposed and ran the TinyStories experiments with inputs from ESL, SJ, EG and TR. Literature review and writing of the main paper was led by ESL. SJ led writing of the appendix. ESL, SJ, RK, and HT collaborated on design of all figures and plots. DSK acted as the primary senior advisor on the paper, with inputs from RPD, HT, EG, and TR as well." }, { "figure_ref": [], "heading": "G ADDITIONAL TRACR RESULTS", "publication_ref": [], "table_ref": [], "text": "In this section, we present additional results on the counting and max-identifier tasks with Tracr models. These results provide an extensive analysis and support our claims presented in the main paper. Firstly, we present the detailed attention maps showing the full input sequence data: Fig. 40 and Fig. 39 show the attention maps corresponding to Fig. 42 and Fig. 43. We now present the detailed results on Tracr's Counter tasks. 
However, on using a small learning rate (η S ) of 10 -3 , in the absence of correlations, the model is not able to learn to attend to b's in its attention map. Thus the model is not able to learn the capability of counting b's. As shown in Tab. 3, in the presence of spurious correlations however, the model is able to learn the spurious correlation and achieve high accuracy on the correlated test set. We also present the visualization of the attention maps after reverse fine-tuning in Fig. 43, where it is observed that on using η M for fine-tuning, revival of capability is possible even on using a very small learning rate (η vs ) of 10 -4 . We present detailed results on reverse fine-tuning in Tab. 4 Whereas, in case the model is fine-tuned with a large learning rate (η L ) of 10 -1 , revival of capability is not possible. We also present analysis of single weight pruning and grafting in Fig. 55. On pruning off a single weight from the Tracr model compiled to count a's, the model can achieve a boost in accuracy of over 60% on the task of counting a's. This observation is evident only when the Tracr model is fine-tuned on correlated dataset. Summary of results on the max-identifer task. We present a visualization of the attention maps for the max identifier task in Fig. 44, where we observe that Tracr model implements the sorting and the reading functions in the attention maps in blocks 0 and 2 respectively. On fine-tuning the model using different learning rates, the sorting capability implemented in Block-0, gets distorted, thereby resulting in poor fine-tuning performance (as evident in Tab. 3). However using η V S (10 -4 ), changes the reading function, without disturbing the sorting function. Thus the model is able to perform well on the downstream task (as evident in Tab. 3).\n- ---b-a-bb---a-b------------a---------b----- \nInput Token Capability revival is possible on using η M for fine-tuning but not on using η L ." } ]
Fine-tuning large pre-trained models has become the de facto strategy for developing both task-specific and general-purpose machine learning systems, including developing models that are safe to deploy. Despite its clear importance, there has been minimal work that explains how fine-tuning alters the underlying capabilities learned by a model during pretraining: does fine-tuning yield entirely novel capabilities or does it just modulate existing ones? We address this question empirically in synthetic, controlled settings where we can use mechanistic interpretability tools (e.g., network pruning and probing) to understand how the model's underlying capabilities are changing. We perform an extensive analysis of the effects of fine-tuning in these settings, and show that: (i) fine-tuning rarely alters the underlying model capabilities; (ii) a minimal transformation, which we call a 'wrapper', is typically learned on top of the underlying model capabilities, creating the illusion that they have been modified; and (iii) further fine-tuning on a task where such hidden capabilities are relevant leads to sample-efficient \"revival\" of the capability, i.e., the model begins reusing this capability after only a few gradient steps. This indicates that practitioners can unintentionally remove a model's safety wrapper merely by fine-tuning it on a downstream task that is, e.g., superficially unrelated to the original one. We additionally perform analysis on language models trained on the TinyStories dataset to support our claims in a more realistic setup.
MECHANISTICALLY ANALYZING THE EFFECTS OF FINE-TUNING ON PROCEDURALLY DEFINED TASKS
[ { "figure_caption": "Figure6: Impact of sampling prior on the pretraining task's accuracy as fine-tuning is performed. We plot accuracy on the pretraining task w.r.t. fine-tuning iterations. When the sampling prior of the O FT is low during pre-training, the pretraining task accuracy quickly plummets, especially if the spurious correlation is high; having a high sampling prior mitigates this behavior. This indicates pretraining capabilities are affected the most when they are weakly relevant.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure9: Reverse Fine-Tuning: We set C Te to be 0 to test if the model performs well regardless of a spurious correlation. Models are fine-tuned for 10K iterations. We observe that when a strongly relevant capability is present (a, b), the model very quickly (0.1-1K iterations) starts to perform well on the task via reFT , even if behavior relevant to the capability ceased during pretraining (e.g., when C Tr is 1). Meanwhile, when the model possesses a weakly relevant capability (c), this \"revival\" is slightly slower (3K iterations). In contrast, the Scr. + FT baseline only reaches perfect accuracy at 4.5K iterations and when using a larger learning rate, i.e., η M .", "figure_data": "", "figure_id": "fig_2", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :Figure 11 :1011Figure 10: Visualizing attention maps of finetuned Tracr models. Leftmost panel shows the Tracr compiled model's attention map on the counter task. Upon fine-tuning under different spurious correlations, we see the model continues to pay attention to the pretraining target O PT = a. Only when a large enough learning rate and zero spurious correlation is used, is there a change in the attention pattern. Attention map visualizations further corroborate the wrappers hypothesis. As we noted before, the results discussed above remain consistent across other experimental setups for both Tracr and PCFG models. However, by construction, Tracr yields particularly interpretable attention maps, allowing us to directly visualize the effects of finetuning. We thus analyze the attention maps of a Tracr model on the Counter task described in Sec. 4. Results are shown in Fig. 10. The original Tracr compiled model serves as a baseline and clearly demonstrates that all tokens only attend the pretraining target token, O PT = a. Upon fine-tuning to count O FT = b, we find the model clearly continues to pay attention to O PT if a small learning rate is", "figure_data": "", "figure_id": "fig_3", "figure_label": "1011", "figure_type": "figure" }, { "figure_caption": "Counter: Compile the capability to count the number of occurrences of a token O PT in a string into the model; fine-tune to count occurrences of another token O FT . If r(x, O) denotes the number of occurrences of a token O in a string x, the spurious correlation is defined by enforcing a constant difference in token occurrences, i.e., r(x, O FT )r(x, O PT ) = q. See also Alg. 1 and Fig.12.• Max-identifier: Compile the capability to identify the O PT -th largest element in a string; fine-tune to identify the O FT -th largest element. If r(x, O) reads out the O-th largest token in the string x, we define the spurious correlation as r(x, O FT )r(x, O PT ) = q; e.g., if q = 1 and the O PT largest token in the string x is a, then the O FT -th largest token will be b (which is equal to a + 1 in Tracr's vocabulary). See also Alg. 
2 and Fig.13.The fine-tuning data is generated by randomly sampling tokens from a uniform distribution over the input vocabulary. For the Counter task, the input vocabulary consists of first nine letters from the English alphabet. For the max element task, the input vocabulary consists of all the letters in the English alphabet. We sample with replacement for the Counter task and without replacement for the max element task (to avoid having multiple max elements). Examples for the task are shown in Figs. 12, 13. Algorithm 1: Pseudocode for compiling the Counter capability via Tracr: Rasp code used to generate the model for the Counter capability and task via Tracr def countA(): # binzarize the tokens into 0's and 1's bin = (rasp.tokens=='a') # Select the indices of tokens with value of 1 bin_idx = rasp.Select(bin, rasp.indices, rasp.Comparison.EQ) # Count the number of selected indices count_a = rasp.SelectorWidth(bin_idx) # Generate an identity map idx_select = rasp.Select(rasp.indices, rasp.indices, rasp.Comparison.EQ) # Output the count sum = rasp.Aggregate(idx_select, count_a) Task: Count b Sample: $", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 12 :Figure 13 :1213Figure 12: Exemplar for Counter Task: A sample used for fine-tuning Tracr compiled models on counting 'b'.", "figure_data": "", "figure_id": "fig_5", "figure_label": "1213", "figure_type": "figure" }, { "figure_caption": ", e; g → f, e, d; h → e, d, f; h → d, e, f; i → e, f, d; i → f, d, e; d → c, a; d → a, b, c; e → c, b; e → c, a, b; f → c, b, a; f → b, a; ", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure14: PCFG setup: Grammar rules considered to generate the PCFG dataset. The highlighted token represents the parent token. These rules have been adapted fromAllen-Zhu & Li (2023c).", "figure_data": "", "figure_id": "fig_7", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15: PCFG Exemplar. A representative sample from the PCFG dataset(Allen-Zhu & Li, 2023c) ", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 16: Distribution of the class labels for Counting (first row) and Index of occurrence tasks (second row). (a) shows the distribution for the operand token a and (b) shows the same for the operand token b. The data is similarly distributed across different classes and the distribution shift for the two operands and the different values of C Tr is small.", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "999, P T (b) = 0.001, P T (c) = 0.0; • P T (a) = 0.99, P T (b) = 0.01, P T (c) = 0.0; • P T (a) = 0.9, P T (b) = 0.1, P T (c) = 0.0; • P T (a) = 0.7, P T (b) = 0.2, P T (c) = 0.1; and • P T (a) = 0.5, P T (b) = 0.3, P T (c) = 0.2.For each of the configurations of sampling distributions of operands, we pre-train the model for 10K, 50K, 100K and 200K iterations. 
The model is trained in an online fashion to model the standard language model training pipeline, i.e., data is sampled on the fly from the data generating process during training time.", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "def prune(): # Forward prop the model on pre-training task out = f θ ( OPT • X) # Calculate the loss L = CE(out, y) # Calculate the gradients grad = ∇ θ L # Calculate the dot product between model weights and gradients dotproduct = θ.grad # Select the indices of top K values indices = TopK (dotproduct) # Prune off the neurons/weights present in top K indices θ[indices] = 0 return θ", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 18 :Figure 19 :Figure 20 :Figure 21 :Figure 23 :1819202123Figure 18: Effect of different sampling probabilities of pre-training target token O PT on finetuning task's performance. We observe similar gains for different values of sampling probabilities of O PT during fine-tuning.", "figure_data": "", "figure_id": "fig_12", "figure_label": "1819202123", "figure_type": "figure" }, { "figure_caption": "Figure 25 :25Figure 25: Probing analysis for the setup used to understand jail-breaking. Similar results on using the fine-tuning token or the jailbreaking token for training the probe indicate that the pre-training capabilities are not removed on fine-tuning.", "figure_data": "", "figure_id": "fig_14", "figure_label": "25", "figure_type": "figure" }, { "figure_caption": "Figure 27 :27Figure27: Reverse Fine-Tuning on Tracr: We set C Te to be 0 to test if the model performs well regardless of a spurious correlation. We observe that the fine-tuned model upon reFT very quickly starts starts to perform well on the pretraining task. Moreover, the protocol works even if an extremely small learing rate is used. In contrast, the Scr. + FT baseline only reaches a large learning rate η M is used, and does so less sample efficiently. We note that the results for η M learning rate look worse than the η S learning rate around 10 3 iterations because η M is too big of a learning rate, forcing the model to essentially go through a \"retraining\" phase.", "figure_data": "", "figure_id": "fig_16", "figure_label": "27", "figure_type": "figure" }, { "figure_caption": "Figure 28 :Figure 29 :Figure 30 :282930Figure 28: Reverse fine-tuning a model fine-tuned to remove its pretraining capability. See text in Sec. E.4 for details.", "figure_data": "", "figure_id": "fig_17", "figure_label": "282930", "figure_type": "figure" }, { "figure_caption": "Figure 31 :Figure 32 :3132Figure31: Larger learning rates lead to more pronounced loss of modelling capability. The plots show loss on data with the Twist feature present while fine-tuning to delete the capability to model text with the Twist feature, for different learning rates and fine-tuning protocols.", "figure_data": "", "figure_id": "fig_18", "figure_label": "3132", "figure_type": "figure" }, { "figure_caption": "Figure 37 :Figure 38 :3738Figure 37: Probing the presence of capabilities in TinyStories Models. We plot probe accuracy of classifying whether a story contains the Moral Value feature or not wrt. the layer of the Transformer model. All other details the same as Fig. 35", "figure_data": "", "figure_id": "fig_19", "figure_label": "3738", "figure_type": "figure" }, { "figure_caption": "Figure 42 :42Figure 42: New capabilities are not learned on using η S for fine-tuning. 
(a) The counting a's capability implemented by the originally compiled Tracr model is to attend to a's. On using η S (10 -3 ) for fine-tuning, the compiled model is not able to learn the capability of counting b's (b). Increasing the learning rate makes the model learn to attend b's (c), but the pretraining capability of attending to a's still exists.", "figure_data": "", "figure_id": "fig_20", "figure_label": "42", "figure_type": "figure" }, { "figure_caption": "Figure 43 :43Figure43: Capability Revival Analysis: Using η V S (a) is able to recover the old capability on reverse fine-tuning the model fine-tuned with η M . But η S is not able to recover the original capability, when the compiled model is fine-tuned with η l . This is because using a large value of learning rate during=fine-tuning hampers the pre-training capabilities.", "figure_data": "", "figure_id": "fig_21", "figure_label": "43", "figure_type": "figure" }, { "figure_caption": "Figure 44 :44Figure 44: Learning of the fine-tuning capability is affected by type of compiled capability present in the model. (a) The Tracr program implements the sorting function in Block-0 and read function in Block-2. Using η M and η S can destroy the sorting capability present in Block-1 (b).But using η V S , preserves the sorting capability (c). Thus on using η V S , the model learns to read a different stream of output, while preserving the sorting capability.", "figure_data": "", "figure_id": "fig_22", "figure_label": "44", "figure_type": "figure" }, { "figure_caption": "Figure 45 :45Figure 45: Counter Task: Visualization of the attention maps of the first and second blocks of the Tracr fine-tuned models. (a) shows the analysis when the spurious correlation is not present in the fine-tuning datatset, whereas in case of (b) the spurious correlation is present in the fine-tuning dataset. The first row shows the maps for the Tracr compiled model and other rows shows the analysis for different learning rates. Observation: (a) Using η S or η V S the model is not able to learn to attend b's and thus the fine-tuning task performance is poor. Whereas using η M the model is able to learn to attend to b's, however the capability to count a's is likely still present since the model still attends to a's. Further increasing the learning rate leads to distortion of the compiled capabilities, and thus model learns the fine-tuning task by learning a different capability. (b) In the presence of spurious correlation, even for large learning rate the compiled capability is still present, since the model attends to a's.", "figure_data": "", "figure_id": "fig_23", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "Figure 50 :50Figure 50: Max Identifier Task: Validation of Tracr observations on max identifier task on three different input samples. The rows represents different input samples. Observation: Using η V S preserves the original Tracr capabilities and therefore performs well on the fine-tuning task, whereas using η M distorts the compiled capabilities resulting in poor performance on the fine-tuning task.", "figure_data": "", "figure_id": "fig_24", "figure_label": "50", "figure_type": "figure" }, { "figure_caption": "Figure 51 :51Figure 51: Max Identifier Task: Validation of Tracr observations on max identifier task with the spurious correlation defined as the difference between the indices of fifth and seventh maximum elements being three. The rows represents different input samples. 
Observation: Using η V S preserves the original Tracr capabilities and therefore performs well on the fine-tuning task, whereas using η M distorts the compiled capabilities resulting in poor performance on the fine-tuning task.", "figure_data": "", "figure_id": "fig_25", "figure_label": "51", "figure_type": "figure" }, { "figure_caption": "Figure 52 :52Figure52: Max Identifier Task: Visualization of the attention maps of the zeroth and second blocks of the Tracr fine-tuned models on the max identifier task. (a) shows the analysis when the spurious correlation is not present in the fine-tuning datatset, whereas in case of (b) the spurious correlation is present in the fine-tuning dataset. The first row shows the maps for the Tracr compiled model and other rows shows the analysis for different learning rates. Observation: Using η L , η M or η S for fine-tuning distorts the capability of the programmed Tracr model in the Block-0 and as a result the Block-2 attention map is not able to attend to the desired output token. Whereas using η V S is able to preserve the capability and as a result the fine-tuned model is able to attend to the correct token in the attention map in Block-2.", "figure_data": "", "figure_id": "fig_26", "figure_label": "52", "figure_type": "figure" }, { "figure_caption": "Figure 53 :53Figure 53: Max Identifier Task: Visualization of the activated output of the first MLP layer in first and second blocks for the max identifier task. The visualization is shown only for channel numbers 50-70. Observation: Using η V S for fine-tuning, which enables the model to learn the fine-tuning task, preserves the Tracr compiled model's compiled capability of sorting tokens in Block-1. Whereas other learning rates are not able to preserve this capability.", "figure_data": "", "figure_id": "fig_28", "figure_label": "53", "figure_type": "figure" }, { "figure_caption": "Figure 54 :Figure 55 :5455Figure 54: Max Identifier Task: Visualization of the activated output of the first MLP layer in first and second blocks. This is the complete visualization of the activation map presented in Fig. 53.", "figure_data": "", "figure_id": "fig_30", "figure_label": "5455", "figure_type": "figure" }, { "figure_caption": "Figure 56 :Figure 57 :Figure 58 :Figure 59 :56575859Figure 56: Counter Task: Pruning evaluation on Tracr model fine-tuned to count b's. Observation: Observations are consistent with Fig-7.", "figure_data": "", "figure_id": "fig_31", "figure_label": "56575859", "figure_type": "figure" }, { "figure_caption": "Figure 60 :Figure 61 :Figure 62 :606162Figure 60: Counter Task, n iters = 200K, C Te = 0: Effect of learning rate (LR) on fine-tuning pre-trained models with weakly and strongly relevant capabilities and using different values of C Tr for fine-tuning. Observation: In the presence of strongly relevant capability, training with η S yields good performance on the fine-tuning dataset. The convergence time to learn the fine-tuning task increases with an increase in C Tr .", "figure_data": "", "figure_id": "fig_32", "figure_label": "606162", "figure_type": "figure" }, { "figure_caption": "Figure 63 :Figure 64 :Figure 65 :Figure 66 :Figure 67 :Figure 68 :Figure 69 :Figure 70 :Figure 71 :Figure 72 :Figure 73 :Figure 74 :Figure 75 :Figure 76 :Figure 77 :Figure 78 :Figure 79 :Figure 80 :636465666768697071727374757677787980Figure 63: Index of Occurrence Task, n iters = 200K: The settings are consistent with Fig. 
62.", "figure_data": "", "figure_id": "fig_33", "figure_label": "636465666768697071727374757677787980", "figure_type": "figure" }, { "figure_caption": "Figure 81 :Figure 82 :Figure 83 :Figure 84 :Figure 85 :Figure 86 :Figure 87 :Figure 88 :Figure 89 :Figure 90 :Figure 91 :Figure 92 :Figure 93 :Figure 94 :Figure 95 :Figure 96 :Figure 97 :Figure 98 :818283848586878889909192939495969798Figure 81: Index of Occurrence task, P T (a) = 0.999, Pruning Analysis: The settings are consistent with Fig. 7", "figure_data": "", "figure_id": "fig_34", "figure_label": "818283848586878889909192939495969798", "figure_type": "figure" }, { "figure_caption": "Counter: We intentionally reuse this task to demonstrate the effects of compilation of the capability via Tracr versus learning the capability via PCFGs. Instead of being compiled, the model is trained to count the number of tokens from a set of tokens {O PT }. The model is then fine-tuned to exclusively count a O FT ∈ {O PT } token. By making the sampling probability of O FT tokens high during pretraining, we can make the model preemptively performant on the downstream task; this allows us to model the notion of capability relevance.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": ", in the presence of spurious correlations, a linear probe can retrieve the count of the token O PT , indicating intermediate outputs relevant to the pretraining capability are still being produced by the fine-tuned model. This observation is particularly evident when a smaller learning rate is used, which is common in practice. Overall, these results show that when a weakly relevant capability is present in the pretrained model, a wrapper, i.e., a localized transformation of the pretraining capability, is learned during fine-tuning.", "figure_data": "PreprintPT100C Tr = 0 H TC Tr = 1  M T L TC Tr = 0C Tr = 1Acc. O30 65FT100Acc. O30 65B0 B1 B2 B3 B4 B5 B6B0 B1 B2 B3 B4 B5 B6B0 B1 B2 B3 B4 B5 B6B0 B1 B2 B3 B4 B5 B6B0 B1 B2 B3 B4 B5 B6(a) Pre-trained Model(b) η M(c) η SFigure 8: Probing the presence of pre-training (top) and fine-tuning (bottom) capabilities. Weplot probe accuracy to infer the count of O PT / O FT versus the index of the block in the Transformermodel. C Te is set to 0. The pretrained model (leftmost panels) act as a baseline for the trend ofperformance through the model's blocks. In most scenarios, we find we can infer the count of O PTwith a similar trend as the pretrained model. A drop in performance is observed only when learningrate η M is used with a weakly relevant capability (low sampling prior). This indicates pretrainingcapabilities continues to persist after fine-tuning.", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Notations used in this work.", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "18, 19 and high sampling prior in Figs. 20, 21. Furthermore, we probe these models' intermediate outputs to infer if features relevant to the pretraining capability continue to persist. Results can be seen inFigs. 22, 23. ", "figure_data": "100CTe = 0CTe = 1CTe = 0CTe = 1CTe = 0CTe = 165FT30Acc. 
O65 100300 2.5K 5K 7.5K 10K0 2.5K 5K 7.5K 10K 0 2.5K 5K 7.5K 10K 0 2.5K 5K 7.5K 10K 0 2.5K 5K 7.5K 10K 0 2.5K 5K 7.5K 10K(a) 50% PT + 50% FT (high mixing)(b) 10% PT + 90% FT (medium mixing)(c) 0.1% PT + 99.9% FT (low mixing)", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Figure34: reFT easily recovers deleted generative capabilities. We plot the generation scores for the Twist feature for reFT of various models fine-tuned to delete the capability, as well as a control model which was pre-trained without data with Twists. The fine-tuned models learn the capability much more sample-efficiently, and additinoally converge to a lower loss, than the control model.B 1 B 2 B 3 B 4 B 5 B 6 B 7 B 8 B 9 B 10 B 0 B 1 B 2 B 3 B 4 B 5 B 6 B 7 B 8 B 9 B 10", "figure_data": "Generation Score10 0 0.25 0.50 0.75 Filtering + Randomisation 10 1 10 2 LR: S Filtering + Mix & Match10 310 0 Filtering10 110 2 LR: M Not in Pretraining 10 3Training IterationLR: M Filtering Filtering + Mix & Match Filtering + RandomisationLR: SProbe Accuracy0.85 0.90 0.95Probe Layer Index", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "B 1 B 2 B 3 B 4 B 5 B 6 B 7 B 8 B 9 B 10 Probing the presence of capabilities in TinyStories Models. We plot probe accuracy of classifying whether a story contains the Foreshadowing feature or not wrt. the layer of the Transformer model. All other details the same as Fig.35 B0 B 1 B 2 B 3 B 4 B 5 B 6 B 7 B 8 B 9 B 10 B 1 B 2 B 3 B 4 B 5 B 6 B 7 B 8 B 9 B 10", "figure_data": "LR: M Filtering Filtering + Mix & Match Filtering + RandomisationLR: S Not in Pretraining Present in PretrainingProbe Accuracy.85 0.90 0.95Probe Layer IndexFigure 36: 0.85 0.90 0.95 Probe Accuracy 1.00LR: M Filtering Filtering + Mix & Match Filtering + RandomisationLR: S Not in Pretraining Present in PretrainingProbe Layer Index", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Results on counting and max element task Tracr setups. The evaluation is done on test sets with and without the spurious correlation. The Tracr compiled models are fine-tuned for different learning rates and different value of C Tr . Acc. O PT Acc. O FT Acc. O PT Acc. O FT Acc. O PT Acc. O FT Acc. O PT Acc. O FT", "figure_data": "Counting ElementMax Identifierη C TrC Te = 1C Te = 0C Te = 1C Te = 000.0100.00.0100.020.334.50.099.30.2 0.0100.00.0100.00.592.00.097.10.5 0.0100.00.0100.00.697.00.197.610 -10.6 0.0100.00.0100.00.398.50.096.70.8 0.0100.00.0100.00.199.40.098.60.9 0.0100.00.0100.00.798.20.192.510.0100.035.80.70.399.616.837.801.196.30.098.829.70.216.352.60.2 0.0100.00.099.228.418.619.046.00.5 0.699.40.095.94.887.93.392.610 -20.6 0.199.90.098.83.983.22.682.50.8 0.399.60.197.04.588.86.672.90.9 1.498.57.139.316.045.726.911.110.398.34.20.211.178.523.814.4054.61.225.727.26.420.24.528.50.2 50.215.026.524.37.427.45.428.00.5 7.190.919.82.311.324.07.620.810 -30.6 4.194.211.82.211.826.78.420.10.8 1.398.36.70.711.534.38.519.90.9 1.897.89.20.714.632.211.415.814.094.310.32.216.033.212.814.0032.60.010.628.70.582.60.591.10.2 59.20.131.924.40.184.80.691.30.5 28.565.137.85.60.089.30.691.810 -40.6 24.470.335.64.80.089.60.690.80.8 14.184.229.72.10.089.70.689.90.9 1.398.36.70.70.093.20.297.111.698.310.60.20.090.20.788.6", "figure_id": "tab_9", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results on counting task Tracr for reverse fine-tuning with different learning rates. Fine-tuning was done using η M . 
The evaluation is done on test sets with and without the spurious correlation. The Tracr compiled models are fine-tuned for different learning rates and different value of C Tr . Acc. O PT Acc. O FT Acc. O PT Acc. O FT Acc. O PT Acc. O FT Acc. O PT Acc. O FT", "figure_data": "Counting Elementη C TrC Te = 1C Te = 0C Te = 1C Te = 0096.5100.02.10.094.0100.00.10.00.2 98.5100.00.10.094.9100.00.10.00.5 39.4100.043.60.044.9100.05.60.010 -10.6 72.9100.026.70.069.4100.00.20.00.8 49.3100.06.50.037.1100.016.60.00.9 31.3100.064.00.034.1100.01.70.0169.2100.03.70.065.4100.06.30.0063.399.936.60.165.598.60.00.00.2 19.5100.048.80.029.6100.017.50.00.5 14.3100.054.40.028.999.918.70.010 -20.6 86.299.913.80.178.398.60.00.00.8 65.6100.01.70.043.799.510.80.00.9 33.3100.027.90.036.5100.012.60.0199.099.60.90.395.296.80.10.1019.899.934.90.023.798.610.20.00.2 3.917.133.878.024.442.914.73.00.5 2.087.633.29.518.985.711.90.910 -30.6 7.199.835.40.122.397.315.40.10.8 11.697.245.80.327.395.516.50.50.9 24.275.220.323.633.568.813.11.5168.599.926.70.165.398.65.80.0045.299.90.10.116.998.54.90.00.2 30.19.422.745.118.126.217.919.50.5 30.13.222.741.315.726.117.916.210 -40.6 54.10.045.935.90.027.825.711.90.8 26.883.064.510.93.076.849.21.30.9 27.985.061.58.42.180.944.21.1145.299.631.90.342.296.812.60.0", "figure_id": "tab_10", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results on the PCFG counting task with 200K iterations of pre-training.", "figure_data": "Acc O PTAcc O FTη P T (a) C TrAcc PTC Te = 0 1K 10KC Te = 1 1K 10KAcc PTC Te = 0 1K 10KC Te = 1 1K 10K09.79.514.5099.410074.987.50.9990.5 0.81007.2 5.69.7 10.81.3 0.20 027.175.9 6099.9 99.895.9 98.1100 100117.215.20.2001.698.9100099.992.613.667.299.810010092.810 -40.90.5 0.8100100 99.990.7 43.415.6 11.473.5 33.299.999.8 99.499.4 99.299.9 100100 100199.815.916.52.499.69.46.1100098.914.776.644.199.910087.699.20.50.5 0.899.995.1 92.823.6 12.271.2 79.833.4 2.299.999.7 99.9100 99.999.8 98.799.9 99.9149.51960.6025.81699.8100048.910.251.94.639.899.813.979.70.9990.5 0.810019.7 12.111.6 6.612.3 7.71.2 0.227.118.4 6.198.1 85.781.4 98.499.7 99.710.417.5000099.9100010085.394.856.999.899.983.387.310 -50.90.5 0.810099.9 10067.2 34.694.9 94.855.4 21.799.999.9 99.899.9 99.499.3 99.799.8 100198.513.588.60.858.33.699.8100010097.597.565.710010095.695.40.50.5 0.810099.9 99.994.1 87.498.1 93.869 67.799.9100 99.8100 10099.3 100100 100199.641.291.853.390.119.699.810001002996.625.728.551.815.129.20.9990.5 0.810098.7 8321.8 15.188.9 69.710.6 6.827.123.3 18.423.7 8.920.3 26.587.5 99.7171.72.356015.7029.599.9010010095.491.899.899.584.184.610 -60.90.5 0.8100100 99.899.9 99.496 95.994.4 92.699.999.6 99.699.5 99.395.9 94.899.6 99.6199.851.695.163.999.530.594.299.7010099.997.798.199.899.895.495.60.50.5 0.899.9100 10099.9 10097.9 98.893.7 93.199.9100 10099.9 99.998.5 98.399.6 99.9110097.698.685.199.873.998.3100", "figure_id": "tab_11", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Results on the PCFG counting task with 50K iterations of pre-training.", "figure_data": "Acc O PTAcc. O FTη P T (a) C TrAcc. PTC Te = 0 1K 10KC Te = 1 1K 10KAcc. 
PTC Te = 0 1K 10KC Te = 1 1K 10K010.89.120.198.910086.393.60.9990.5 0.899.911.7 5.58.9 112.1 00.1 05.1790.2 64.999.9 10097.9 98.999.8 99.9120.215.90001.999.910009.110.30.70.199.610084.294.410 -40.90.5 0.899.911.4 4.210.6 9.21.5 0.10 015.893.2 63.199.9 10097.9 97.8100 100118.416.4000.5510099.9087.710.162.6099.910089.393.50.50.5 0.899.890.1 59.59.4 10.167.4 29.30 0.199.799.5 99.2100 99.999.9 100100 99.9118.715.25.1017.414.2100100039.316.60.632.699.812.488.70.9990.5 0.899.930.9 6.311.4 10.84.1 10.6 0.15.1712.7 1.499.1 93.993 9999.7 99.812.120.70.200099.899.902810.934.70.139.699.823.488.610 -50.90.5 0.899.933.2 13.18.9 9.34.2 2.90 015.822.7 9.499.6 95.192.8 98.9100 10011.919.80.3000.199.7100099.673.788.14699.999.986.489.20.50.5 0.899.899.6 99.679.3 60.882.7 8057.1 33.699.799.8 99.599.9 99.999.1 99.399.9 100181.712.968.60.546.416.198.8100094.118.681.918.8949.10.424.40.9990.5 0.899.938.8 14.437.2 8.820 4.25.6 0.25.176 0.923.8 7.850.7 74.893.7 99.918.95.82.30.21.3079.299.8099.72194.320.624.456.814.628.410 -60.90.5 0.899.946.9 30.438.6 10.419.7 8.73.6 1.615.811.8 3.628.6 12.339.4 62.892 98.91276.36.70.41.3070.499.5099.899.694.482.799.999.88184.60.50.5 0.899.8100 99.799.5 9791.4 90.180.1 74.699.799.7 99.799.9 99.395.9 96.299.6 99.3199.855.791.26199.43096.399.4", "figure_id": "tab_12", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Results on the PCFG index of occurrence task with 200K< iterations of pre-training.", "figure_data": "Acc. O PTAcc O FTη P T (a) C TrAcc. PTC Te = 0 1K 10KC Te = 1 1K 10KAcc. PTC Te = 0 1K 10KC Te = 1 1K 10K05.80.023.90.071.899.746.199.50.9990.5 0.899.09.0 14.90.0 0.023.0 0.80.0 0.09.366.6 36.899.6 99.181.0 99.5100.0 100.0133.69.50.00.03.75.8100.0100.0096.20.088.60.198.599.998.699.710 -40.90.5 0.899.296.8 96.80.0 0.083.5 84.80.0 0.097.197.3 97.499.5 99.2100.0 100.0100.0 100.0172.61.379.10.037.324.0100.0100.0095.74.596.53.699.599.799.899.80.50.5 0.898.095.2 96.05.6 17.778.2 90.23.5 5.698.999.1 98.899.8 99.4100.0 100.0100.0 100.0191.315.879.814.548.028.8100.0100.0094.62.686.218.617.085.416.068.50.9990.5 0.899.098.4 94.13.4 5.197.6 94.07.7 0.29.314.3 10.779.8 66.627.2 37.697.8 99.7177.627.770.80.04.93.756.6100.0099.288.599.079.397.399.198.999.110 -50.90.5 0.899.299.2 98.891.9 95.699.2 98.977.9 84.397.197.3 96.998.3 98.7100.0 99.9100.0 100.0198.065.498.072.781.331.899.9100.0097.395.8100.089.199.199.999.999.40.50.5 0.898.097.6 97.495.9 95.198.8 97.875.4 80.398.999.2 99.699.3 99.1100.0 100.0100.0 100.0197.487.798.081.097.251.8100.0100.0099.080.499.473.014.623.319.515.50.9990.5 0.899.098.9 99.294.2 67.899.5 98.794.3 61.29.313.7 13.220.2 9.222.0 25.140.5 71.8199.461.698.811.312.94.519.395.6098.899.699.599.697.397.098.098.410 -60.90.5 0.899.299.0 98.798.8 99.299.8 99.799.2 99.097.196.6 97.196.6 96.798.8 99.399.9 100.0198.798.099.495.897.870.799.3100.0097.897.999.899.699.298.999.299.70.50.5 0.898.098.7 97.498.0 97.999.8 99.698.2 98.298.998.9 99.599.0 98.999.8 100.0100.0 100.0198.196.799.996.498.994.599.9100.0", "figure_id": "tab_13", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Results on the PCFG index of occurrence task with 50K iterations of pre-training.", "figure_data": "Acc O PTAcc O FTη P T (a) C TrAcc. PTC Te = 0 1K 10KC Te = 1 1K 10KAcc. 
PTC Te = 0 1K 10KC Te = 1 1K 10K00.50.011.70.177.199.666.199.40.9990.5 0.894.23.2 6.70.0 0.12.0 1.00.0 0.03.260.0 26.798.5 96.892.5 98.7100.0 100.0123.110.90.00.03.85.199.6100.0043.20.034.60.886.199.675.197.510 -40.90.5 0.894.248.2 53.50.0 0.056.2 53.10.1 0.069.981.0 74.098.9 97.097.5 98.899.9 100.0112.811.032.80.04.43.499.8100.0072.12.359.62.695.298.690.299.60.50.5 0.888.665.4 56.62.3 1.570.4 65.50.0 0.091.591.4 88.799.3 97.498.8 99.7100.0 100.0139.012.840.20.06.15.399.9100.005.00.16.47.718.488.910.386.50.9990.5 0.894.274.1 37.41.4 3.842.3 13.20.4 0.73.210.1 6.977.0 45.955.2 86.998.8 98.7148.019.12.70.00.54.495.2100.0089.218.969.626.874.091.166.882.810 -50.90.5 0.894.291.9 92.425.0 30.475.9 75.938.9 32.169.970.8 60.886.9 80.282.0 87.899.2 99.8169.78.873.98.413.84.492.6100.0087.664.077.751.493.195.685.590.90.50.5 0.888.685.9 84.957.5 46.880.0 78.257.0 50.491.590.4 89.295.5 91.693.9 96.8100.0 100.0180.133.577.613.338.65.598.1100.0087.53.773.814.28.323.015.820.70.9990.5 0.894.295.3 94.049.3 31.083.3 71.419.2 7.73.25.4 3.913.7 7.227.5 38.776.9 93.7188.143.050.62.10.31.144.797.5093.688.273.270.769.675.656.966.910 -60.90.5 0.894.294.3 92.792.2 89.873.1 75.776.4 80.369.970.3 65.369.7 63.667.4 71.888.6 94.7195.656.272.872.360.27.871.997.9087.387.080.076.690.393.885.188.30.50.5 0.888.687.7 90.684.5 83.280.4 83.180.8 80.591.593.4 92.391.5 87.990.3 90.197.6 98.6189.077.379.870.292.423.091.999.7", "figure_id": "tab_14", "figure_label": "8", "figure_type": "table" } ]
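The Counter task described in the figure captions above is specified only in prose and pseudocode: fine-tuning strings are sampled uniformly over the first nine letters of the alphabet, and the spurious correlation is enforced by requiring r(x, O_FT) - r(x, O_PT) = q. The following minimal Python sketch illustrates one way such fine-tuning examples could be generated; the sequence length, function name, and rejection-sampling strategy are illustrative assumptions, not the authors' released implementation.

import random

VOCAB = list("abcdefghi")  # first nine letters of the alphabet, as described for the Counter task
LENGTH = 45                # assumed sequence length, for illustration only

def sample_counter_example(o_pt="a", o_ft="b", q=0, c_tr=1.0):
    # With probability c_tr (the spurious-correlation strength), rejection-sample
    # until count(o_ft) - count(o_pt) == q; otherwise leave the counts unconstrained.
    enforce = random.random() < c_tr
    while True:
        tokens = [random.choice(VOCAB) for _ in range(LENGTH)]
        if not enforce or tokens.count(o_ft) - tokens.count(o_pt) == q:
            return "".join(tokens), tokens.count(o_ft)  # label: count of the fine-tuning target

if __name__ == "__main__":
    x, y = sample_counter_example(c_tr=1.0, q=0)
    print(x, y)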
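Similarly, the prune() pseudocode quoted in the figure data scores each parameter by the product of its weight and its gradient on the pretraining task and zeroes out the top-K entries. Below is a runnable PyTorch sketch of that criterion; the model, batch, and value of k are placeholders, and scoring all parameters in one flattened vector (where ties may prune slightly more than k entries) is a simplification made in this sketch rather than a detail taken from the paper.

import torch
import torch.nn.functional as F

def prune_topk_by_weight_grad(model, inputs, targets, k):
    # Loss and gradients on a batch from the pretraining task.
    model.zero_grad()
    loss = F.cross_entropy(model(inputs), targets)
    loss.backward()

    # Score every parameter by its weight-gradient product, as in the pseudocode.
    scores = torch.cat([(p.detach() * p.grad).flatten()
                        for p in model.parameters() if p.grad is not None])
    threshold = torch.topk(scores, k).values.min()

    # Zero out the highest-scoring entries in place.
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p[(p.detach() * p.grad) >= threshold] = 0.0
    return model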
Samyak Jain; Robert Kirk; Ekdeep Singh Lubana; Robert P Dick; Hidenori Tanaka; Edward Grefenstette; Tim Rocktäschel; David Krueger
[ { "authors": "Armen Aghajanyan; Luke Zettlemoyer; Sonal Gupta", "journal": "", "ref_id": "b0", "title": "Intrinsic dimensionality explains the effectiveness of language model fine-tuning", "year": "2020" }, { "authors": "Anthony Michael Ahn; Noah Brohan; Yevgen Brown; Omar Chebotar; Byron Cortes; Chelsea David; Chuyuan Finn; Keerthana Fu; Karol Gopalakrishnan; Hausman", "journal": "", "ref_id": "b1", "title": "Do as i can, not as i say: Grounding language in robotic affordances", "year": "2022" }, { "authors": "Zeyuan Allen; -Zhu ; Yuanzhi Li", "journal": "", "ref_id": "b2", "title": "Physics of language models: Part 3.1, knowledge storage and extraction", "year": "2023" }, { "authors": "Zeyuan Allen; -Zhu ; Yuanzhi Li", "journal": "", "ref_id": "b3", "title": "Physics of language models: Part 3.2, knowledge manipulation", "year": "2023" }, { "authors": "Zeyuan Allen; -Zhu ; Yuanzhi Li", "journal": "", "ref_id": "b4", "title": "Physics of language models: Part 1, context-free grammar", "year": "2023" }, { "authors": " Anonymous", "journal": "", "ref_id": "b5", "title": "Learning and forgetting unsafe examples in large language models", "year": "2023" }, { "authors": "Amanda Askell; Yuntao Bai; Anna Chen; Dawn Drain; Deep Ganguli; Tom Henighan; Andy Jones; Nicholas Joseph; Ben Mann; Nova Dassarma", "journal": "", "ref_id": "b6", "title": "A general language assistant as a laboratory for alignment", "year": "2021" }, { "authors": "Bing Bai; Jian Liang; Guanhua Zhang; Hao Li; Kun Bai; Fei Wang", "journal": "", "ref_id": "b7", "title": "Why attentions may not be interpretable", "year": "2021" }, { "authors": "Yuntao Bai; Andy Jones; Kamal Ndousse; Amanda Askell; Anna Chen; Nova Dassarma; Dawn Drain; Stanislav Fort; Deep Ganguli; Tom Henighan", "journal": "", "ref_id": "b8", "title": "Training a helpful and harmless assistant with reinforcement learning from human feedback", "year": "2022" }, { "authors": "Yonatan Belinkov", "journal": "Computational Linguistics", "ref_id": "b9", "title": "Probing classifiers: Promises, shortcomings, and advances", "year": "2022" }, { "authors": "Yoshua Bengio; Tristan Deleu; Nasim Rahaman; Rosemary Ke; Sébastien Lachapelle; Olexa Bilaniuk; Anirudh Goyal; Christopher Pal", "journal": "", "ref_id": "b10", "title": "A meta-transfer objective for learning to disentangle causal mechanisms", "year": "2019" }, { "authors": "Tolga Bolukbasi; Adam Pearce; Ann Yuan; Andy Coenen; Emily Reif; Fernanda Viégas; Martin Wattenberg", "journal": "", "ref_id": "b11", "title": "An interpretability illusion for bert", "year": "2021" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b12", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Sébastien Bubeck; Varun Chandrasekaran; Ronen Eldan; Johannes Gehrke; Eric Horvitz; Ece Kamar; Peter Lee; Yin Tat Lee; Yuanzhi Li; Scott Lundberg", "journal": "", "ref_id": "b13", "title": "Sparks of artificial general intelligence: Early experiments with gpt-4", "year": "2023" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b14", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Stephanie Chan; Adam Santoro; Andrew Lampinen; Jane Wang; Aaditya Singh; Pierre Richemond; 
James Mcclelland; Felix Hill", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b15", "title": "Data distributional properties drive emergent in-context learning in transformers", "year": "1956" }, { "authors": "Grégoire Delétang; Anian Ruoss; Jordi Grau-Moya; Tim Genewein; Kevin Li; Elliot Wenliang; Chris Catt; Marcus Cundy; Shane Hutter; Joel Legg; Veness", "journal": "", "ref_id": "b16", "title": "Neural networks and the chomsky hierarchy", "year": "2022" }, { "authors": "Gelei Deng; Yi Liu; Yuekang Li; Kailong Wang; Ying Zhang; Zefeng Li; Haoyu Wang; Tianwei Zhang; Yang Liu", "journal": "", "ref_id": "b17", "title": "Jailbreaker: Automated jailbreak across multiple large language model chatbots", "year": "2023" }, { "authors": "Danny Driess; Fei Xia; S M Mehdi; Corey Sajjadi; Aakanksha Lynch; Brian Chowdhery; Ayzaan Ichter; Jonathan Wahid; Quan Tompson; Tianhe Vuong; Yu", "journal": "", "ref_id": "b18", "title": "Palm-e: An embodied multimodal language model", "year": "2023" }, { "authors": "Ronen Eldan; Yuanzhi Li", "journal": "", "ref_id": "b19", "title": "Tinystories: How small can language models be and still speak coherent english", "year": "2023" }, { "authors": "Guhao Feng; Yuntian Gu; Bohang Zhang; Haotian Ye; Di He; Liwei Wang", "journal": "", "ref_id": "b20", "title": "Towards revealing the mystery behind chain of thought: a theoretical perspective", "year": "2023" }, { "authors": "Federica Gerace; Luca Saglietti; Stefano Sarao Mannelli; Andrew Saxe; Lenka Zdeborová", "journal": "Machine Learning: Science and Technology", "ref_id": "b21", "title": "Probing transfer learning with a model of synthetic correlated datasets", "year": "2022" }, { "authors": "Mor Geva; Avi Caciularu; Kevin Ro Wang; Yoav Goldberg", "journal": "", "ref_id": "b22", "title": "Transformer feed-forward layers build predictions by promoting concepts in the vocabulary space", "year": "2022" }, { "authors": "Mor Geva; Jasmijn Bastings; Katja Filippova; Amir Globerson", "journal": "", "ref_id": "b23", "title": "Dissecting recall of factual associations in auto-regressive language models", "year": "2023" }, { "authors": "Amelia Glaese; Nat Mcaleese; Maja Trębacz; John Aslanides; Vlad Firoiu; Timo Ewalds; Maribeth Rauh; Laura Weidinger; Martin Chadwick; Phoebe Thacker", "journal": "", "ref_id": "b24", "title": "Improving alignment of dialogue agents via targeted human judgements", "year": "2022" }, { "authors": "Dongyoung Go; Tomasz Korbak; Germán Kruszewski; Jos Rozen; Nahyeon Ryu; Marc Dymetman", "journal": "", "ref_id": "b25", "title": "Aligning language models with preferences through f-divergence minimization", "year": "2023" }, { "authors": "Almog Gueta; Elad Venezian; Colin Raffel; Noam Slonim; Yoav Katz; Leshem Choshen", "journal": "", "ref_id": "b26", "title": "Knowledge is a region in weight space for fine-tuned language models", "year": "2023" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "PMLR", "ref_id": "b27", "title": "Parameter-efficient transfer learning for nlp", "year": "2019" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b28", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "Sarthak Jain; Byron C Wallace", "journal": "", "ref_id": "b29", "title": "Attention is not explanation", "year": "2019" }, { 
"authors": "Liwei Jiang; Jena D Hwang; Chandra Bhagavatula; Le Ronan; Jenny Bras; Jesse Liang; Keisuke Dodge; Maxwell Sakaguchi; Jon Forbes; Saadia Borchardt; Gabriel", "journal": "", "ref_id": "b30", "title": "Can machines learn morality? the delphi experiment", "year": "2021" }, { "authors": "Jeevesh Juneja; Rachit Bansal; Kyunghyun Cho; João Sedoc; Naomi Saphra", "journal": "", "ref_id": "b31", "title": "Linear connectivity reveals generalization strategies", "year": "2022" }, { "authors": "Andrej Karpathy", "journal": "", "ref_id": "b32", "title": "MinGPT", "year": "2020" }, { "authors": "Suhas Kotha; Jacob Mitchell Springer; Aditi Raghunathan", "journal": "", "ref_id": "b33", "title": "Understanding catastrophic forgetting in language models via implicit inference", "year": "2023" }, { "authors": "Ananya Kumar; Aditi Raghunathan; Robbie Jones; Tengyu Ma; Percy Liang", "journal": "", "ref_id": "b34", "title": "Fine-tuning can distort pretrained features and underperform out-of-distribution", "year": "2022" }, { "authors": "Vivian Lai; Chenhao Tan", "journal": "", "ref_id": "b35", "title": "On human predictions with explanations and predictions of machine learning models: A case study on deception detection", "year": "2019" }, { "authors": "Surya Andrew K Lampinen; Ganguli", "journal": "", "ref_id": "b36", "title": "An analytic theory of generalization dynamics and transfer learning in deep linear networks", "year": "2018" }, { "authors": "Rémi Le Priol; Reza Babanezhad; Yoshua Bengio; Simon Lacoste-Julien", "journal": "PMLR", "ref_id": "b37", "title": "An analysis of the adaptation speed of causal models", "year": "2021" }, { "authors": "Kenneth Li; Aspen K Hopkins; David Bau; Fernanda Viégas; Hanspeter Pfister; Martin Wattenberg", "journal": "", "ref_id": "b38", "title": "Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task", "year": "2023" }, { "authors": "Vijeta Vladislav Lialin; Anna Deshpande; Rumshisky", "journal": "", "ref_id": "b39", "title": "Scaling down to scale up: A guide to parameter-efficient fine-tuning", "year": "2023" }, { "authors": "Stephanie Lin; Jacob Hilton; Owain Evans", "journal": "", "ref_id": "b40", "title": "Truthfulqa: Measuring how models mimic human falsehoods", "year": "2021" }, { "authors": "David Lindner; János Kramár; Matthew Rahtz; Thomas Mcgrath; Vladimir Mikulik", "journal": "", "ref_id": "b41", "title": "Tracr: Compiled transformers as a laboratory for interpretability", "year": "2023" }, { "authors": "Bingbin Liu; Jordan T Ash; Surbhi Goel; Akshay Krishnamurthy; Cyril Zhang", "journal": "", "ref_id": "b42", "title": "Transformers learn shortcuts to automata", "year": "2022" }, { "authors": "Bingbin Liu; Jordan T Ash; Surbhi Goel; Akshay Krishnamurthy; Cyril Zhang", "journal": "", "ref_id": "b43", "title": "Exposing attention glitches with flip-flop language modeling", "year": "2023" }, { "authors": "Haokun Liu; Derek Tam; Mohammed Muqeeth; Jay Mohta; Tenghao Huang; Mohit Bansal; Colin A Raffel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b44", "title": "Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning", "year": "2022" }, { "authors": "Yi Liu; Gelei Deng; Zhengzi Xu; Yuekang Li; Yaowen Zheng; Ying Zhang; Lida Zhao; Tianwei Zhang; Yang Liu", "journal": "", "ref_id": "b45", "title": "Jailbreaking chatgpt via prompt engineering: An empirical study", "year": "2023" }, { "authors": "Charles Lovering; Rohan Jha; Tal Linzen; Ellie Pavlick", "journal": 
"", "ref_id": "b46", "title": "Predicting inductive biases of pretrained models", "year": "2021" }, { "authors": "Ekdeep Singh; Lubana ; Robert P Dick", "journal": "", "ref_id": "b47", "title": "A gradient flow framework for analyzing network pruning", "year": "2021" }, { "authors": "Ekdeep Singh Lubana; Eric J Bigelow; Robert P Dick; David Krueger; Hidenori Tanaka", "journal": "", "ref_id": "b48", "title": "Mechanistic Mode Connectivity", "year": "2022" }, { "authors": "Wesley Maddox; Shuai Tang; Pablo Moreno; Andrew Gordon Wilson; Andreas Damianou", "journal": "PMLR", "ref_id": "b49", "title": "Fast adaptation with linearized neural networks", "year": "2021" }, { "authors": "Preprint Sadhika Malladi; Alexander Wettig; Dingli Yu; Danqi Chen; Sanjeev Arora", "journal": "PMLR", "ref_id": "b50", "title": "A kernel-based view of language model fine-tuning", "year": "2023" }, { "authors": "S Michael; Colin A Matena; Raffel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b51", "title": "Merging models with fisher-weighted averaging", "year": "2022" }, { "authors": "Clara Meister; Stefan Lazov; Isabelle Augenstein; Ryan Cotterell", "journal": "", "ref_id": "b52", "title": "Is sparse attention more interpretable?", "year": "2021" }, { "authors": "Pavlo Molchanov; Stephen Tyree; Tero Karras; Timo Aila; Jan Kautz", "journal": "", "ref_id": "b53", "title": "Pruning convolutional neural networks for resource efficient inference", "year": "2016" }, { "authors": "C Michael; Paul Mozer; Smolensky", "journal": "Advances in neural information processing systems", "ref_id": "b54", "title": "Skeletonization: A technique for trimming the fat from a network via relevance assessment", "year": "1988" }, { "authors": "Neel Nanda; Andrew Lee; Martin Wattenberg", "journal": "", "ref_id": "b55", "title": "Emergent linear representations in world models of self-supervised sequence models", "year": "2023" }, { "authors": "Maya Okawa; Ekdeep Singh Lubana; Robert P Dick; Hidenori Tanaka", "journal": "", "ref_id": "b56", "title": "Compositional abilities emerge multiplicatively: Exploring diffusion models on a synthetic task", "year": "2023" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b57", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Alicia Parrish; Angelica Chen; Nikita Nangia; Vishakh Padmakumar; Jason Phang; Jana Thompson; Phu Mon Htut; Samuel R Bowman", "journal": "", "ref_id": "b58", "title": "Bbq: A hand-built bias benchmark for question answering", "year": "2021" }, { "authors": "Jonas Pfeiffer; Andreas Rücklé; Clifton Poth; Aishwarya Kamath; Ivan Vulić; Sebastian Ruder; Kyunghyun Cho; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b59", "title": "Adapterhub: A framework for adapting transformers", "year": "2020" }, { "authors": "Xiangyu Qi; Yi Zeng; Tinghao Xie; Pin-Yu Chen; Ruoxi Jia; Prateek Mittal; Peter Henderson", "journal": "", "ref_id": "b60", "title": "Fine-tuning aligned language models compromises safety, even when users do not intend to!", "year": "2023" }, { "authors": "Alec Radford; Karthik Narasimhan; Tim Salimans; Ilya Sutskever", "journal": "", "ref_id": "b61", "title": "Improving language understanding by generative pre-training", "year": "2018" }, { "authors": "Alec Radford; 
Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b62", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b63", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b64", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Machel Reid; Yutaro Yamada; Shixiang Shane Gu", "journal": "", "ref_id": "b65", "title": "Can wikipedia help offline reinforcement learning?", "year": "2022" }, { "authors": "Preprint Victor Sanh; Albert Webson; Colin Raffel; Stephen Bach; Lintang Sutawika; Zaid Alyafeai; Antoine Chaffin; Arnaud Stiegler; Arun Raja; Manan Dey; M Saiful Bari; Canwen Xu; Urmish Thakker; Shanya Sharma Sharma; Eliza Szczechla; Taewoon Kim; Gunjan Chhablani; Nihal Nayak; Debajyoti Datta; Jonathan Chang; Mike Tian-Jian; Han Jiang; Matteo Wang; Sheng Manica; Zheng Xin Shen; Harshit Yong; Rachel Pandey; Thomas Bawden; Trishala Wang; Jos Neeraj; Abheesht Rozen; Andrea Sharma; Thibault Santilli; Jason Fevry; Alan Fries; Ryan Teehan; Le Teven; Stella Scao; Leo Biderman; Thomas Gao; Alexander M Wolf; Rush", "journal": "", "ref_id": "b66", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2022" }, { "authors": "Sofia Serrano; Noah A Smith", "journal": "", "ref_id": "b67", "title": "Is attention interpretable?", "year": "2019" }, { "authors": "Harshay Shah; Kaustav Tamuly; Aditi Raghunathan; Prateek Jain; Praneeth Netrapalli", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b68", "title": "The pitfalls of simplicity bias in neural networks", "year": "2020" }, { "authors": "Xinyue Shen; Zeyuan Chen; Michael Backes; Yun Shen; Yang Zhang", "journal": "", "ref_id": "b69", "title": "do anything now\": Characterizing and evaluating in-the-wild jailbreak prompts on large language models", "year": "2023" }, { "authors": "Hui Shi; Sicun Gao; Yuandong Tian; Xinyun Chen; Jishen Zhao", "journal": "", "ref_id": "b70", "title": "Learning bounded context-freegrammar via lstm and the transformer: Difference and the explanations", "year": "2022" }, { "authors": "Michael Sipser", "journal": "ACM Sigact News", "ref_id": "b71", "title": "Introduction to the theory of computation", "year": "1996" }, { "authors": "Nisan Stiennon; Long Ouyang; Jeffrey Wu; Daniel Ziegler; Ryan Lowe; Chelsea Voss; Alec Radford; Dario Amodei; Paul F Christiano", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b72", "title": "Learning to summarize with human feedback", "year": "2020" }, { "authors": "Hidenori Tanaka; Aran Nayebi; Niru Maheswaranathan; Lane Mcintosh; Stephen Baccus; Surya Ganguli", "journal": "Adv. 
in Neural Information Processing Systems", "ref_id": "b73", "title": "From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction", "year": "2019" }, { "authors": "Ian Tenney; Dipanjan Das; Ellie Pavlick", "journal": "", "ref_id": "b74", "title": "Bert rediscovers the classical nlp pipeline", "year": "2019" }, { "authors": "Hugo Touvron; Louis Martin; Kevin Stone; Peter Albert; Amjad Almahairi; Yasmine Babaei; Nikolay Bashlykov; Soumya Batra; Prajjwal Bhargava; Shruti Bhosale", "journal": "", "ref_id": "b75", "title": "Llama 2: Open foundation and fine-tuned chat models", "year": "2023" }, { "authors": "Nilesh Tripuraneni; Michael Jordan; Chi Jin", "journal": "Advances in neural information processing systems", "ref_id": "b76", "title": "On the theory of transfer learning: The importance of task diversity", "year": "2020" }, { "authors": "Puja Trivedi; Danai Koutra; Jayaraman J Thiagarajan", "journal": "", "ref_id": "b77", "title": "A closer look at model adaptation using feature distortion and simplicity bias", "year": "2023" }, { "authors": "Josef Valvoda; Naomi Saphra; Jonathan Rawski; Adina Williams; Ryan Cotterell", "journal": "", "ref_id": "b78", "title": "Benchmarking compositionality with formal languages", "year": "2022" }, { "authors": "Elena Voita; Ivan Titov", "journal": "", "ref_id": "b79", "title": "Information-theoretic probing with minimum description length", "year": "2020" }, { "authors": "Elena Voita; David Talbot; Fedor Moiseev; Rico Sennrich; Ivan Titov", "journal": "", "ref_id": "b80", "title": "Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned", "year": "2019" }, { "authors": "Yihan Wang; Si Si; Daliang Li; Michal Lukasik; Felix Yu; Cho-Jui Hsieh; Inderjit S Dhillon; Sanjiv Kumar", "journal": "", "ref_id": "b81", "title": "Two-stage llm fine-tuning with less specialization and more generalization", "year": "2022" }, { "authors": "Alexander Wei; Nika Haghtalab; Jacob Steinhardt", "journal": "", "ref_id": "b82", "title": "Jailbroken: How does llm safety training fail?", "year": "2023" }, { "authors": "Jason Wei; Maarten Bosma; Y Vincent; Kelvin Zhao; Adams Wei Guu; Brian Yu; Nan Lester; Andrew M Du; Quoc V Dai; Le", "journal": "", "ref_id": "b83", "title": "Finetuned language models are zero-shot learners", "year": "2021" }, { "authors": "Laura Weidinger; John Mellor; Maribeth Rauh; Conor Griffin; Jonathan Uesato; Po-Sen Huang; Myra Cheng; Mia Glaese; Borja Balle; Atoosa Kasirzadeh", "journal": "", "ref_id": "b84", "title": "Ethical and social risks of harm from language models", "year": "2021" }, { "authors": "Gail Weiss; Yoav Goldberg; Eran Yahav", "journal": "PMLR", "ref_id": "b85", "title": "Thinking like transformers", "year": "2021" }, { "authors": "Johannes Welbl; Amelia Glaese; Jonathan Uesato; Sumanth Dathathri; John Mellor; Lisa Anne Hendricks; Kirsty Anderson; Pushmeet Kohli; Ben Coppin; Po-Sen Huang", "journal": "", "ref_id": "b86", "title": "Challenges in detoxifying language models", "year": "2021" }, { "authors": "Sarah Wiegreffe; Yuval Pinter", "journal": "", "ref_id": "b87", "title": "Attention is not not explanation", "year": "2019" }, { "authors": "Albert Xu; Eshaan Pathak; Eric Wallace; Suchin Gururangan; Maarten Sap; Dan Klein", "journal": "", "ref_id": "b88", "title": "Detoxifying language models risks marginalizing minority voices", "year": "2021" }, { "authors": "Greg Yang; Edward J Hu", "journal": "", "ref_id": "b89", "title": "Feature learning in 
infinite-width neural networks", "year": "2020" }, { "authors": "Xianjun Yang; Xiao Wang; Qi Zhang; Linda Petzold; William Yang Wang; Xun Zhao; Dahua Lin", "journal": "", "ref_id": "b90", "title": "Shadow alignment: The ease of subverting safely-aligned language models", "year": "2023" }, { "authors": "Elad Ben Zaken; Shauli Ravfogel; Yoav Goldberg", "journal": "", "ref_id": "b91", "title": "Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models", "year": "2021" }, { "authors": "Haoyu Zhao; Abhishek Panigrahi; Rong Ge; Sanjeev Arora", "journal": "", "ref_id": "b92", "title": "Do transformers parse while predicting the masked word?", "year": "2023" }, { "authors": "Hattie Zhou; Arwen Bradley; Etai Littwin; Noam Razin; Omid Saremi; Josh Susskind; Samy Bengio; Preetum Nakkiran", "journal": "", "ref_id": "b93", "title": "What algorithms can transformers learn? a study in length generalization", "year": "2023" }, { "authors": "Xuhui Zhou; Maarten Sap; Swabha Swayamdipta; Noah A Smith; Yejin Choi", "journal": "", "ref_id": "b94", "title": "Challenges in automated debiasing for toxic language detection", "year": "2021" }, { "authors": "Andy Zou; Zifan Wang; J Zico Kolter; Matt Fredrikson", "journal": "", "ref_id": "b95", "title": "Universal and transferable adversarial attacks on aligned language models", "year": "2023" } ]
[ { "formula_coordinates": [ 6, 107.83, 543.02, 394.97, 110.6 ], "formula_id": "formula_0", "formula_text": "C Te = 0 C Te = 1 C Te = 0 C Te = 1 C Te = 0 C Te = 1 0 2.5K 5K 7.5K 10K 30 65 100 0 2.5K 5K 7.5K 10K 0 2.5K 5K 7.5K 10K 0 2.5K 5K 7.5K 10K 0 2.5K 5K 7.5K 10K 0 2.5K 5K 7.5K 10K (a)  H T (c)  L T Acc. O FT (b)  M T C Tr = 0 C Tr = 0.5 C Tr = 0.8 C Tr = 1.0 η M η S" }, { "formula_coordinates": [ 19, 216.94, 485.72, 287.73, 11.09 ], "formula_id": "formula_1", "formula_text": "SOS + T + O + O ′ + SOT + Txt + EOT + ART + Ans + EOS.(1)" }, { "formula_coordinates": [ 19, 133.5, 615.56, 313.03, 60.17 ], "formula_id": "formula_2", "formula_text": "s → r, q; s → q, p; p → m, n, o; p → n, o, m; q → n, m, o; q → m, n; r → o, m; r→ m, o, n; m → l, j; m → j, l, k; n → k, j, l; n → l, j, k; o →l, k, j; o → k, j; j → h, i; j → i, h; k → h, g, i; k → g, h, i; l → i, h, g; l → h, i, g; g → d, f" }, { "formula_coordinates": [ 22, 135.4, 594.44, 56.01, 9.85 ], "formula_id": "formula_3", "formula_text": "• P T (a) = 0." }, { "formula_coordinates": [ 27, 214.18, 302.28, 288.24, 72.96 ], "formula_id": "formula_4", "formula_text": "T J1 /T J2 0 2.5K 5K 7.5K 10K T NJ 0 2.5K 5K 7.5K 10K T J1 /T J2 (a)η M (b)η S  H T  M T  L T" }, { "formula_coordinates": [ 28, 160.27, 197.82, 335.27, 13.13 ], "formula_id": "formula_5", "formula_text": "C Tr = 0 C Tr = 1 Scr. + FT η M η S" }, { "formula_coordinates": [ 38, 181.66, 503.37, 143.27, 45.47 ], "formula_id": "formula_6", "formula_text": "CTr = 0 ----b-a-bb---a-b------------a---------b------ CTr = 1 ----b-a-bb---a-b------------a---------b------ CTr = 0 ----b-a-bb---a-b------------a---------b------CTr" }, { "formula_coordinates": [ 41, 109.07, 114.94, 387.75, 447.3 ], "formula_id": "formula_7", "formula_text": "----b-a-bb---a-b------------a---------b------ ---- b - a - b b --- a - b ------------ a --------- b ------ Block-1 ----b-a-bb---a-b------------a---------b------ Block-2 ----b-a-bb---a-b------------a---------b------ Block-1 ----b-a-bb---a-b------------a---------b------ Block-2 ----b-a-bb---a-b------------a---------b------ ---- b - a - b b --- a - b ------------ a --------- b ------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ---- b - a - b b --- a - b ------------ a --------- b ------ Attending Token ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ---- b - a - b b --- a - b ------------ a --------- b ------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ---- b - a - b b --- a - b ------------ a --------- b ------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------ ----b-a-bb---a-b------------a---------b------0" }, { "formula_coordinates": [ 44, 132.47, 118.23, 338.73, 179.06 ], "formula_id": "formula_8", "formula_text": "k e hwmg u a t d############### k e h w m g u a t d # # # # # # # # # # # # # # # C Tr = 0 k e hwmg u a t d############### C Tr = 1 k e hwmg u a t d############### C Tr = 0 k e hwmg u a t d############### C Tr = 1 l i t r b s u g p################ l i t r b s u g p # # # # # # # # # # # # # # # #" }, { "formula_coordinates": [ 44, 132.97, 173.43, 368.59, 226.66 ], "formula_id": "formula_9", 
"formula_text": "l i t r b s u g p################ l i t r b s u g p################ l i t r b s u g p################ v c t g l y r h e u############### v c t g l y r h e u # # # # # # # # # # # # # # # v c t g l y r h e u############### v c t g l y r h e u############### v c t g l y r h e u############### 0.0 0.1 0.2 0.3 0.4 0.5 (a) η VS (b) η M Token Attended" } ]
2023-11-28
[ { "figure_ref": [], "heading": "", "publication_ref": [ "b30", "b38", "b39" ], "table_ref": [], "text": "LLaVA: At a train station, a group of people, including both young children and adults, are standing on a platform waiting for a train to arrive. The train is already present on the tracks, partially visible on the right side of the image. Some of the people watch the train closely, while others seem to be patiently anticipating its departure.\nThere is a total of eight individuals waiting for the train, with one child in the middle of the platform and the others scattered around. A backpack can be found on the far left side of the platform, suggesting that someone may have set it down while waiting.\nShareGPT4V: The image captures a moment at a train station. A green train is on the tracks, moving away from the platform labeled \"Platform 2\". The train's motion is observed by people standing on the platform, their attention drawn to the departing vehicle.\nA red signal stands on the tracks, its vibrant color contrasting with the green of the train. Signs are prominently displayed around the platform. One warns \"Beware of Trains\", a cautionary reminder of the powerful machines that frequent these tracks. Another sign advises \"Please Use The Footbridge To Cross The Line. If You Require Assistance Please Ask The Station Staff\", guiding passengers to safely navigate the station.\nThe sky above is a clear blue, providing a serene backdrop to the bustling activity of the station. In the distance, trees can be seen, adding a touch of nature to this man-made setting. The image is a snapshot of everyday life at a train station, capturing both its routine operations and its inherent charm. We showcase a comparison between the caption in our proposed ShareGPT4V dataset and those utilized by recent large multi-modal models (LMMs). Unlike COCO-Caption [7] involves brief human-made captions on the main subject. LLaVA-Instruct [31] combines human-made captions, bounding boxes, and GPT4 [39] to 'imagine' the image details, which leads to inevitable error/hallucination description (marked in red). Our approach involves feeding carefully designed prompts along with images directly into the advanced GPT4-Vision [40] and the descriptions are more detailed and accurate (marked in blue). (b) We highlight the remarkable performance of the proposed LMM, ShareGPT4V-7B, developed with the assistance of the ShareGPT4V dataset." }, { "figure_ref": [], "heading": "Abstract", "publication_ref": [], "table_ref": [], "text": "In the realm of large multi-modal models (LMMs), efficient modality alignment is crucial yet often constrained by the scarcity of high-quality image-text data. To address this bottleneck, we introduce the ShareGPT4V dataset, a pioneering large-scale resource featuring 1.2 million highly descriptive captions, which surpasses existing datasets in diversity and information content, covering world knowledge, object properties, spatial relationships, and aesthetic evaluations. Specifically, ShareGPT4V originates from a curated 100K high-quality captions collected from advanced GPT4-Vision and has been expanded to 1.2M with a superb caption model trained on this subset. 
ShareGPT4V first demonstrates its effectiveness for the Supervised Fine-Tuning (SFT) phase by substituting an equivalent quantity of detailed captions in existing SFT datasets with a subset of our high-quality captions, significantly enhancing LMMs such as LLaVA-7B, LLaVA-1.5-13B, and Qwen-VL-Chat-7B on the MME and MMBench benchmarks, with respective gains of 222.8/22.0/22.3 and 2.7/1.3/1.5. We further incorporate ShareGPT4V data into both the pre-training and SFT phases, obtaining ShareGPT4V-7B, a superior LMM based on a simple architecture that achieves remarkable performance across the majority of multi-modal benchmarks. This project is available at https://ShareGPT4V.github.io to serve as a pivotal resource for advancing the LMM community." }, { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b3", "b7", "b8", "b52", "b55", "b29", "b30", "b56" ], "table_ref": [], "text": "Recent breakthroughs in artificial intelligence have been driven notably by the development of large language models (LLMs) [2, 4, 8, 9, 12, 53, 56]. Following this evolution, unifying modalities via LLMs has become an inevitable trend, and visually aligned multi-modal LLMs [3, 5, 10, 30, 31, 35, 57, 60, 61, 62] have witnessed rapid advances in recent years. Putting aside the diversity in model architectures and training data, most large multi-modal models (LMMs) adhere to a dual-phase paradigm: a pre-training stage on large-scale image-text pairs for modality alignment, followed by a supervised fine-tuning (SFT) stage that enhances multi-modal capabilities through instruction-format data.
Despite these efforts and achievements, we argue that current LMMs still align the modalities in a sub-optimal manner, primarily due to the lack of sufficient high-quality image-text pairs. Vision, inherently rich in information and fine-grained semantics, is often reduced to simplistic captions in mainstream image-text datasets. These captions, typically brief and focused on salient objects, lead to a significant reduction in information content and sub-optimal modality alignment.
To support this argument, we conducted a straightforward experiment: we substituted the image-text pairs utilized in the SFT stage of several typical LMMs with an equivalent number of comprehensive captions generated by the advanced GPT4-Vision model and re-benchmarked these LMMs. As shown in Figure 2, this equivalent substitution, despite its relatively minimal extent (only 3.5% of the SFT data in the LLaVA-1.5 case), resulted in consistent performance gains across various LMMs and benchmarks. Encouraged by these promising results, we expanded our efforts to collect high-quality captions on a larger scale, in two phases. In the initial phase, approximately 100K images from various data sources were gathered. We employed carefully designed data-specific prompts to effectively utilize GPT4-Vision to generate high-quality descriptions. The resulting captions, averaging 942 characters, encompass a comprehensive range of image information, such as world knowledge, object properties, spatial relations, aesthetic evaluations, etc. In the second phase, we utilized these captions to build a strong caption model, which dispenses with the data-source-specific prompts and can generate comprehensive captions for any given image.
Based on the above endeavors, we introduce the ShareGPT4V dataset, the first highly descriptive image-text collection.
It comprises two components: 100K GPT4-Vision generated captions with diverse image sources and 1.2M captions crafted by our caption model, which is learned from the 100K high-quality captions. With the aid of this dataset, we have developed an eponymous state-of-the-art large multi-modal model, the ShareGPT4V-7B. To maintain clarity in our discourse, 'dataset' or 'model' will be distinctly specified when referring to ShareGPT4V. Figure 1(b) shows that ShareGPT4V-7B outperforms other advanced 7B-scale LMMs in all 11 benchmarks, showcasing its competitive performance. For instance, our ShareGPT4V-7B model achieves an impressive total score of 1943.8 on the MME benchmark, surpassing the second-ranked Qwen-VL-Chat-7B model, which was trained on 1.4 billion samples, by 95.6 points.
In a nutshell, our contributions are threefold:
• We point out the fact that existing low-quality captions can impede the alignment between vision and language modalities of LMMs, and we verify it with experimental results. This revelation highlights an urgent requirement within the LMM community for high-quality captions to effectively alleviate such a dilemma.
• We introduce the ShareGPT4V dataset, a large-scale image-text collection featuring 100K highly descriptive captions generated by GPT4-Vision and 1.2M high-quality captions generated by our caption model. The captions cover world knowledge, object attributes, spatial relations, aesthetic assessment, etc. Moreover, the general caption model trained on the entire set of GPT4-Vision-generated captions can further scale our dataset and will also be available for community usage.
• Leveraging the proposed dataset, we have developed ShareGPT4V-7B, an advanced large multi-modal model. Despite the absence of an elaborate architecture design, this model consistently demonstrates impressive performance across various multi-modal benchmarks." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b10", "b45", "b53", "b3", "b8", "b41", "b38", "b7", "b51", "b52", "b52", "b55", "b29", "b30", "b18", "b44", "b44", "b26", "b27", "b31", "b58", "b15", "b2", "b37", "b15", "b37", "b2", "b30" ], "table_ref": [], "text": "Large Language Models. In recent years, with the surge in data and computational power, the development of large language models has experienced a boom. Early encoder-decoder models like BERT [11] and T5 [46], and decoder-centric models such as GPT [44], leveraged the Transformer architecture [54] to excel in various NLP tasks. The success of GPT3 [4] has popularized the use of decoder-only architectures, which rely on auto-regressive decoding for generating predictions. Subsequent models like PaLM [9] extended the limits of model parameters and dataset scale, while others like InstructGPT [42] and ChatGPT [39] introduced fine-tuning and reinforcement learning techniques for improved conversational interaction. These developments, along with contributions from the open-source community [8,52,53,53,56], have set new benchmarks and opened avenues for future research in the NLP area.
Figure 2. Illustration of the benefits high-quality captions bring to the SFT stage. We compare the performance of various large multi-modal models before and after replacing a corresponding portion of their SFT captions with those generated by GPT4-Vision. The replacement ratio is only 3.5% for LLaVA-1.5 [30] and Qwen-VL-Chat [3], and 14.5% for LLaVA [31].
Large Multi-modal Models.
As LLMs rapidly evolve, a faction within the research community is increasingly concentrating on introducing visual knowledge into LLMs.\nCentral to this area are the seminal works in modality alignment within the vision-language learning area [19,45]. A notable instance is CLIP [45], which exemplifies the alignment of visual and textual modalities through contrastive learning on extensive image-text pairs. A series of works [26,27] were improved upon CLIP by employing refined data strategies for more diverse data, they have been effective for basic visual tasks [28,32,59] but less so for complex tasks like visual question answering. MiniGPT Image-text Data Enhancement. In the vision-language learning area, several initiatives [13, 16,23,38] have been undertaken to enhance the quality of captions within imagetext pairs. LaCLIP [13] leverages LLMs to rewrite raw captions, but its effectiveness is often hindered by hallucinations due to limited visual information and the low quality of original captions. Research [16,38] explores methods to filter and blend raw and synthetic captions to enhance the CLIP model. A recent work, VeCLIP [23], proposes using LLMs to amalgamate information from both raw and synthetic captions. Nevertheless, the approach is constrained by the low quality of synthetic captions, resulting in only minimal incorporation of visual knowledge in the caption fusion process. To the best of our knowledge, in the LMM area, LLaVA [31] uniquely inputs human-annotated short captions and bounding boxes into the GPT4 language model. This approach lets the model 'imagine' viewing the image before producing detailed captions. However, this method relies heavily on extensive human-annotated data and does not allow the model to truly 'see' the images. Consequently, it tends to generate detailed descriptions primarily of main objects, often including those in obscure corners but annotated with bounding boxes, leading to potential hallucinations in the LMMs' output. In contrast, we employ the most advanced LMM, GPT4-Vision, which is capable of directly producing highly descriptive captions from deliberated prompts and corresponding image inputs." }, { "figure_ref": [], "heading": "ShareGPT4V Dataset", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Overview", "publication_ref": [], "table_ref": [], "text": "In this section, we provide a detailed exposition of the process involved in creating the ShareGPT4V dataset. Subsection 3.2 elaborates on how we utilized GPT4-Vision to generate 100K high-quality captions from various image sources and briefly validates their significant role in the SFT phase of LMMs. Subsection 3.3 describes our methodology for reasonably expanding the 100K high-quality captions in Sec.3.2 to 1.2M captions, matching the quality generated by GPT4-Vision with acceptable cost. Table 1 presents a comparison between our dataset and existing widely-used caption datasets in the LMM field. Our ShareGPT4V dataset stands out due to its more diverse range of image sources, the use of a more advanced caption producer, a larger number of samples, and the generation of longer captions. " }, { "figure_ref": [], "heading": "ShareGPT4V Data Collection", "publication_ref": [], "table_ref": [], "text": "The supervised fine-tuning captions were collected from GPT4-Vision, the latest and most advanced LMM. For each image selected from a specific data source D, we employed a meticulously crafted, data-specific prompt P D . 
This prompt instructed GPT4-Vision to generate detailed descriptions, taking into account factors such as world knowledge, object attributes, spatial relationships, and aesthetic evaluations." }, { "figure_ref": [ "fig_3" ], "heading": "", "publication_ref": [ "b47", "b49", "b50", "b40", "b47", "b49", "b30", "b29", "b29" ], "table_ref": [], "text": "Name | Image Source | Visible | Captioned by | Samples | Avg.
COCO-Caption [7] | COCO [29] | ✓ | Human | 118K | 52
BLIP-LCS [26] | LCS | ✓ | BLIP [26] | 558K | 54
LLaVA-23K [31] | COCO [29] | × | GPT4 [39] | 23K | 609
ShareGPT4V | LCS, COCO [29], etc. | ✓ | GPT4-Vision [40] | 100K | 942
ShareGPT4V-PT | LCS, COCO [29], etc. | ✓ | Share-Captioner | 1,246K | 826
Table 1. Comparison of widely-used caption datasets and ShareGPT4V. 'LCS' abbreviates the LAION [48], CC [50], and SBU [47] datasets. The 'Visible' column denotes the image visibility during captioning, and the last column shows the average character number of the caption.
Data sources. To maximize the diversity and comprehensiveness of our data, we compiled around 100K images from various data sources, including images for detection [29] and segmentation [21], complex text-containing images [51], as well as various web images [41,48,50] containing artworks, landmarks, celebrities, etc. More details can be found in the supplementary material.
Prompt Design. Given the diversity of our image sources, we expect a highly content-related description for each image. That is, the captions should extend beyond mere appearance and attributes, incorporating knowledge-related information. For instance, the Eiffel Tower should not be simply described as a tall iron tower, and a picture of Einstein should not be summarized as an old man.
For the description quality and stability, we designed a base prompt for a general description and added a specialized prompt for each data source. The base prompt asks GPT4-Vision to describe the basic information of the image, including the object attributes, appearance, and spatial relationships. The specialized prompt focuses on data-related information: as shown in Figure 3, we emphasize that GPT4-Vision should mention some corresponding knowledge, such as the name and geographical location of a landmark-related image. Additionally, we add an aesthetic-related prompt for part of the images, to further improve the comprehensiveness of the description.
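To make the prompt assembly concrete, below is a minimal sketch of how a base prompt, a data-source-specific prompt, and an optional aesthetic prompt might be combined and sent to a GPT4-Vision-style endpoint. It assumes an OpenAI-style Python client; the prompt wording, the SPECIAL_PROMPTS table, and the model name are illustrative placeholders rather than the exact prompts used to build ShareGPT4V.

```python
import base64
from openai import OpenAI

# Base prompt: object attributes, appearance, spatial relationships (paraphrased, not the paper's exact wording).
BASE_PROMPT = (
    "Describe the image in detail, covering object attributes, appearance, "
    "and the spatial relationships between objects."
)

# Data-source-specific additions; the wording here is illustrative only.
SPECIAL_PROMPTS = {
    "landmark": "Mention the name and geographical location of the landmark if recognizable.",
    "celebrity": "Identify the person and mention relevant background knowledge.",
    "text": "Transcribe and describe any text visible in the image.",
    "default": "",
}

AESTHETIC_PROMPT = "Also comment briefly on the aesthetic qualities of the image."

def build_prompt(source: str, add_aesthetics: bool = False) -> str:
    """Assemble the base prompt, the data-specific prompt, and an optional aesthetic prompt."""
    parts = [BASE_PROMPT, SPECIAL_PROMPTS.get(source, SPECIAL_PROMPTS["default"])]
    if add_aesthetics:
        parts.append(AESTHETIC_PROMPT)
    return " ".join(p for p in parts if p)

def caption_image(client: OpenAI, image_path: str, source: str) -> str:
    """Send the image plus the assembled prompt to a GPT4-Vision-style chat endpoint."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # placeholder model name
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": build_prompt(source, add_aesthetics=True)},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Example usage: caption_image(OpenAI(), "eiffel_tower.jpg", "landmark")
```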
Quality Verification. We conducted a straightforward experiment to verify the quality of the collected data: we chose a range of advanced, publicly available LMMs, including LLaVA-7B [31], LLaVA-1.5-7B [30], LLaVA-1.5-13B [30], and Qwen-VL-Chat-7B [3]. For a fair comparison, we replaced a corresponding portion of detailed captions in their Supervised Fine-Tuning (SFT) datasets with a selection from our 100K GPT4-Vision-generated captions, while maintaining image data sources as consistent as possible. As depicted in Figure 2, the integration of our highly descriptive captions significantly improved the SFT phase performance across these varied LMMs, reinforcing our pursuit to gather more high-quality captions for potential benefits in the pretraining stage.
LLaVA-1.5-13B: The image captures a young boy in a baseball uniform, standing at home plate and swinging a baseball bat. He is focused on the game and appears to be in the middle of a swing. There are several people around the field, including a few sitting on chairs and benches, watching the game. Some of them are closer to the boy, while others are further away. A baseball glove can be seen on the ground, likely belonging to one of the players. The scene conveys the excitement and energy of a baseball game.
Share-Captioner: In the heart of the action, a young baseball player, donned in a vibrant red and white uniform, is captured mid-swing. The player's black helmet gleams under the sunlight, matching the intensity in his eyes as he focuses on the incoming ball. His black socks peek out from under his white pants, adding a touch of contrast to his uniform.
The player is positioned in the batter's box, his body coiled like a spring, ready to unleash the power of his swing. His bat, gripped firmly in his hands, is poised to connect with the ball.
Beyond the player, a chain-link fence stands guard, separating the field from the spectators. A handful of spectators can be seen in the background, their attention riveted on the unfolding drama at home plate. Their presence adds a sense of anticipation and excitement to the scene.
Every element in this image paints a vivid picture of a thrilling moment in a baseball game, from the player's intense focus to the spectators' eager anticipation." }, { "figure_ref": [], "heading": "GPT4-Vision:", "publication_ref": [], "table_ref": [], "text": "In the image, a young baseball player is the central figure, poised in the batter's box, ready to swing at an incoming ball. Dressed in a vibrant red and white uniform, the player's stance exudes determination and focus. A black helmet, providing protection, sits snugly on the player's head, matching the black grip of the bat held firmly in their hands. The batter's box, marked by white chalk lines, is clearly visible beneath the player's feet, setting the stage for the impending action. The chain-link fence in the background serves as a boundary, separating the playing field from the spectators.
Speaking of spectators, a group of them can be seen in the background, seated on bleachers. They appear to be engrossed in the game, their faces blurred, making it impossible to discern their expressions. Their presence, however, adds a sense of anticipation and excitement to the scene.
The image captures a moment frozen in time, just before the swing, the anticipation palpable. It's a snapshot of a typical baseball game, filled with tension, excitement, and the promise of action." }, { "figure_ref": [], "heading": "ShareGPT4V-PT Data Generation", "publication_ref": [ "b57", "b54", "b33", "b17" ], "table_ref": [], "text": "Compared with the supervised fine-tuning stage, modality alignment in the pre-training phase is more crucial and demands a large-scale dataset. For building a pre-training dataset, we employed the 100K high-quality captions generated by GPT4-Vision to fine-tune an alternative caption model, which we have named Share-Captioner. Thanks to its training on diverse and comprehensive data, the Share-Captioner is capable of generating highly content-related descriptions with a unified instruction. This approach allows the data scaling phase to proceed without the need for specialized prompt design.
To amass a substantial volume of high-quality image-text pairs, we selected a subset of 1.2 million images from current public datasets (see supplementary material for more details) and employed our pre-trained Share-Captioner for the captioning process.
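As a concrete illustration of this scaling step, here is a minimal sketch of the batch-captioning loop. The captioner object and its generate method are hypothetical stand-ins for whatever inference interface the fine-tuned Share-Captioner exposes, and the unified instruction wording is illustrative rather than the exact one used.

```python
import json
from pathlib import Path

# A single unified instruction replaces the per-source prompts used with GPT4-Vision.
# Wording is illustrative, not the exact instruction used to train Share-Captioner.
UNIFIED_INSTRUCTION = "Analyze the image in a comprehensive and detailed manner."

def caption_corpus(captioner, image_dir: str, out_path: str, batch_size: int = 16) -> None:
    """Run a Share-Captioner-style model over a large image folder and write
    (image, caption) pairs as JSON lines for pre-training.

    `captioner` is any object exposing generate(images, instruction) -> list[str];
    this interface is a hypothetical stand-in, not a released API.
    """
    paths = sorted(Path(image_dir).glob("*.jpg"))
    with open(out_path, "w", encoding="utf-8") as out:
        for i in range(0, len(paths), batch_size):
            batch = paths[i : i + batch_size]
            captions = captioner.generate(batch, UNIFIED_INSTRUCTION)
            for path, caption in zip(batch, captions):
                out.write(json.dumps({"image": str(path), "caption": caption}) + "\n")

# Example usage: caption_corpus(share_captioner, "images/sam_subset", "sharegpt4v_pt.jsonl")
```

Writing JSON lines keeps the output streamable and easy to shard across workers when the corpus grows to millions of images.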
" }, { "figure_ref": [], "heading": "ShareGPT4V-7B Model", "publication_ref": [], "table_ref": [], "text": "To ascertain the efficacy of the ShareGPT4V dataset, we conducted experiments within a fair and controlled setting. This led to the development of ShareGPT4V-7B, a streamlined yet superior baseline LMM leveraging the high-quality data from the ShareGPT4V dataset in both the pre-training and SFT stages." }, { "figure_ref": [], "heading": "Model Architecture", "publication_ref": [ "b29", "b44", "b7", "b52" ], "table_ref": [], "text": "The ShareGPT4V-7B model follows the design of LLaVA-1.5 [30], including three integral components: (1) A vision encoder utilizing the CLIP-Large model [45], with a resolution of 336×336 and a patch size of 14, converting input images into 576 tokens. (2) A projector, which is a two-layer multi-layer perceptron (MLP), is introduced to connect the vision and language modalities. (3) An LLM, based on the open-source Vicuna-v1.5 [8], derived from LLaMA2 [53]. Currently, our focus is on the lightweight 7B model scale, and we have empirically validated that even with lightweight training data and model scale, it can significantly outperform many current LMMs that utilize extensive training datasets or larger model scales." }, { "figure_ref": [], "heading": "Pre-Training", "publication_ref": [ "b29", "b30" ], "table_ref": [], "text": "In the pre-training stage, we utilize the pre-training subset of the ShareGPT4V dataset, i.e., ShareGPT4V-PT. Given these high-quality captions, solely fine-tuning the MLP does not suffice to exploit their full capabilities. In previous LMM research [5, 30, 31, 62], the vision encoder is generally not fine-tuned during pre-training, a rational approach considering the lower quality of previously used captions, where fine-tuning the vision encoder might degrade its visual knowledge extraction ability. We opted for simultaneous fine-tuning of the vision encoder, projector, and large language model. With this configuration, the large language model acquires a native understanding of visual embeddings, while also prompting the vision encoder to create relevant visual embeddings for elements in captions. This setup enables a comprehensive exploration and understanding of the knowledge embedded in visual embeddings, aligned with the intricate details of the captions. Specifically, we consistently applied a learning rate of 2e-5 across all components, with a batch size set at 256, and the comprehensive optimization process spanned roughly 4700 steps. Notably, we experimentally found that selectively fine-tuning only the latter half of the vision encoder's layers achieves optimal results, coupled with a satisfactory level of training efficiency." }, { "figure_ref": [], "heading": "Supervised Fine-Tuning", "publication_ref": [ "b0", "b19", "b21", "b6", "b48", "b50", "b30" ], "table_ref": [], "text": "As we emphasized above, the goal of this paper is not to build a new SOTA model with some unique architecture designs but to investigate the effectiveness of high-quality captions in realizing better modality alignment of LMMs. So we utilize the 665k supervised data organized by LLaVA-1.5 and only replace part of it with our ShareGPT4V dataset.
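Before turning to the composition of the 665k SFT mixture below, here is a minimal PyTorch-style sketch of the two-stage training recipe described above. The Hugging Face checkpoint names are the standard public releases of the components named in the text (CLIP ViT-L/336 and Vicuna-v1.5-7B); the actual ShareGPT4V training code may organize this differently, so treat the sketch as an assumption-laden illustration of which parameters train in each stage.

```python
import torch.nn as nn
from transformers import AutoModelForCausalLM, CLIPVisionModel

# Public checkpoints for the components named in the text (assumed, not the released training code).
vision = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14-336")
llm = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-7b-v1.5")
projector = nn.Sequential(  # two-layer MLP bridging vision features to the LLM hidden size
    nn.Linear(vision.config.hidden_size, llm.config.hidden_size),
    nn.GELU(),
    nn.Linear(llm.config.hidden_size, llm.config.hidden_size),
)

def configure_stage(stage: str) -> None:
    """Set requires_grad flags for the parameter groups trained in each stage."""
    for module in (vision, projector, llm):
        for p in module.parameters():
            p.requires_grad = False
    # The projector and the LLM are optimized in both stages.
    for module in (projector, llm):
        for p in module.parameters():
            p.requires_grad = True
    if stage == "pretrain":
        # Unlock only the latter half of the ViT transformer blocks (e.g. blocks 12-23 of 24).
        blocks = vision.vision_model.encoder.layers
        for block in blocks[len(blocks) // 2:]:
            for p in block.parameters():
                p.requires_grad = True

configure_stage("pretrain")  # stage 1: ShareGPT4V-PT, lr 2e-5, global batch size 256, ~4700 steps
configure_stage("sft")       # stage 2: 665k SFT mixture, frozen vision encoder, lr 2e-5, batch size 128, ~5200 steps
```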
In detail, the 665k data is gathered from publicly available academic task-oriented data [1,20,22,36,37,49,51] and instruction-tuning data for conversational and complex reasoning tasks [31] involving natural images [29]. It contains 23k detailed description data and we replaced it with randomly sampled 23K highquality captions from the 100K captions in ShareGPT4V.\nDuring the SFT stage, to enhance the training efficiency and compare fairly, we froze the vision encoder and instead focused on fine-tuning the projector and the large language model. The learning rate was established at 2e -5 , with a batch size of 128, and the total optimization process spanned around 5200 steps.\nShareGPT4V-7B: The interesting aspect of this image is the combination of a hand-drawn illustration of a child holding a bunch of grapes and the actual grapes placed on top of the drawing. This creates a unique and creative visual effect, as the grapes appear to be floating above the child's head. The image combines art and reality in a playful and imaginative way, making it visually appealing and engaging.\nUser: What is interesting in this image?\nShareGPT4V-7B: From a photography perspective, the image is beautiful due to the combination of elements and the play of light and shadow. The large hot air balloon, which is the main subject of the photo, is illuminated by a bright orange light, creating a striking contrast against the dark sky. The balloon's vibrant colors and the glowing light make it stand out prominently in scene.\nThe setting is a sandy beach at night, which adds a sense of tranquility and serenity to the image. The silhouettes of people and vehicles in the foreground provide depth and context, indicating that the balloon is being prepared for a nighttime flight. The low angle of the shot enhances the grandeur of the balloon, making it appear larger and more majestic.\nOverall, the image captures a unique moment of preparation and anticipation, with the hot air balloon as the centerpiece, set against a captivating nighttime backdrop. The play of light and shadow, the contrasting colors, and the composition of the scene all contribute to the beauty of this photograph.\nUser: From a photography perspective, analyze what makes this picture beautiful? " }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Benchmarks", "publication_ref": [ "b30", "b57", "b54", "b17" ], "table_ref": [], "text": "To thoroughly assess our proposed ShareGPT4V-7B model, we evaluate it across 11 benchmarks, covering a range of academic Visual Question Answering (VQA) tasks and recent benchmarks designed specifically for large multimodal models (LMMs). The LLaVA (in the wild) benchmark [31] is composed of 60 questions, spanning three distinct tasks: conversation, complex reasoning, and detailed description. The MME Benchmark [15] evaluates LMMs' perception and cognition capabilities through a series of carefully crafted questions across 14 sub-tasks. MM-Bench and MMBench-CN [33] benchmarks manually design questions to evaluate the model's vision-related reasoning and perception abilities for English and Chinese, respectively. SEED [24], with the assistance of GPT4, generated a dataset comprising approximately 19K questions related to images and videos. MM-Vet [58] uses GPT4 for a six-dimensional LMM capability assessment. 
Q-Bench [55] assesses low-level perception, while VQA-v2 [17] and VisWiz [18] are benchmarks in the realm of traditional Visual Question Answering (VQA) tasks." }, { "figure_ref": [], "heading": "Quantitative Comparison", "publication_ref": [], "table_ref": [], "text": "We present a quantitative comparison between our proposed ShareGPT4V-7B model with existing state-of-theart LMMs. Notably, compared with previous LMMs, our ShareGPT4V-7B attained the most superior performance in 9 out of the total 11 benchmarks. Specifically, our ShareGPT4V-7B model outperformed the previously best-performing LLaVA-1.5-13B model by 1.9 points on the LLaVA (in the wild) benchmark, demon-strating superior capabilities in tasks such as detailed description and complex reasoning. On the MME Benchmark, it achieved the highest scores in both perception (P) and cognition (C) capabilities, surpassing LLaVA-1.5-13B in perception by 36.1 points and exceeding Qwen-VL-Chat, which was trained on 1.4 billion data, by 15.7 points in cognition. Our model also achieved an optimal accuracy of 68.8% on MMBench, leading the secondbest by 1.1%. Furthermore, on the SEED (image) benchmark, which includes 9 assessment dimensions and 14K questions, ShareGPT4V-7B achieved the highest score of 69.7%, 1.5% higher than the second-ranked LLaVA-1.5-13B. In the low-level image assessment QBench, our model's top score of 63.4% can be attributed to the diversity of our constructed dataset. Lastly, our model almost consistently performed best on traditional VQA benchmarks with the smallest model size.\nOur findings demonstrate to the community that even with a simple architecture, public data, and lighter parameters (7B), it is possible to outperform many competitors with massive training data and parameter sizes, thanks to the support of these high-quality captions." }, { "figure_ref": [ "fig_6" ], "heading": "Multi-modal Dialogue", "publication_ref": [], "table_ref": [], "text": "In Figure 5, we present two representative examples within multi-modal dialogue scenarios. The figure demonstrates that our ShareGPT4V-7B exhibits satisfactory capabilities in understanding image details and performing aesthetic assessments. This further corroborates the significance of the high-quality captions we have collected. " }, { "figure_ref": [ "fig_7" ], "heading": "Ablations", "publication_ref": [], "table_ref": [ "tab_5", "tab_6" ], "text": "Effectiveness of ShareGPT4V Dataset. As shown in Table 4, we conducted a thorough ablation study to assess the impact of the ShareGPT4V-PT and ShareGPT4V subsets.\nOur baseline is the LLaVA-1.5-7B model, without utilizing the ShareGPT4V dataset in either pretraining or SFT stages. Utilizing only our ShareGPT4V subset during the SFT stages resulted in a significant increase of 31.4 points in MME perception score, and improvements of 2.5% and 0.5% in accuracy on the MMBench and SEED benchmarks, respectively. Notably, ShareGPT4V used here was selected from various data sources, yielding more performance gains than those from solely the COCO dataset (see in Figure 2). When only the ShareGPT4V-PT subset was used during pretraining, we observed a remarkable gain of 46.5 points in MME perception, along with substantial accuracy improvements of 3.1% and 2.3% on the MMBench and SEED benchmarks, respectively. 
Moreover, employing the ShareGPT4V dataset in both pretraining and SFT phases led to further satisfactory enhancements in overall performance, effectively validating the necessity of incorporating high-quality captions in both training stages. Pre-training Caption Quality. Then we study how the caption quality influences the pre-training performance. For a fair comparison, we pre-train the model with the same setting and images, but the captions are generated by different models. In detail, we use the 558K LAION-CC-SUB image-text pairs captioned by the BLIP as the baseline and replace the text with the high-quality one in our ShareGPT4V-PT.\nAs results shown in Table 5, comparing with the baseline, the joint training strategy with the BLIP-558K data gets better results on all the benchmarks, while the gain is quite minor that only 4.7 in MME Perception and 0.1 on SEED Bench. When we replace the captions with our ShareGPT4V-PT-558K, the model gets significant gains. In detail, it gets 1549.8, 68.3, 68.9 on the three benchmarks, surpassing the BLIP-558K case with 18.2, 1.9 and 2.0 respectively. This proves the essential of high-quality captions for effective pre-training and modality alignment. Number of Captions in Pre-training. In Figure 6, we present our investigation into the required quantity of highquality captions for the pre-training stage. Here we randomly sample the data from the ShareGPT4V-PT and train the model with the subset, which varies from 100K to 1200K. The results show that with only 100K high-quality data, the model has a significant improvement on both benchmarks, this further proves the effectiveness of the high-quality data. Meanwhile, with the scaling of training data, the model performance tends to be saturated after more than 1000K data being used for pre-training. This may indicate that with high-quality captions, the modal alignment could be quite efficient and realized with a relatively lightweight data scale. Number of Learnable ViT Blocks in Pre-training. As detailed in Table 6, we extensively investigated the optimal approach for fine-tuning the vision encoder during the pretraining phase. Compared to freezing the vision encoder during the pretraining phase, we found that unlocking the latter half of its transformer blocks significantly enhances performance. Specifically, such an approach led to a 52.2 gain on the MME perception benchmark, and substantial accuracy improvements of 2.2% and 1.6% on the MM-Bench and SEED benchmarks, respectively. This suggests that for high-quality captions, unlocking the vision encoder facilitates more effective modality alignment. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b47", "b49", "b40", "b50", "b29" ], "table_ref": [], "text": "In this study, we introduce ShareGPT4V, a groundbreaking large-scale image-text dataset with 1.2 million detailed and informative captions that surpass existing datasets in terms of richness and diversity, covering world knowledge, object attributes, spatial relationships, and aesthetic assessments. ShareGPT4V comprises 100K high-quality captions from GPT4-Vision for Supervised Fine-Tuning (SFT), expanded to 1.2 million for pre-training through a general caption model. We validated ShareGPT4V's effectiveness through SFT results on recent LMMs and further demonstrated its capabilities with the superior performance of our ShareGPT4V-7B model, which incorporates the dataset in both pre-training and SFT stages. 
We are committed to making ShareGPT4V fully accessible to the public, with the aspiration that it becomes a foundational resource in advancing the field of LMMs.\nA. Data Sources Data Source Composition for ShareGPT4V. To maximize the comprehensiveness of our captions, we compiled a total of 100K images from diverse sources. This includes 50K images from COCO [29], 30K images from 'LCS' (which abbreviates LAION [48], CC-3M [50], and SBU [41]), 20K images from SAM [21], 500 images from TextCaps [51], 500 images from WikiArt [47], and 1K images from webcrawled data (split evenly between images of landmarks and images of celebrities).\nData Source Composition for ShareGPT4V-PT. We utilized our pre-trained Share-Captioner to generate the pretraining dataset. This dataset is comprised of a subset of 1.2M images selected from existing public datasets. These include 118K images from COCO [29],570K images from SAM [21], and 558K images from LLaVA-1.5 pre-training data [30]." }, { "figure_ref": [], "heading": "B. Caption Analysis", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "Figure 7 provides a visualization of the root noun-verb pairs for the captions generated by both GPT4-Vision and Share-Captioner. It's clear to see that the diversity and linguistic expression of the captions produced by Share-Captioner are comparable to those of GPT4-Vision.\nWe analyzed the lexical composition of the captions produced by GPT4-Vision and Share-Captioner, and the results are presented in Table 7. The analysis reveals that the captions generated by our Share-Captioner contain a comparable amount of information to those generated by GPT4-Vision. " }, { "figure_ref": [ "fig_8" ], "heading": "C. Prompts", "publication_ref": [], "table_ref": [], "text": "Given the diversity of our image sources, we expect a highly content-related description for each image. As shown in Figure 8, we designed a base prompt for a general description and added a specialized prompt for each data source. Share-Captioner: In the tranquil setting of a lush green lawn, three vintage suitcases, each a different size, are neatly arranged in a row. The smallest suitcase is positioned on the left, followed by the medium-sized one in the middle, and the largest suitcase on the right. Each suitcase is crafted from brown leather and features a sturdy brass handle on top, adding a touch of vintage charm to their appearance." }, { "figure_ref": [], "heading": "D. Examples", "publication_ref": [], "table_ref": [], "text": "The suitcases are not alone in this scene. Behind them, a brick wall stands tall, providing a rustic backdrop to the scene. Nestled against this wall is a small tree, its branches reaching out as if to touch the suitcases. Adding a splash of color to the scene is a blue tarp, spread out on the ground in front of the wall.\nThe image captures a moment of stillness, with the suitcases waiting patiently for their next journey. The precise arrangement of the suitcases and their relative positions to each other and their surroundings create a harmonious composition that is pleasing to the eye.\nShare-Captioner: The image captures a serene scene at a waterfront. Dominating the left side of the frame is a tall, brown clock tower with a pointed roof. The tower is adorned with a clock face on each of its four sides, allowing time to be read from any direction.\nAdjacent to the clock tower, on the right side of the image, is a green building with a curved roof. 
The building's green color contrasts with the brown of the clock tower, adding a splash of color to the scene.\nIn the foreground, a body of water stretches out, reflecting the sky above and the buildings below. The water's surface is calm, mirroring the tranquility of the scene.In the background, beyond the clock tower and green building, construction cranes can be seen, hinting at ongoing development in the area. Despite this, the overall atmosphere of the image remains peaceful and serene.\nShare-Captioner: The image captures a vibrant scene of a train journey. Dominating the frame is a train, painted in a striking combination of yellow and blue. The train, composed of three carriages, is in motion, as suggested by the blurred background. The number 2 is prominently displayed on the side of the train, indicating its carriage number.\nThe train is traveling on a railway track that cuts through a serene landscape. The track is nestled amidst lush greenery, adding a touch of nature to the industrial scene. Above, the sky is a clear blue, dotted with fluffy white clouds, suggesting a bright and sunny day. The perspective of the image is particularly interesting. It appears to be taken from the window of another train, as indicated by the blurred background. This gives the viewer the impression of being part of the journey, looking out at the world passing by.\nOverall, the image beautifully encapsulates a moment in a train journey, with its vivid colors and dynamic composition. ShareGPT4V: The image is a captivating oil painting titled \"The Sower\", created by the renowned artist Vincent Van Gogh. It's a beautiful representation of post-impressionism art. The painting vividly portrays a farmer in the act of sowing seeds in a field. The field, awash with hues of yellow and blue, forms a striking contrast against the farmer's figure. In the distance, a quaint village landscape unfolds, complete with houses and trees. The colors used are vibrant, bringing the scene to life with their intensity. The brushstrokes are clearly visible, adding texture and depth to the painting. This artwork is a testament to Van Gogh's fascination with the countryside and the simple, yet profound, peasant life.\nShareGPT4V: The image captures a delightful scene of a meal, likely at a Chinese restaurant. The main focus is a white plate with a blue rim, presenting a dish of beef and broccoli. The beef, brown in color, and the broccoli, a vibrant green, are generously coated in a brown sauce, adding a glossy finish to the dish.\nTo the left of the plate, a spoon rests, ready to serve the delicious food. The plate itself is placed on a white tablecloth, providing a clean and simple backdrop that allows the colors of the food to stand out.\nIn the background, there are hints of other dishes, one of which appears to have a red sauce. These dishes add depth to the scene, suggesting a variety of flavors waiting to be explored. The image as a whole paints a picture of a hearty and appetizing meal, inviting the viewer to partake in the culinary experience.\nShareGPT4V: The image captures a charming Halloween scene. Dominating the frame is a large, glowing orange pumpkin. It's been skillfully carved to display a classic jack-o'-lantern face, with triangular eyes and a toothy grin. The word \"FERGUSON\" is also carved into the pumpkin, perhaps indicating the name of the family who owns this decoration. 
The pumpkin's interior is illuminated, casting a warm light that accentuates the carved features.\nBeside the orange pumpkin, there's a smaller white pumpkin. It's uncarved, providing a simple yet effective contrast to its larger, more elaborate neighbor. Adding a touch of whimsy to the scene is a silver owl figurine. Its metallic sheen stands out against the natural textures of the pumpkins.\nAll three objects are arranged on a wooden surface, which provides a rustic backdrop to the scene. Behind them, a gray wall further emphasizes the vibrant orange of the carved pumpkin. The relative positions of the objects suggest a thoughtful arrangement, with the glowing orange pumpkin as the centerpiece, flanked by the white pumpkin and the silver owl.\nDespite the simplicity of the scene, it's clear that each object has been chosen and positioned with care, creating a Halloween decoration that's both festive and welcoming. ShareGPT4V-7B: The most common catchphrase of the character on the right is \"I am the one who knocks.\" This phrase is associated with the character Walter White from the TV show Breaking Bad, played by Bryan Cranston. Walter White is a methamphetamine manufacturer and distributor who is known for his intelligence, wit, and unpredictable behavior. The catchphrase is often used by Walter White to express his confidence and assertiveness, and it has become iconic in the series.\nUser: What is the most catchphrase of the character on the right? User:\nUser: What kind of life would he have lived if he and Jesse Pinkman hadn't met? ShareGPT4V-7B: If Walter White and Jesse Pinkman had never met, Walter White's life would have likely been very different. In the original timeline of the show, Walter White was a high school chemistry teacher who was diagnosed with cancer and decided to produce and sell methamphetamine to provide for his family after his death. He chose Jesse Pinkman, a former student and small-time criminal, as his partner in this venture. Without Jesse, Walter would have had to find another partner or work alone to produce and distribute the methamphetamine. He might have had to rely on his own skills and knowledge to create the product and find customers. However, without Jesse's skills, connections, and resourcefulness, Walter's operation would likely have been smaller and less successful.\nAdditionally, without Jesse, Walter's character development would have been significantly altered. Jesse's presence and influence on Walter's life helped him to grow and change into the complex and morally ambiguous character that we see in the show. The relationship between Walter and Jesse is a central theme of Breaking Bad, and their dynamic drives much of the plot and character development. " } ]
COCO: Young children standing on a platform waiting for a train to arrive. Adults and children watching a train slowly leave. A family near a railroad track watching the train pass. People waiting on a platform as a train pulls up. A train station with a green chain on the tracks and children waiting for it to go by.
ShareGPT4V: Improving Large Multi-Modal Models with Better Captions
[ { "figure_caption": "Figure 1 .1Figure 1. (a)We showcase a comparison between the caption in our proposed ShareGPT4V dataset and those utilized by recent large multi-modal models (LMMs). Unlike COCO-Caption [7] involves brief human-made captions on the main subject. LLaVA-Instruct[31] combines human-made captions, bounding boxes, and GPT4[39] to 'imagine' the image details, which leads to inevitable error/hallucination description (marked in red). Our approach involves feeding carefully designed prompts along with images directly into the advanced GPT4-Vision[40] and the descriptions are more detailed and accurate (marked in blue). (b) We highlight the remarkable performance of the proposed LMM, ShareGPT4V-7B, developed with the assistance of the ShareGPT4V dataset.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "-4 [5], leveraging an LLM[8] and a visual encoder[14], has shown proficiency in image-text dialogues through pre-training alignment and instruction fine-tuning. Subsequent research[3,6, 10, 25,31, 43,57] has further enhanced LMMs by focusing on the quality and diversity of pretraining and finetuning data. For instance, LLaVA[31] and InstructBLIP [10], with improved instruction fine-tuning, have advanced the understanding of complex prompts. mPLUG-Owl[57], Shikra[6], and KOSMOS-2 [43] have introduced new data types and training techniques, like grounding data, to reduce hallucinations and improve LMMs' grounding capability. Regrettably, it appears that the current LMMs have somewhat overlooked a crucial element: the quality of captions in image-text pairs.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. An overview for crafting the ShareGPT4V dataset. (a) We illustrate the procedure for collecting highly descriptive captions from GPT4-Vision [40] via various image sources and data-specific prompts, resulting in 100K high-quality captions that encapsulate a wide array of information conveyed by the images. (b) We delineate the process of utilizing the seed captions to train a general captioner and then employing this captioner to generate 1.2M high-quality captions for pre-training usage.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "BLIP: a young boy holding a baseball bat on a field. COCO: A young boy swinging a baseball bat during a baseball game.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. A qualitative comparison of caption quality from various sources. The COCO [7] captions were generated by humans and the remaining captions were produced by large multi-modal models using the same prompts and images. Mistakes within the captions are highlighted in red, whereas detailed and accurate parts are emphasized in blue. Notably, the image used for this comparison was not included in the training set of our Share-Captioner.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Examples of multi-modal dialogue with ShareGPT4V-7B model. High-quality content is highlighted in blue for clarity.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. The pre-training data scaling performance on MMBench and SEED Bench. 
The model shows consistent gain with more pre-training data.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Prompts for instructing GPT4-Vision to generate detailed descriptions. The Prompts are designed with base prompts at the beginning and end, with a data-specific prompt placed in between.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. Examples of captions generated by Share-Captioner. Detailed and accurate parts within the captions are emphasized in blue.", "figure_data": "", "figure_id": "fig_9", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure 10. Examples of captions generated by ShareGPT4V. Detailed and accurate parts within the captions are emphasized in blue.", "figure_data": "", "figure_id": "fig_10", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "ShareGPT4V-7B: The two characters in the picture are Steve Jobs and Isaac Newton.User: Who are the 2 characters in the picture? User:User: What different ideas did they get from the apple? ShareGPT4V-7B: Steve Jobs and Isaac Newton got the idea of the Apple computer and the concept of gravity, respectively, from an apple.", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 11 .11Figure 11. Example of Multi-Round Dialog of ShareGPT4V-7B. Detailed and accurate parts within the captions are emphasized in blue.", "figure_data": "", "figure_id": "fig_12", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "-Vision Share-Captioner Comparable", "figure_data": "Percentage38.2%35.3%26.5%", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Human evaluation on Share-Captioner vs. GPT4-Vision over 100 validation samples and 10 volunteers. Language Model LLaVA W MME P MME C MMB MMB CN SEED I MM-Vet QBench SQA I VQA V 2 VizWiz", "figure_data": "the captioning process. The entire caption generation pro-cess required around 44 A100 GPU days and we name thispart of data as ShareGPT4V-PT.Qualitative Analysis. For qualitative analysis, Figure 4presents caption results from human-made COCO-Captions[7], BLIP [26], LLaVA-1.5-7B [30], Share-Captioner, andGPT4-Vision. It is important to note that the images fea-tured in this figure were not part of the training dataset forShare-Captioner. The results depicted in Figure 4 demon-strate that Share-Captioner produced results that are closelycomparable to those generated by GPT4-Vision, aligningwith our anticipated capabilities for the captioning process.", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison with SoTA methods on 11 benchmarks. With 7B parameters, ShareGPT4V-7B outperforms competitors in 9 out of 11 benchmarks and ranks second on the others, despite these competitors using larger training datasets or more parameters. Benchmark names are abbreviated due to space limits. LLaVA W : LLaVA-Bench (In-the-Wild)[31]; MME P : MME Perception [15]; MME C : MME Cognition [15]; MMB: MMBenchmark [33]; MMB CN : MMBench-Chinese [33]; SEED I : SEED-Bench (Image) [24]", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation study of the training strategy. 
The ShareGPT4V dataset improves the model performance in both the pre-training and supervised fine-tuning stages.", "figure_data": "Pre-training with ShareGPT4V-PTSFT with ShareGPT4VMME P MMB SEED I✗✗1510.7 64.366.2✗✓1542.1 66.866.7✓✗1557.2 67.468.5✓✓1567.4 68.869.7MethodMME P MMBench SEED IBasline1516.965.366.8+BLIP-558K1521.666.266.9+ShareGPT4V-PT-558K1539.868.368.9", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation on the pre-training caption quality. Based on the baseline, the second and third rows share the same end-to-end training strategy and images, but different captions from the BLIP captioner or our ShareGPT4V-PT dataset.", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation study about the number of learnable blocks in the vision encoder.", "figure_data": "Tune from Block Memory Usage MME P MMB SEED I2449.6 GB1515.2 66.668.11853.2 GB1556.0 67.269.31256.7 GB1567.4 68.869.7660.0 GB1529.5 67.769.6063.6 GB1545.7 68.569.2", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Comparison of lexical composition of the captions generated by GPT4-Vision and Share-Captioner.", "figure_data": "", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" } ]
Lin Chen; Jinsong Li; Xiaoyi Dong; Pan Zhang; Conghui He; Jiaqi Wang; Feng Zhao; Dahua Lin
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Sharegpt", "year": "2023" }, { "authors": "Jinze Bai; Shuai Bai; Yunfei Chu; Zeyu Cui; Kai Dang; Xiaodong Deng; Yang Fan; Wenbin Ge; Yu Han; Fei Huang", "journal": "", "ref_id": "b1", "title": "", "year": "2023" }, { "authors": "Jinze Bai; Shuai Bai; Shusheng Yang; Shijie Wang; Sinan Tan; Peng Wang; Junyang Lin; Chang Zhou; Jingren Zhou", "journal": "", "ref_id": "b2", "title": "Qwen-vl: A frontier large vision-language model with versatile abilities", "year": "2023" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Jun Chen; Deyao Zhu1; Xiaoqian Shen1; Xiang Li; Zechun Liu2 Pengchuan; Raghuraman Krishnamoorthi2 Zhang; Yunyang Vikas Chandra2; Mohamed Xiong; Elhoseiny", "journal": "", "ref_id": "b4", "title": "Minigpt-v2: Large language model as a unified interface for vision-language multi-task learning", "year": "2023" }, { "authors": "Keqin Chen; Zhao Zhang; Weili Zeng; Richong Zhang; Feng Zhu; Rui Zhao", "journal": "", "ref_id": "b5", "title": "Shikra: Unleashing multimodal llm's referential dialogue magic", "year": "2023" }, { "authors": "Xinlei Chen; Hao Fang; Tsung-Yi Lin; Ramakrishna Vedantam; Saurabh Gupta; Piotr Dollár; C Lawrence; Zitnick ", "journal": "", "ref_id": "b6", "title": "Microsoft coco captions: Data collection and evaluation server", "year": "2015" }, { "authors": "Wei-Lin Chiang; Zhuohan Li; Zi Lin; Ying Sheng; Zhanghao Wu; Hao Zhang; Lianmin Zheng; Siyuan Zhuang; Yonghao Zhuang; Joseph E Gonzalez", "journal": "", "ref_id": "b7", "title": "Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality", "year": "2006" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b8", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Wenliang Dai; Junnan Li; Dongxu Li; Anthony Meng; Huat Tiong; Junqi Zhao; Weisheng Wang; Boyang Li; Pascale Fung; Steven Hoi", "journal": "", "ref_id": "b9", "title": "Instructblip: Towards generalpurpose vision-language models with instruction tuning", "year": "2023" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b10", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Zhengxiao Du; Yujie Qian; Xiao Liu; Ming Ding; Jiezhong Qiu; Zhilin Yang; Jie Tang", "journal": "", "ref_id": "b11", "title": "Glm: General language model pretraining with autoregressive blank infilling", "year": "2021" }, { "authors": "Lijie Fan; Dilip Krishnan; Phillip Isola; Dina Katabi; Yonglong Tian", "journal": "", "ref_id": "b12", "title": "Improving clip training with language rewrites", "year": "2023" }, { "authors": "Yuxin Fang; Wen Wang; Binhui Xie; Quan Sun; Ledell Wu; Xinggang Wang; Tiejun Huang; Xinlong Wang; Yue Cao", "journal": "", "ref_id": "b13", "title": "Eva: Exploring the limits of masked visual representation learning at scale", "year": "2023" }, { "authors": "Chaoyou Fu; Peixian Chen; Yunhang Shen; Yulei Qin; Mengdan Zhang; Xu Lin; Zhenyu Qiu; Wei Lin; Jinrui Yang; Xiawu Zheng; Ke Li; Xing 
Sun; Rongrong Ji", "journal": "", "ref_id": "b14", "title": "Mme: A comprehensive evaluation benchmark for multimodal large language models", "year": "2023" }, { "authors": "Yitzhak Samir; Gabriel Gadre; Alex Ilharco; Jonathan Fang; Georgios Hayase; Thao Smyrnis; Ryan Nguyen; Mitchell Marten; Dhruba Wortsman; Jieyu Ghosh; Zhang", "journal": "", "ref_id": "b15", "title": "Datacomp: In search of the next generation of multimodal datasets", "year": "2023" }, { "authors": "Yash Goyal; Tejas Khot; Douglas Summers-Stay; Dhruv Batra; Devi Parikh", "journal": "", "ref_id": "b16", "title": "Making the v in vqa matter: Elevating the role of image understanding in visual question answering", "year": "2017" }, { "authors": "Danna Gurari; Qing Li; Abigale J Stangl; Anhong Guo; Chi Lin; Kristen Grauman; Jiebo Luo; Jeffrey P Bigham", "journal": "", "ref_id": "b17", "title": "Vizwiz grand challenge: Answering visual questions from blind people", "year": "2018" }, { "authors": "Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; Quoc Le; Yun-Hsuan Sung; Zhen Li; Tom Duerig", "journal": "PMLR", "ref_id": "b18", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021" }, { "authors": "Sahar Kazemzadeh; Vicente Ordonez; Mark Matten; Tamara Berg", "journal": "", "ref_id": "b19", "title": "Referitgame: Referring to objects in photographs of natural scenes", "year": "2014" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo", "journal": "", "ref_id": "b20", "title": "Segment anything", "year": "2023" }, { "authors": "Ranjay Krishna; Yuke Zhu; Oliver Groth; Justin Johnson; Kenji Hata; Joshua Kravitz; Stephanie Chen; Yannis Kalantidis; Li-Jia Li; David A Shamma", "journal": "International journal of computer vision", "ref_id": "b21", "title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations", "year": "2017" }, { "authors": "Zhengfeng Lai; Haotian Zhang; Wentao Wu; Haoping Bai; Aleksei Timofeev; Xianzhi Du; Zhe Gan; Jiulong Shan; Chen-Nee Chuah; Yinfei Yang", "journal": "", "ref_id": "b22", "title": "From scarcity to efficiency: Improving clip training via visual-enriched captions", "year": "2023" }, { "authors": "Bohao Li; Rui Wang; Guangzhi Wang; Yuying Ge; Yixiao Ge; Ying Shan", "journal": "", "ref_id": "b23", "title": "Seed-bench: Benchmarking multimodal llms with generative comprehension", "year": "2023" }, { "authors": "Bo Li; Yuanhan Zhang; Liangyu Chen; Jinghao Wang; Jingkang Yang; Ziwei Liu", "journal": "", "ref_id": "b24", "title": "Otter: A multi-modal model with in-context instruction tuning", "year": "2023" }, { "authors": "Junnan Li; Dongxu Li; Caiming Xiong; Steven Hoi", "journal": "PMLR", "ref_id": "b25", "title": "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation", "year": "2022" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b26", "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "Liunian Harold; Li ; Pengchuan Zhang; Haotian Zhang; Jianwei Yang; Chunyuan Li; Yiwu Zhong; Lijuan Wang; Lu Yuan; Lei Zhang; Jenq-Neng Hwang", "journal": "", "ref_id": "b27", "title": "Grounded language-image pre-training", "year": "2022" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge 
Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b28", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Haotian Liu; Chunyuan Li; Yuheng Li; Yong Jae Lee", "journal": "", "ref_id": "b29", "title": "Improved baselines with visual instruction tuning", "year": "2009" }, { "authors": "Haotian Liu; Chunyuan Li; Qingyang Wu; Yong Jae Lee", "journal": "", "ref_id": "b30", "title": "Visual instruction tuning", "year": "2007" }, { "authors": "Shilong Liu; Zhaoyang Zeng; Tianhe Ren; Feng Li; Hao Zhang; Jie Yang; Chunyuan Li; Jianwei Yang; Hang Su; Jun Zhu", "journal": "", "ref_id": "b31", "title": "Grounding dino: Marrying dino with grounded pre-training for open-set object detection", "year": "2023" }, { "authors": "Yuan Liu; Haodong Duan; Yuanhan Zhang; Bo Li; Songyang Zhang; Wangbo Zhao; Yike Yuan; Jiaqi Wang; Conghui He; Ziwei Liu", "journal": "", "ref_id": "b32", "title": "Mmbench: Is your multi-modal model an all-around player?", "year": "2023" }, { "authors": "Pan Lu; Swaroop Mishra; Tanglin Xia; Liang Qiu; Kai-Wei Chang; Song-Chun Zhu; Oyvind Tafjord; Peter Clark; Ashwin Kalyan", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b33", "title": "Learn to explain: Multimodal reasoning via thought chains for science question answering", "year": "2022" }, { "authors": "Gen Luo; Yiyi Zhou; Tianhe Ren; Shengxin Chen; Xiaoshuai Sun; Rongrong Ji", "journal": "", "ref_id": "b34", "title": "Cheap and quick: Efficient visionlanguage instruction tuning for large language models", "year": "2023" }, { "authors": "Kenneth Marino; Mohammad Rastegari; Ali Farhadi; Roozbeh Mottaghi", "journal": "", "ref_id": "b35", "title": "Ok-vqa: A visual question answering benchmark requiring external knowledge", "year": "2019" }, { "authors": "Anand Mishra; Shashank Shekhar; Ajeet Kumar Singh; Anirban Chakraborty", "journal": "IEEE", "ref_id": "b36", "title": "Ocr-vqa: Visual question answering by reading text in images", "year": "2019" }, { "authors": "Thao Nguyen; Yitzhak Samir; Gabriel Gadre; Sewoong Ilharco; Ludwig Oh; Schmidt", "journal": "", "ref_id": "b37", "title": "Improving multimodal datasets with image captioning", "year": "" }, { "authors": " Openai", "journal": "", "ref_id": "b38", "title": "Chatgpt", "year": "2004" }, { "authors": " Openai", "journal": "", "ref_id": "b39", "title": "Gpt-4v(ision) system card", "year": "2023" }, { "authors": "Vicente Ordonez; Girish Kulkarni; Tamara Berg", "journal": "Advances in neural information processing systems", "ref_id": "b40", "title": "Im2text: Describing images using 1 million captioned photographs", "year": "2011" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b41", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Zhiliang Peng; Wenhui Wang; Li Dong; Yaru Hao; Shaohan Huang; Shuming Ma; Furu Wei", "journal": "", "ref_id": "b42", "title": "Kosmos-2: Grounding multimodal large language models to the world", "year": "2023" }, { "authors": "Alec Radford; Karthik Narasimhan; Tim Salimans; Ilya Sutskever", "journal": "", "ref_id": "b43", "title": "Improving language understanding by generative pre-training", "year": "2018" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; 
Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b44", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b45", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Babak Saleh; Ahmed Elgammal", "journal": "", "ref_id": "b46", "title": "Large-scale classification of fine-art paintings: Learning the right metric on the right feature", "year": "2015" }, { "authors": "Christoph Schuhmann; Richard Vencu; Romain Beaumont; Robert Kaczmarczyk; Clayton Mullis; Aarush Katta; Theo Coombes; Jenia Jitsev; Aran Komatsuzaki", "journal": "", "ref_id": "b47", "title": "Laion-400m: Open dataset of clip-filtered 400 million image-text pairs", "year": "2021" }, { "authors": "Dustin Schwenk; Apoorv Khandelwal; Christopher Clark; Kenneth Marino; Roozbeh Mottaghi", "journal": "Springer", "ref_id": "b48", "title": "A-okvqa: A benchmark for visual question answering using world knowledge", "year": "2022" }, { "authors": "Piyush Sharma; Nan Ding; Sebastian Goodman; Radu Soricut", "journal": "", "ref_id": "b49", "title": "Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning", "year": "2018" }, { "authors": "Oleksii Sidorov; Ronghang Hu; Marcus Rohrbach; Amanpreet Singh", "journal": "Springer", "ref_id": "b50", "title": "Textcaps: a dataset for image captioning with reading comprehension", "year": "2020" }, { "authors": "Internlm Team", "journal": "", "ref_id": "b51", "title": "Internlm: A multilingual language model with progressively enhanced capabilities", "year": "2023" }, { "authors": "Hugo Touvron; Louis Martin; Kevin Stone; Peter Albert; Amjad Almahairi; Yasmine Babaei; Nikolay Bashlykov; Soumya Batra; Prajjwal Bhargava; Shruti Bhosale", "journal": "", "ref_id": "b52", "title": "Llama 2: Open foundation and fine-tuned chat models", "year": "2023" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b53", "title": "Attention is all you need", "year": "2017" }, { "authors": "Haoning Wu; Zicheng Zhang; Erli Zhang; Chaofeng Chen; Liang Liao; Annan Wang; Chunyi Li; Wenxiu Sun; Qiong Yan; Guangtao Zhai", "journal": "", "ref_id": "b54", "title": "Q-bench: A benchmark for general-purpose foundation models on low-level vision", "year": "2023" }, { "authors": "Aiyuan Yang; Bin Xiao; Bingning Wang; Borong Zhang; Chao Yin; Chenxu Lv; Da Pan; Dian Wang; Dong Yan; Fan Yang", "journal": "", "ref_id": "b55", "title": "Baichuan 2: Open large-scale language models", "year": "2023" }, { "authors": "Qinghao Ye; Haiyang Xu; Guohai Xu; Jiabo Ye; Ming Yan; Yiyang Zhou; Junyang Wang; Anwen Hu; Pengcheng Shi; Yaya Shi", "journal": "", "ref_id": "b56", "title": "mplug-owl: Modularization empowers large language models with multimodality", "year": "2023" }, { "authors": "Weihao Yu; Zhengyuan Yang; Linjie Li; Jianfeng Wang; Kevin Lin; Zicheng Liu; Xinchao Wang; Lijuan Wang", "journal": "", "ref_id": "b57", "title": "Mm-vet: Evaluating large multimodal models for integrated capabilities", "year": "2023" }, { "authors": "Haotian 
Zhang; Pengchuan Zhang; Xiaowei Hu; Yen-Chun Chen; Liunian Li; Xiyang Dai; Lijuan Wang; Lu Yuan; Jenq-Neng Hwang; Jianfeng Gao", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b58", "title": "Glipv2: Unifying localization and vision-language understanding", "year": "2022" }, { "authors": "Pan Zhang; Xiaoyi Dong Bin; Yuhang Wang; Chao Cao; Linke Xu; Zhiyuan Ouyang; Shuangrui Zhao; Songyang Ding; Haodong Zhang; Hang Duan; Yan", "journal": "", "ref_id": "b59", "title": "Internlmxcomposer: A vision-language large model for advanced text-image comprehension and composition", "year": "2023" }, { "authors": "Renrui Zhang; Jiaming Han; Aojun Zhou; Xiangfei Hu; Shilin Yan; Pan Lu; Hongsheng Li; Peng Gao; Yu Qiao", "journal": "", "ref_id": "b60", "title": "Llama-adapter: Efficient fine-tuning of language models with zero-init attention", "year": "2023" }, { "authors": "Deyao Zhu; Jun Chen; Xiaoqian Shen; Xiang Li; Mohamed Elhoseiny", "journal": "", "ref_id": "b61", "title": "Minigpt-4: Enhancing vision-language understanding with advanced large language models", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 54.73, 543.76, 226.01, 42.77 ], "formula_id": "formula_0", "formula_text": "[7] COCO [29] ✓ Human 118K 52 BLIP-LCS [26] LCS ✓ BLIP [26] 558K 54 LLaVA-23K [31] COCO [29] × GPT4 [39] 23K 609 ShareGPT4V LCS, COCO [29], etc ✓ GPT4-Vision [40] 100K 942 ShareGPT4V-PT LCS, COCO [29], etc ✓ Share-Captioner 1,246K 826" } ]
10.1109/TDSC.2019.2933621
[ { "figure_ref": [], "heading": "", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b54", "b55", "b9" ], "table_ref": [], "text": "in defocus-blur images is sharp and clear in the lens focal plan; whereas, the background is blurred and faraway from the focal length which signifies the out-of-focus region in the image. The focallength distance from the object indicates the level of DoF in defocus-blurred images. The DoF level is high if the object is faraway from the focal length. Defocus-blur detection is used in numerous computer vision applications, such as in-focused object detection [1], background blur magnification [2], image refocusing [3], depth estimation [4,5], image information security [6], text detection [7], partial image deblurring [8,9] and region-of-interest detection in light-field images, and also image edge detection [55,56].\nThe Discrete Cosine Transform (DCT) vector takes average weight by Gaussian function for modeling the DoF effect, any single descriptor cannot signify DoF subsequently, and the Point Spread Function (PSF) is spatially varying constantly. The Pulse Coupled Neural Network (PC Neural Net model) is a self-organizing network comprising a lightweight structure that does not require any learning process. Hence, this study excluded measuring the blur kernels. As an alternative, in this research, an efficient defocusblur segmentation approach from a single image is proposed, which does not require any prior information related to the degree of DoF.\nThe classical defocus-blur detection techniques can be categorized into two major classifications: edge-based techniques and pixel-based techniques. The prior detects the blur measure of the descriptive pixels to find sparse blur edgebased estimation and disseminate the knowledge to the entire defocus-blur image; whereas, pixelbased techniques scan local patches of image from top to bottom and left to right, to measure defocusblurriness of each and every pixel, yielding direct dense maps of defocus-blur. Pixel-based techniques have been actively adopted in various recent research, particularly the defocus-blur region detection used at the pixel level in defocus images [10]. Contributions of this study include:\n• We propose a hybrid, efficient, novel, and accurate defocus-blur detection technique from a single defocused image, based on Discrete Cosine Transform (DCT) coefficients measures along with a neuron firing based Pulse coupled Neural Network (PC Neural Net) to determine the major limitations of defocus-blur segmentation approach. • The defocus-blur detection approach is based on positive threshold parameters, as it is one criterion for the region detection procedure. DCT has the characteristics of symmetry and separability to detect the defocus-blur data in DCT coefficients without any degradation. • Next, the DCT feature vector estimates the out-of-focus region in the defocus-blur image and then accurately detects the partial defocus-blur area. • Subsequently, PC Neural Net-based firing of neuron sequence structure is applied that contains information about each pixel feature after the blurred region detection, e.g., region, texture, and edges, that utilized the features of defocus-blur image to prominently segment the blurred region. • It is evident from the experimental results that the proposed defocus-blur map yields prominent segmentation results; whereas, adopting limited processing time and computation in numerous out-of-focused platforms. 
The proposed approach measures defocus-blur detection metric to visually represent the consistent segmented regions. • Finally, the EDAS fuzzy technique is used to evaluate the ranking of the proposed approach alongside various recent state-ofthe-art techniques for defocus-blur segmentation. It also calculates appraisal scores (AS) for numerous performance estimations incorporating precision, recall, as well as ℱ 𝛼 -score and indicates that the proposed approach outperforms the referenced methods.\nThe rest of the paper is structured as follows: Section 2 illustrates the literature review of defocus-blur images along with PC Neural Net followed by DCT and EDAS techniques. The proposed framework, including its algorithm and implementation procedure, is described in Section 3. Section 4 contains the evaluation of the segmented results of the proposed study and discusses the datasets, algorithms, comparative results, and EDAS scheme for ranking the state-ofthe-art schemes. Finally, the conclusion is presented in Section 5." }, { "figure_ref": [], "heading": "II. LITERATURE REVIEW Presently,", "publication_ref": [ "b10", "b9", "b9", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b25", "b26", "b20", "b21", "b22", "b23", "b22", "b24", "b24", "b25", "b26", "b27", "b28", "b28", "b29", "b30", "b31", "b47", "b48", "b49", "b50", "b51", "b32", "b33", "b35", "b36", "b37", "b0", "b38", "b39", "b40" ], "table_ref": [], "text": "defocus-blur segmentation is predominantly used for focused object detection. According to the literature reviewed, the state-ofthe-art defocus-blur techniques of a single image are categorized into edge-based techniques, pixelbased techniques and also learning schemes.\nThe blur amount of the entire blurred pixels is directly estimated by the pixel-based schemes. The dense metric is achieved without propagating the blurriness map, which also avoided the error produced by spreading in limited points. Chakrabarti et al. [11] concatenated the Gaussian-scale Mixture and sub-banddecomposition to measure the specific window probability in a re-blurred image caused blurriness by applying a candidate-kernel. Su et al. [10] analyzed the information of a particular and singular value of each and every pixel of the defocused image to segment the regions of a reblurred image. A novel blur map based on [10] is presented in [12,13] that fused certain particular and singular values of numerous subbands using image windows of multi-scale. The presented algorithm merged local image filters, gradient distribution, and a spectrum of defocused blur images into a multi-scale pattern to distinguish between in-focused and out-of-focused images. Yi and Eramian [14] proposed a Local Binary Pattern (LBP) and observed the fewer LBPs in the outfocused region compared to the focused region. Blur region detection mainly used spectral features. Marichal et al. [15] observed the highfrequency coefficients which were assigned as zero, regardless of the content. Henceforth, the histogram-based algorithm was proposed, adopting the non-zero DCT coefficients. Vu et al. [16] estimated the amplitude-spectrum slop and the complete spatial variation for each block of the defocused image. Javaran et al. [17] designed the principles for high-frequency information that remain the same in re-blurred images and is used for out-of-focused region detection. Golestaneh et al. 
[18] developed a High-Frequency multi-scale Fusion Sort-Transform (HiFST) of gradient magnitudes in the detection of out-of-focused regions.\nIn edge-based schemes, the aim is to estimate the edges of the images along with the sparse-blur mapping. The edges of the defocused-blurred images have gradient measures, and visual changes occur in the defocused-blur region, which can help out with prominent defocus-blur estimation at the edges. In [19,20,26,27], the novel presented defocused-blur edge is formed as the complex in-focused image. A Gauss function and its required proportional parameters are measured by analyzing the rate of change of edge intensity of the image. In [21], a cross bilateral filtering is applied to eliminate outliers. The colorization approach-based interpolation scheme to determine an entire defocus-blur map has been presented. To determine the correspondence between the numerous contrasts at the edge points and the extent of spatially varying defocus-blur, the blur estimation was measured at the particular edge points. In [22,23], an entire defocus map is produced by disseminating the blur measure at edge points in the whole non-homogeneous image. The defocus-blurred edge is generated with respect to the gradient proportion between the Gaussian-kernel-based defocused-blurred and the original input image. Their research also presented the Mating-Laplacian (MatLap) scheme to disseminate information to other parts of the image. Karaali et al. [24], suggested the defocusblur parameters selection based on [23], where the interpolation and extrapolation techniques were adopted to extract the out-focused information at edges for dissemination. A faster guide filtering technique was also applied to disseminate the sparse-blur mapping in the entire defocused-blur image for reducing the computational complexity. Tang et al. [25] suggested a limited number of blur points which was estimated for yielding the blur map detection region, which is related to the edgedetection schemes. A coarse-blur metric has been presented in their article [25], which is a residue to get a log-averaged spectrum based on a blur map. In fact, a blur measure decreases the highly frequent components of a defocused image. Therefore, an iterative updating-based novel approach was suggested to enhance the blur metric from coarse to fine region by adopting the intrinsic-relevance of relevant referenced regions of re-blurred image. Liu et al. [26] and Xu et al. [27] both group of researchers applied the MatLap scheme to achieve an extensive defocus-blur estimation.\nNowadays, learning-based schemes have been extensively used in various research, as evident from the literature. These techniques trained the classifiers to detect out-of-focused regions. Liu et al. [28] designed out-of-focused features based on the spectrum, color, and gradient information of the defocused image. They also applied training of parametric features for the accurate classification of defocused images. Shi et al. [29] presented Just Noticeable Blur (JNB) which propagates fewer quantity of pixels yielded by out-of-focused images. In their research [29], a correlation between the strong blur measures and sparse edge illustration is established by training a dictionary. Dandres et al. [30] and Tang et al. 
[31] adopted machine as well as deep learning schemes using blur strength computation and a regressiontree fields extraction based on local frequency image statistics, for training a model to retrogress a consistent out-of-focused metric of the image. The defocus-blur metric of the out-of-focused image was measured to infer the proper disk PSF radius at each pixel level. Ma et al. [32] presented an approach based on sub-band DCT fusion ratio, multi-orientation, and multi-scale windows for calculating the blurred edge points. This approach produced dense-blur maps by applying matting Laplacian and multi-scale fusion algorithms. Similarly, Jinxing et al. [48] proposed contrastive similarity for multi-patch and multi-scale learning methods for unsupervised detection of defocusedblur images in order to eliminate the manual annotations of pixel-level data. A generator first exploits the mask to reproduce the combined images by conveying the approximated blurred and sharp regions of the test image with completely natural full-blurred and full-sharp images, respectively. Moreover, Xianrui et al. [49] presented the Defocus-to-Focus (D2F) model for bokeh rendering learning, to fuse the defocuspriors with the in-focused region and implement the radiance-priors in the form of layered fusion. A large-scale bokeh dataset is adopted for evaluation, which indicated that the proposed model is able to render the visual bokeh effects in challenging scenes. Furthermore, Sankaraganesh et al. [50] illustrated the defocus-blur detection technique that measured the approximation of each pixel belonging to a sharp or blurred region in resource-constrained devices. Their model efficiently detected the blur map from the source defocused-blur image. Likewise, Wenda et al. [51] proposed a set of separate and combined models, i.e., a pixel-level DBD network and an image-level DBD classification network, to accomplish accurate results for various defocus-blur images. Their proposed study was evaluated using their own DBD dataset called DeFBD+, along with annotations at the pixel level, and outperformed. Additionally, Yanli et al. [52] presented a depth restoration method for a single defocused-blur image based on the superpixel segmentation method. At first, the simple linear iterative cluster (SLIC) separates the source image into numerous superpixel phases. Next, the defocus-blur effect of each superpixel phase is obtained as per the Gaussian-Cauchy mixed framework, to achieve the sparse depth map of the superpixel level.\nPulse Coupled Neural Network (PC Neural Net) is a visual cortex model of mammalians to provide synchronization pulse bursts in the monkey and cat visual cortex and a neuron-firing feedback network structure. PC Neural Net contains three main components: Receptive branch, Modulation field, and Pulse producer. In the receptive field, the input signals are received by neurons through linking and feeding subsystems. The PC neural Net is capable for recognizing the visual nervous structure and also has the characteristics of neuron pulse synchronization and global coupling. It is mainly adopted in image segmentation, image fusion, image denoising, object recognition, and image enhancement, etc. [33], [34]- [36]. Shen et al. [37] presented the PC Neural Net application in refocusing images for defocus region segmentation. PC Neural Net estimates the spatial properties of pixels in image segmentation. 
However, the above state-of-the-art methods effectively extract the defocus-blur metric in defocus-blur images, generally, these techniques have some complexities for prominent detection of focused and out-of-focused regions. The referenced defocus-blur detection algorithms have some common limitations, such as extending the blur metric duration, background clutter, indistinguishable in-focused regions of lowcontrast images from defocus-blur regions, high computational cost, and misclassification in region segmentation.\nEvaluation Based on Distance from Average Solution (EDAS) based fuzzy logic scheme is adopted in this research in order to rank the stateof-the-art algorithms. Authors of [38] applied EDAS scheme for ranking numerous clustering techniques, while authors of [1] and [39] utilize the application of the EDAS method to rank various defocused-blurred techniques for in-focused region detection. Mehmood et al. [40] adopted the EDAS scheme for the evaluation of numerous WBAN (wireless body area network) techniques and Ileiva et al. [41] used it for decision analysis of different fuzzy-based methods to resolve the Multiple-Criteria-Decision Making Method (MCD) issues and also to subside its computational complexity." }, { "figure_ref": [], "heading": "III. PROPOSED DEFOCUS-BLUR METRIC", "publication_ref": [], "table_ref": [], "text": "The visual system of human gets more attracted to the image frame and object when viewing a defocus-blur image and focus more attention on the detailed information of focused objects, for the visual quality analysis. In the visual effects of defocus-blur images, there is a visible difference in the absence of details in defocus-blur region compared to those of the focused region." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "A. DCT-BASED SCHEME", "publication_ref": [ "b18", "b0", "b1", "b42" ], "table_ref": [], "text": "Pentland [19] proposed that a defocus-blur image patch can be represented as the convolution between a Gauss blur kernel and a focus image patch. The convolution eliminates the prominent frequency information in the focused region. The Gauss-blur kernel parameters signify the defocusblur degree of an image up to some range, as represented in formula (1). 𝐼 𝐵𝑙𝑟 = 𝐹 𝐺 × 𝐼 𝑁 + 𝜇\n(1) where 𝐼 𝐵𝑙𝑟 and 𝐼 𝑁 denote blurred and non-blurred image patches, while 𝐹 𝐺 is the Gauss function and 𝜇 represents the noisy image, which can be derived from the below formula (2).\n𝐹 𝐺(𝑢,𝑣,𝜎)= 1 2𝜋𝜎 2 𝑒 . -𝑢 2+𝑣 2 2𝜎 2 ⁄(2)\nIn the Gaussian function, the standard deviation is symbolized as 𝜎, whereas the greater 𝜎 represents the detail information of the image which was eliminated after the convolution process, i.e. the defocus image is highly blurred. Spatially varying blur is one of the popular types of defocus-blur images, that adopt the Gaussian function for the filtering process of each pixel, along with various parameters. The rich frequent DCT coefficients in mathematical evaluation are reduced in each image patch blurred by a Gaussian blur kernel, and they are further reduced if the Gauss blur kernel increases the mathematical value of 𝜎. Image (a) in Fig. 1 is a test image taken from a partially defocused-blur public dataset containing 704 images [43], while image (b) represents the ground-truth image which is manually segmented to illustrate the in-focused and out-of-focused regions. The high-frequency elements in the in-focused region are high compared to the out-of-focused region. 
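This behaviour can be checked directly. The short sketch below is our own illustration, not the authors' code: it re-blurs a synthetic patch with Gaussian kernels of increasing 𝜎, as in Eqs. (1)-(2), and reports how much high-frequency DCT energy survives. The energy falls as 𝜎 grows, which is the cue exploited in the rest of this section; the cutoff value and patch size are arbitrary choices for the demonstration.

```python
# Illustrative sketch (not the authors' pipeline): Gaussian blurring of a patch,
# as in Eq. (1)-(2), attenuates its high-frequency DCT content, and the
# attenuation grows with the kernel standard deviation sigma.
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import gaussian_filter

def high_freq_energy(patch, cutoff=4):
    """Sum of |DCT coefficients| whose frequency index u+v exceeds `cutoff`."""
    coeffs = dctn(patch.astype(np.float64), norm='ortho')
    u, v = np.indices(coeffs.shape)
    return np.abs(coeffs[(u + v) > cutoff]).sum()

rng = np.random.default_rng(0)
sharp = rng.random((16, 16))            # stand-in for an in-focus texture patch
for sigma in (0.5, 1.0, 2.0, 4.0):      # larger sigma -> stronger defocus blur
    blurred = gaussian_filter(sharp, sigma)   # I_blr = F_G * I_N (noise term omitted)
    print(f"sigma={sigma:3.1f}  high-freq DCT energy = {high_freq_energy(blurred):.3f}")
```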
DCT coefficient highlights the high-frequency components of the transitive and in-focused regions more than those of the out-of-focused regions, where significant details are lost in defocused-blur images. The mathematical evaluation validates that the out-of-focus area attenuates high-frequency information compared to its corresponding in-focus area. These details can differentiate between out-of-focused and infocused regions of the defocused-blur image. This study adopted the DCT coefficient to estimate the defocus image blurriness patch; whereas, PCNN measures the image in-focus patch as pre-processing. Moreover, the DCT feature vector detects the edge features of the high gradient data to avoid the measured error produced by the focused textureless patch. To resolve the spatially varying blurred issue, we presented the defocus-blur image patch at the edge level, along with various defocus degrees de-blurred by numerous 𝜎 𝑏𝑙𝑟 , represented by the Gauss kernel.\nIn this study, the convolution is performed on the blurred as well as the non-blurred image patches using the Gaussian function, to achieve the consistent out-of-focus measured in the de-blurred type. The DCT vector-based coefficients proportion between the input image and the deblurred image are estimated as the out-of-focused measure of the middle pixel in the image patch. This process is executed one by one, on the edge and pixel level, to achieve a blur metric. Lastly, PC Neural Net is applied to classify the in-focused image regions from the entire defocus-blur image. The block diagram of the proposed approach is illustrated in Fig. 2." }, { "figure_ref": [], "heading": "DCT COEFFICIENT-BASED BLUR MAP", "publication_ref": [ "b41", "b13" ], "table_ref": [], "text": "DCT operates high as well as low frequency signals, by transforming spatial domain into frequency signals, to illustrate the image structures and details that can frequently be utilized in JPEG image compression via excluding high frequency matrix part [42].\nThe frequency domain represents the highfrequency detail reduction and reflection of the main variations between the in-focused and out-of-focus defocus-blur images and has also been the result of insufficient detail information. DCT coefficients characterized the measure in the detail information loss in the out-of-focused region, which is depicted by the experiments and also illustrated by the prominently in-focused region detection.\nThe DCT produces a transform map between the test image patch and the de-blurred region to observe a proper estimation. The DCT-based blur map can be derived utilizing the following equations:\nℛ 𝑥 = 𝑐 𝑥 𝑐 𝑎 𝑥 (3) 𝑐 𝑎 = 𝑇(𝐶 𝑎 ), 𝑐 = 𝑇(𝐶)(4)\nwhere 𝑇(𝐶) is a transformation function of a matrix in a blur-vector, which is explained in the where the sharpness-vector dimension is denoted by the parameters 𝑛, 𝑙 and ℎ; where 𝑙 and ℎ are identified as the demarcation-value of low-level and high-level frequency, respectively. The co- The minimized and maximized DCT-based distances are merged to yield the distance formula 𝐷𝐶𝑇 (𝑢 𝑖 , 𝑦 𝑖 ) in the DCT-based transformation domain, as follows:\n𝐷𝐶𝑇(𝑢 𝑖 , 𝑦 𝑖 ) = 1 𝛼 𝑖 + 𝛽 𝑖 (𝛼 𝑖 𝐷𝐶𝑇 (𝑢 𝑖 , 𝑦 𝑖 ) + 𝛽 𝑖 𝐷𝐶𝑇 (𝑢 𝑖 , 𝑦 𝑖 )) (10\n)\nwhere \nThe parameter 𝑀𝑖𝑛 𝑖 denotes the maximized searching window at pixel position 𝑖. 
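Before introducing the weight terms of formula (14), a minimal sketch of the anti-diagonal averaging 𝑇(𝐶) of Eq. (5) and of the per-frequency ratio of Eq. (3) is given below. The helper names t_of_c and dcr_like are ours, and a square 𝓂 × 𝓂 patch re-blurred with a single Gaussian kernel is assumed.

```python
# Minimal sketch of the DCT sharpness vector of Eqs. (3)-(5): coefficients with the
# same frequency index u+v are averaged into one element, and the element-wise ratio
# between the test patch and its re-blurred copy serves as the blur cue.
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import gaussian_filter

def t_of_c(patch):
    """T(C): average |DCT| coefficients over each anti-diagonal u+v = const (Eq. 5)."""
    coeffs = np.abs(dctn(patch.astype(np.float64), norm='ortho'))
    u, v = np.indices(coeffs.shape)
    m = patch.shape[0]                       # square m x m patch assumed
    return np.array([coeffs[(u + v) == k].mean() for k in range(2 * m - 1)])

def dcr_like(patch, sigma_blr=2.0, eps=1e-8):
    """Per-frequency ratio R_x = c_x / c_a_x between a patch and its re-blurred copy (Eq. 3)."""
    c = t_of_c(patch)
    c_a = t_of_c(gaussian_filter(patch.astype(np.float64), sigma_blr))
    return c / (c_a + eps)
```

For an in-focus patch the high-frequency elements of this ratio are well above one, because re-blurring removes detail that the patch still contains; for an already defocused patch they stay close to one.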
The maximized DCT-based weights 𝑊 𝑀𝑖𝑛 (𝑢 𝑖 , 𝑦 𝑗 ) are estimated as specified in the formula (14).\n𝑊 𝑀𝑎𝑥 (𝑢 𝑖 , 𝑦 𝑗 ) = ℯ -∑ (𝐷𝐶𝑇(𝑙,ℎ)-𝐷𝐶𝑇(𝑖,ℎ)) 𝑙 ℎ=1 𝐹(𝐷𝐶𝑇)(14)\nThe value of filtering parameter 𝐹(𝐷𝐶𝑇) is similar for 𝑊 𝑀𝑖𝑛 (𝑢 𝑖 , 𝑦 𝑗 ) as well as for 𝑊 𝑀𝑎𝑥 (𝑢, 𝑦 𝑗 ), meanwhile the calculation of DCT-based coefficients is similar for patch size 𝓂 × 𝓂 extractions." }, { "figure_ref": [], "heading": "GAUSSIAN FUNCTION PARAMETERS", "publication_ref": [ "b43", "b13" ], "table_ref": [], "text": "Gaussian function parameter 𝜌 𝔖 value is selected in order to detect the sharp patch of the defocusedblur image. It is required to select numerous sharpness parameters at the edge and pixel positions along with various texture intensities. Once selecting the local sharpness parameters, the primary effect which is needed to be considered is the noise, which can be eliminated by the filtering process. In our experiment, we set the parameter 𝜌 𝔖 = 0.4𝜌 ℭ to produce optimal results. In original images, the local sharpness descriptor at the pixel points is measured to detect the pixel sharpness. The sharpness descriptor classified a defocused image into sharp and blur regions, whereas the sharp region represents the foreground and the blurred one indicates the background, as given below:\n𝐼 𝐷𝑒𝑓 (𝑖, 𝑗) = 𝜒 𝑖,𝑗 𝐼 𝐹𝑔 (𝑖, 𝑗) + (1 -𝜒 𝑖,𝑗 )𝐼 𝐵𝑔 (𝑖, 𝑗)(15)\nwhere 𝜒 𝑖,𝑗 represents the dense foreground on the corresponding pixel location (𝑖, 𝑗). Some pretreatment work is required on the image acquisition prior to entering the input image into the proposed Algorithm 2. In the initial operation, the outlier needs to be removed by applying bilateral filtering on the defocus-blur map [44]. The potential errors of the defocus-blur mapping are further reduced by adopting the double threshold scheme [14], as illustrated in Eq. ( 16).\n𝑀 𝐷𝐶𝑅 (𝑖, 𝑗) 𝑖𝑓 𝑀 𝐷𝐶𝑅 (𝑖, 𝑗) ≥ 𝑇ℎ 1 𝑚𝑎𝑝(𝑖, 𝑗) = 𝑀 𝐷𝐶𝑅 (𝑖, 𝑗) 𝑖𝑓 𝑀 𝐷𝐶𝑅 (𝑖, 𝑗) ≤ 𝑇ℎ 2 (16) 0 otherwise where 𝑀 𝐷𝐶𝑅 (𝑖, 𝑗) is the 𝐷𝐶𝑅 value at pixel position (𝑖, 𝑗)." }, { "figure_ref": [], "heading": "B. PC NEURAL NET (PULSE COUPLED NEURAL NETWORK)-BASED SCHEME", "publication_ref": [ "b44", "b17" ], "table_ref": [], "text": "The PC Neural Net is a coupling nature neuron based on a feedback system. Each coupling neuron contains three sub-systems: the receptive branch, modulation field, and pulse producer [45]. The neuron firing will target the neurons of the same category. The linking and feeding inputs in the receptive branch provide input signals to neuron. Next, the input signals are categorized into two networks: one is the feeding input denoted by ℱ 𝑖𝑗 whereas the other is the linking input identified by 𝔏 𝑖𝑗 . The normalized pixel location (𝑖 𝑗) of the image is the input motivation and is represented as 𝛿 𝑖𝑗 . The internal neuron activity is denoted as 𝒰 𝑖𝑗 while dynamic-thresholding is represented by 𝜗 𝑖𝑗 . The feeding element received the input motivation; whereas the linking and feeding elements are merged by the internal activation element. The PC Neural Net-based image fusion, as depicted in Fig. 4, observes that the external stimulus element is only accepted by the feeding signal ℱ 𝑖𝑗 . The 𝓅 reflects the 𝓅 𝑡ℎ block pixels of the source image. PC Neural Net-based mathematical structure is illustrated in the schematic model presented below (Fig. 4): In image segmentation, PC Neural Net is a single pulse layer-based coupling of nature neurons along with a 2-D connection. The pixel number in the inputted image is equal to the number of neural cells in a network. 
Therefore, it is known as a 1-to-1 correspondence that exists between pixels in an input image and neurons in a network. The linking field connects each neuron along with its adjacent neurons. The firing output of each neuron lies under two states, i.e., firing or '1' state and non-firing or '0' state. The neighboring neurons receive the pulse burst result. If the current neuron denoted as ℭ 𝑖𝑗 and neighboring neurons have similar intensity, firing will perform as a result of pulse-coupled action. Therefore, the neuron ℭ 𝑖𝑗 has been recalled to capture the neighboring neuron cells. Lastly, the synchronization pulses will be emitted by the neuron ℭ 𝑖𝑗 and its neighboring neurons. Consequently, the synchronous pulses and the global coupling are the basic properties of PC Neural Net.\nℱ 𝑖𝑗 𝓅 = 𝛿 𝑖𝑗 𝓅 (17\n)\n𝔏 𝑖𝑗 𝓅 [𝓃] = ѵ 𝔏 ∑ 𝒲 𝑖𝑗𝑥𝑦 𝓅 Υ 𝑥𝑦 𝓅 [𝓃 -1] 𝑎𝑏 + 𝑒𝑥(-𝜕 𝔏 ) 𝔏 𝑖𝑗 𝓅 [𝓃 -1] (18) 𝒰 𝑖𝑗 𝓅 [𝓃] = ℱ 𝑖𝑗 𝓅 [𝓃] (1 + 𝛽 𝑖𝑗 𝓅 𝔏 𝑖𝑗 𝓅 [𝓃]) (19) 𝜗 𝑖𝑗 𝓅 [𝓃] = 𝒱 𝜗 Υ 𝑥𝑦 𝓅 [𝓃 -1] + 𝑒𝑥(-𝜕 𝜗 ) 𝜗 𝑖𝑗 𝓅 [𝓃 -1](20) Υ 𝑥𝑦 𝓅 [𝓃] = 𝒰 𝑖𝑗 𝓅 [𝓃] -𝜗 𝑖𝑗 𝓅 [𝓃](21)\nThe mathematical model of PC Neural Net is represented in Equations ( 17)-( 21), the linkingstrength 𝛽 𝑖𝑗 𝓅 indicates the characteristics of the pixels and the values that lie in between 0 < 𝛽 𝑖𝑗 𝓅 < 1. According to the human vision system, the stimulus about prominent region features is high compared to less prominent region features. Hence, the 𝛽 𝑖𝑗 𝓅 value of each neuron cell in the PC Neural Net model must be connected to corresponding pixel features of the defocus image. The above-focused parameters are adaptively allocated in the proposed Algorithms 2 and 3. Algorithm 2 takes the defocused-blur image as an input and yields the in-focused image as an output, whereas Algorithm 3 illustrates pixel classification called by Algorithm 2. Algorithm 2 takes the defocused-blur image as an input and yields prominent image regions as an output. Algorithm 2 consists of parameter initialization, producing a firing sequence matrix, DCT coefficient calculation, and analyzing the segmentation quality of prominent regions. Algorithm 2 calls Algorithm 3 for pixel classification, whereas Algorithm 3 marks each pixel category by receiving the inputs: connectivity matrix CMat , and the parameter 𝛶. Algorithm 2 involves various parameters, i.e., connecting weight matrix W, connecting strength β, dynamic thresholded coefficient 𝑌 𝐸 , decay factor 𝑋 𝐸 , minimum thresholded limit Tℎ m , and judgment criteria j. The initial value of W has been computed experimentally. The other parameters 𝑌 𝐸 , 𝑋 𝐸 , 𝑇 𝑚 , and j are configured adaptively according to gray-scale distribution in the image. The gray-scale pixel intensity value is indicated by the connecting weight matrix W and the central neuron broadcast this information. The synaptic weights in our matrix are initialized with constant values as given in Eq. ( 22) as follows:\n𝑊 𝑖𝑗 [ 0.5 1 0.5 1 0 1 0.5 1 0.5 ] (22\n)\nThe activation of the firing neuron interval in the PCNN structure is adapted step-wise. Tsai and Wang [18] illustrate that 𝑌 𝐸 modifies the width of the matrix for each firing step whereas, the height of each firing step is altered by 𝑋 𝐸 . For example, 𝑋 𝐸 narrows down each step of neural firing that decreases its numerical neural coupling properties, and neural pulse delivery about network behavior is shown. The algorithm performance is suffering from a continuing decrease 𝑋 𝐸 , that tends to increase each iteration interval of the algorithm. 
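To make Eqs. (17)-(22) concrete, a compact single-iteration sketch is given below. The linking, decay, and threshold constants (v_l, beta, decay_l, decay_theta, v_theta) are illustrative placeholders, since the proposed algorithms set several of these parameters adaptively from the grey-level statistics of the image; the firing output is taken as a hard step of the internal activity against the dynamic threshold.

```python
# A sketch of one PC Neural Net iteration following Eqs. (17)-(22); constants are
# illustrative, not the adaptively chosen values of Algorithms 2 and 3.
import numpy as np
from scipy.ndimage import convolve

W = np.array([[0.5, 1.0, 0.5],
              [1.0, 0.0, 1.0],
              [0.5, 1.0, 0.5]])            # linking weight matrix of Eq. (22)

def pcnn_step(delta, L, theta, Y, v_l=1.0, beta=0.2,
              decay_l=1.0, decay_theta=0.2, v_theta=20.0):
    F = delta                                                        # Eq. (17): feeding input
    L = v_l * convolve(Y, W, mode='constant') + np.exp(-decay_l) * L # Eq. (18): linking input
    U = F * (1.0 + beta * L)                                         # Eq. (19): internal activity
    theta = v_theta * Y + np.exp(-decay_theta) * theta               # Eq. (20): dynamic threshold
    Y = (U > theta).astype(np.float64)                               # Eq. (21), taken as a step output
    return L, theta, Y
```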
The pixel value parallel to the neuron and the normalized gray-level value high(BDef) in the whole defocused-blur image must be fired at the primary iteration interval. Therefore, 𝑌 𝐸 is normally set as high(BDef). To avoid overlapping between each neural firing cycle, the neurons must be fired once. If the neuron gets fired, then its threshold value is assigned as infinity. Subsequently, in the same cycle of algorithm, the neuron has not the capability to fire again as mentioned in Eq. ( 23). The image BDef is normalized as the matrix δ.\n𝑌 𝐸 =max(δ)(23)\nThe simple pre-processing steps adopted by the algorithm 2 proposed consist of spatial frequency statistics, calculation of gray-scale statistical distribution, and normalization of gray-level values. The adaptability of the proposed algorithm 2 is improved if the parameters are set as per the pre-processing results of images. The gray-scale distribution of the whole image is indicated by the parameter 𝑇ℎ 𝑚 . Thus, the gray-level values with pixel numbers in the parameter iteration [Tℎ 𝑚 ; 1>=93%] of δ pixels are yielded in the image. The low, mid, and high-level frequency information from the entire image is extracted, called threelevel descriptive regions. The highest pixel value with the image block in each descriptive region is the output frequency band." }, { "figure_ref": [], "heading": "IV. EXPERIMENTAL RESULT AND EVALUATION", "publication_ref": [ "b9", "b12", "b16", "b13", "b21", "b24", "b26", "b23", "b31", "b42", "b45" ], "table_ref": [], "text": "To evaluate the proposed model, we conducted our experiments using two publicly available datasets. The first one consists of 704 partially blurred\n(a) (b) (c) (d) (e) (f) (g) (h) (i) (j) (k) (l) FIGURE 5.\nVisual results of local in-focused detection illustrated by numerous schemes stated as left to right: (a) Original images, (b) Su et al. [10], (c) Shi et al. [13], (d) Javaran et al. [17], (e) Yi et al. [14], (f) Tang et al. [22], (g) Tang et al. [25] (h) Xu et al. [27], (i) Karaali et al. [24] (j) Ma et al. [32], (k) Ours (l) Ground-truth. images, presented by Shi et al. [43]. The second one is a defocused-blurred dataset of 500 images, presented by Zhao et al. [46] along with recent state-of-the-art comparators. There are some challenges involved in both the datasets, as some images are nearly blurred while others are distantly blurred. Consequently, some images have homogeneous backgrounds, whereas other images involve cluttered backgrounds." }, { "figure_ref": [], "heading": "A. EVALUATION AND PARAMETER SELECTION", "publication_ref": [ "b9", "b12", "b16", "b42", "b45", "b9", "b12", "b16", "b13", "b21", "b24", "b26", "b23", "b31", "b21", "b23", "b24", "b26", "b42", "b45", "b9", "b12", "b16", "b13", "b21", "b24", "b26", "b23", "b31", "b24", "b13", "b9", "b12", "b16", "b24", "b23", "b31", "b26", "b16", "b52", "b53", "b23", "b23", "b9", "b16", "b26", "b16", "b46", "b24", "b24", "b11", "b13", "b21", "b24", "b26", "b23", "b9", "b12", "b16", "b13", "b31" ], "table_ref": [ "tab_1" ], "text": "In this section, the comparison of our proposed approach along with referenced schemes is performed based on both the qualitative and the quantitative evaluations. For testing the results, the proposed approach was executed on Intel(R) Core (TM) i7-10 th GEN CPU @2.70 GHz. The proposed approach partially segmented the dataset images into in-focused and out-of-focused patches, as illustrated in Fig. 5. 
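For orientation, the sketch below shows one way the DCT-based map and the PC Neural Net iteration above can be chained, loosely following the outline of Algorithm 2: the normalized map seeds the network, fired neurons are frozen so they cannot fire again in the same cycle, and the accumulated firing matrix gives the in-focus mask. It reuses the pcnn_step sketch given earlier and is a simplification rather than a faithful re-implementation.

```python
# Simplified driver in the spirit of Algorithm 2; the threshold initialisation
# follows Eq. (23) only loosely, and the iteration count is arbitrary.
import numpy as np

def segment_in_focus(dct_map, n_iters=20):
    d_min, d_max = dct_map.min(), dct_map.max()
    delta = (dct_map - d_min) / (d_max - d_min + 1e-8)   # normalise the map to [0, 1]
    L = np.zeros_like(delta)
    theta = np.full_like(delta, delta.max())             # start threshold at max(delta), cf. Eq. (23)
    Y = np.zeros_like(delta)
    fired = np.zeros_like(delta)
    for _ in range(n_iters):
        L, theta, Y = pcnn_step(delta, L, theta, Y)
        theta[Y > 0] = np.inf        # a fired neuron must not fire again in this cycle
        fired = np.maximum(fired, Y) # accumulate the firing sequence matrix
    return fired.astype(bool)        # True where the pixel is judged in focus
```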
The in-focused regions are identified by white color and are assigned a pixel value is 1, whereas the out-of-focused regions are depicted in black color and are allocated a pixel value is 0. The in-focused regions are prominently detected by the proposed approach in the segmented defocused-blur images. The results yielded by the proposed approach eliminated noisy background and have a closer resemblance to the ground-truth images, compared to previously published research. The segmented results produced by Su et al. [10], Shi et al. [13], and Javaran et al. [17] have mixed-up the sharp and blurred regions and the objects are not noticeable in the results. Henceforth, the proposed approach prominently detected the sharp objects from the blurred background as compared to referenced schemes. The estimated process time required for our proposed approach on the datasets, i.e. Shi et al. [43], and Zhao et al. [46], were 136.407s and 33.139s, respectively. The results of the proposed approach were compared with those of nine other comparators [10,13,17,14,22,25,27,24,32]. Some of the schemes among them are edge-based techniques i.e., Tang et al. [22], Karaali and Jung [24], Tang et al. [25], and Xu et al. [27], while the rest are recent pixelbased techniques.\nThe experimental results of the proposed approach along with those of the comparators techniques for sample images of diverse categories are illustrated in Fig. 5. Out of eleven images, the first eight was chosen from Shi et al. [43] dataset, while the rest were selected from Zhao et al. [46] dataset. It is noticeably observed that the proposed approach visibly outperformed the referenced schemes under numerous blurs and cluttered backgrounds. The visual effect of the proposed approach is outstanding, even in the cases of nonuniform and complex blurs and backgrounds.\nOur approach outperformed the nine classical techniques [10,13,17,14,22,25,27,24,32] in terms of the error-control and the accurate infocused region location. Tang et al. [25] missed the details of the targeted objects. The edge-based techniques avoided the texture features of the regions without edge points and adopted blur details of the edges to detect sharp regions in a sample image. Yi et al. [14] measured the sharpness estimation using the LBP descriptor by adopting the thresholded-based LBP method. Su et al. [10] calculated and classified the sharpness metric by applying the Decomposition of Singular Value (DSV) algorithm. Shi et al. [13] applied a multi-scale inference structure following the Naïve Bayes classifier. Javaran et al. [17] adopted a DCT-based feature vector for blur map extraction and segmented the images into blurred and sharp regions. Tang et al. [25] used a log averaged-spectrum residual mechanism for segmenting the in-focused smooth region and blurred-smooth region in defocus and motionblurred images. Consequently, Karaali et al. [24] adopted an edge-based method for spatiallyvarying defocus-blur map using a reblurredgradient magnitude to detect blur map in defocusblur images. Similarly, Ma et al. [32] adopted DCT-based feature for detecting the blur estimation and segmented the in-focused and the out-of-focused regions in the partially blurred defocused dataset. The outputs produced by the contrast techniques were gray-scale images where the maximum intensity levels indicate the highest sharpness level, and most of the studies applied threshold measure for final segmentation. 
The depth metric of the proposed approach is standardized by the interval [0, 8] to detect the sharpness map. This study, following the referenced schemes, adopted Precision and Recall curves along with ℱ 𝛼 -score for validating the results, in terms of quantitative evaluation [27,17,53,54]. The parameters of the performance metrics are as follows:\nPrecision and Recall graphs of each contrast technique, to vary the threshold at each integer value, were yielded by applying the interval [0, 255] on Shi's and Zhao's dataset, as illustrated in formula (24). (24) where 𝑅 𝑆 indicates the pixels in the blurred region of the segmented image, whereas 𝑅 𝐺 denotes the pixels in the blurry region of a ground-truth image. The authors of reference techniques including [10,17] provided the implementation codes. We brought some minor changes in the results of some of the techniques, to adjust the black and white regions signifying the blurred and non-blurred regions. The edge-based comparisons were performed on Shi's dataset and observed that the proposed approach outperformed the comparators' ones if the Recall is higher than 0.65, as depicted in Fig. 6. Consequently, the proposed approach achieved higher Precision in terms of Recall, compared to Yi et al. [27], and Javaran et al. [17], compared to other pixel-based algorithms which are illustrated in Fig. 7. Zhao's dataset is very challenging for performing the experiments for our proposed model, because of the cluttered backgrounds and non-uniform in-focused regions. Conversely, the proposed approach yielded higher Precision in terms of higher Recall, while the rest of the techniques reduced their accuracy. Correspondingly, ℱ 𝛼 -score [47] was also computed for the proposed approach, to evaluate the segmentation metric of the blurred regions, expressing the harmonic mean of precision-recall, as illustrated in Eq. (25). (25) where 𝛼 was assigned a value of 0.3, as stated in [12] and [14]. It can be seen in Table 1 that the proposed approach outperformed the referenced techniques.\n𝑃𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛 = 𝑅 𝑆 ⋂ 𝑅 𝐺 𝑅 𝑆 𝑅𝑒𝑐𝑎𝑙𝑙 = 𝑅 𝑆 ⋂ 𝑅 𝐺 𝑅 𝐺\nℱ 𝛼 = (1+ 𝛼 2 ) ×𝑝𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛 ×𝑟𝑒𝑐𝑎𝑙𝑙 𝛼 2 ×𝑝𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛+𝑟𝑒𝑐𝑎𝑙𝑙\nIt is observed that the proposed approach illustrated accurate results in terms of precision and recall and has noticeable segmentation leads in blurred and non-blurred regions. Tang13 [22] 0.4414 0.7783 Tang16 [25] 0.6189 0.8975 Xu [27] 0.5145 0.8785 Karaali [24] 0.5326 0.8877 Pixel-based Algorithms Su [10] 0.6896 0.8438 Shi [13] 0.5933 0.8610 Javaran [17] 0.7184 0.8968 Yi [14] 0.7491 0.8878 Ma [32] 0.7851 0.9088 Ours 0.7940 0.9178" }, { "figure_ref": [], "heading": "B. RANKING BASED EVALUATION", "publication_ref": [ "b37", "b40", "b24", "b26", "b23", "b9", "b12", "b16", "b13", "b31", "b28", "b28", "b29", "b24", "b26", "b23", "b9", "b12", "b16", "b13", "b31", "b31", "b31", "b31", "b32", "b24", "b26", "b23", "b12", "b16", "b13", "b31", "b24", "b26", "b23", "b12", "b16", "b13", "b31", "b35" ], "table_ref": [ "tab_1", "tab_3", "tab_4", "tab_5", "tab_6", "tab_9", "tab_9" ], "text": "In this study, the fuzzy logic-based Evaluation Based on Distance from Average Solution (EDAS) technique [38][40] [41] has been adopted to rank the proposed scheme, following the referenced approaches with respect to feature integrity and minimum execution time. In this research study, the EDAS scheme has been revamped to accumulate the cross-efficient results of numerous parameters of the overall ten schemes, comprising the proposed one as well. 
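The quantities that feed this ranking are the precision, recall, and ℱ𝛼 measures of Eqs. (24)-(25); a minimal sketch of their computation for one binary blur mask and its ground truth is given below, with 𝛼 = 0.3 as used in this study. Sweeping the segmentation threshold over [0, 255] and recomputing these values yields the precision-recall curves of Figs. 6 and 7.

```python
# Precision, recall and the F_alpha score of Eqs. (24)-(25) for a binary blur mask
# against its ground-truth mask.
import numpy as np

def precision_recall_f(pred_blur, gt_blur, alpha=0.3):
    r_s, r_g = pred_blur.astype(bool), gt_blur.astype(bool)   # R_S and R_G
    inter = np.logical_and(r_s, r_g).sum()
    precision = inter / max(r_s.sum(), 1)
    recall = inter / max(r_g.sum(), 1)
    f_alpha = (1 + alpha**2) * precision * recall / max(alpha**2 * precision + recall, 1e-8)
    return precision, recall, f_alpha
```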
In our research, The EDAS ranking has been applied on the basis of ℱ 𝛼score, whereas the rest were used by the proposed scheme only. The Appraisal Score (AS) was calculated to rank the existing algorithms. The positive distance value from the mean value been measured as indicated by (𝑃 𝔉 ) and the negative distance value from the mean solution has been measured as represented by (𝑁 𝔉 ), refer to the equations below. In Table 1, the estimated performance has been detected as the benchmark of the existing algorithms. Overall, the following steps were performed to conduct the ranking based evaluation:\nStep 1: Calculate the mean value (𝜇 ℘ ) solution of the overall metrics in expression ( 26);\n(𝜇 ℘ ) = [𝜇 ℘ 𝛽 ] 1×𝑇(26)\nwhere,\n(𝜇 ℘ 𝛽 ) = ∑ 𝕐 αβ 𝓍 i=1 𝓍(27)\nStep 1 measures the performance and calculates numerous algorithms criteria. The cumulative score of formulas ( 26) and ( 27) can be determined as the mean value ( 𝜇 ℘ 𝛽 ) for each value of the benchmark calculated in Table 2. [25] 0.6189 0.8975 Xu [27] 0.5145 0.8785 Karaali [24] 0.5326 0.8877 Pixel-based Algorithms Su [10] 0.6896 0.8438 Shi [13] 0.5933 0.8611 Javaran [17] 0.7184 0.8968 Yi [14] 0.7491 0.8878 Ma [32] 0.7851 0.9088 Ours 𝜇 ℘ 𝛽 0.7941 0.7152 0.9178 0.9731\nStep 2: This step calculates the positive distance results from the mean value (𝑃 𝔉 ) in formulas ( 28), (29), and (30), as mentioned below: 𝐴 𝑉 𝔅 (29) otherwise, the formula (29) will be transformed as mentioned below: (𝑃 𝔉 ) 𝛼𝛽 = 𝑀𝑎𝑥𝑖𝑚𝑢𝑚(0, (𝑋 𝛼𝛽 -𝐴 𝑉 𝔅 )) 𝐴 𝑉 𝔅 (30) The outputs of evaluation of this step are given in Table 3. [25] 0.13466109 0.077711 Xu [27] 0.28063198 0.097225 Karaali [24] 0.25532477 0.087771 Pixel-based Algorithms Su [10] 0.03580916 0.132884 Shi [13] 0.17045472 0.115209 Javaran [17] 0 0.078421 Yi [14] 0 0.087668 Ma [32] 0 0.066088 Ours 0 0.056839\n𝑃 𝔉 = [(𝑃 𝔉 ) 𝛼𝛽 ] ԛ×ԛ(\nStep 3: The results of negative distance has been estimated in this step from the average (𝑁 𝔉 ) using formulas ( 31), (32), and (33), as shown below:\n(𝑁 𝔉 ) = [(𝑁 𝔉 ) 𝛼𝛽 ] ԛ×ԛ(31)\nIf the 𝛽th criterion is the most measurable, then the below formula ( 32) is calculated: (𝑁 𝔉 ) 𝛼𝛽 = 𝑀𝑎𝑥𝑖𝑚𝑢𝑚(0, (𝐴 𝑉 𝔅 -𝑋 𝛼𝛽 )) 𝐴 𝑉 𝔅 (32) Otherwise, the formula (31) will be revised in formula (32) as given below:\n(𝑁 𝔉 ) 𝛼𝛽 = 𝑀𝑎𝑥𝑖𝑚𝑢𝑚(0, (𝑋 𝛼𝛽 -𝐴 𝑉 𝔅 )) 𝐴 𝑉 𝔅 (33) whereas the (𝑃 𝔉 ) 𝛼𝛽 and (𝑁 𝔉 ) 𝛼𝛽 indicate the positive distance value and negative distance value of 𝛽th estimated methods from the average value about 𝛼th rating performance measures, respectively. The results achieved in this step are illustrated in Table 4. [25] 0 0 Xu [27] 0 0 Karaali [24] 0 0 Pixel-based Algorithms Su [10] 0 0 Shi [13] 0 0 Javaran [17] 0.00445867 0 Yi [14] 0.04738306 0 Ma [32] 0.09771785 0 Ours 0.11016172 0\nStep 4: This step calculates the cumulative sum of (𝑃 𝔉 ) for the estimation method in formula (34):\n(𝑆𝑃 𝔉 )) 𝛼 = ∑ 𝑌 𝛽 (𝑃 𝔉 ) 𝛼𝛽 𝑥 𝛽=1(34)\nThe results of this step are presented in Table 5. 
[25] 0 0 0 Xu [27] 0 0 0 Karaali [24] 0 0 0 Pixel-based Algorithms Su [10] 0 0 0 Shi [13] 0 0 0 Javaran [17] 0.00222933 0 0.023692 Yi [14] 0.02369153 0 0.023692 Ma [32] 0.04885892 0 0.048859 Ours 0.05508086 0 0.055081\nStep 6: This step standardizes and calculates the values of (𝑆𝑃 𝔉 ) 𝛼 and (𝑆𝑁 𝔉 ) 𝛼 for the evaluated methods, using the formulas (36) \nStep 7: This step estimates the values of 𝑁(𝑆𝑃 𝔉 ) 𝛼 and 𝑁(𝑆𝑁 𝔉 ) 𝛼 to obtain an appraisal score (AS) which is equal to (𝜌) for the rated approaches, using the formula (38) below: (𝜌) 𝛼 = Step 8: This step determines the decreasing order in appraisal scores (AS) and also estimates the ranking of appraised methods. The lowest (AS) determines the best ranking scheme. As evident from Table 7, the proposed scheme, presented in this article, has the lowest (AS). Table 7 illustrates the final results, indicating that our proposed approach outperformed the referenced methods. " }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "This paper represents a hybrid approach consisting of the DCT-based coefficients and PC Neural Net for in-focused segmentation in the defocus-blur dataset. The neuron firing sequence contains significant features of the defocused-blur image, i.e., texture, edge, and pixel information. The proposed approach revamped the PC Neural Net neuron firing sequence, following the design and pixel classification criteria, to select parameters along with DCT-based feature vectors for sharpness descriptor. The proposed approach segments the in-focused region in a defocused-blur image. The experimental outputs and quantitative evaluations noticeably depicted a balanced ratio between precision and recall, in terms of accuracy compared to those of other recent state-of-the-art schemes. It evidently outperforms, specifically in differentiating the detailed information between in-focused and out-of-focused regions. However, the state-of-the-art methods effectively extract the defocus-blur metric in defocus-blur images, generally, these techniques have some complexities for prominent detection of in-focused and out-of-focused regions. The referenced defocus-blur detection algorithms have some common limitations are extending the blur metric duration, background clutter, indistinguishable infocused regions of low contrast images from defocus-blur, and especially high computational cost. The proposed approach achieves promising results with efficient computational time, producing smooth edges and object shapes, even in noisy and blurred background images compared to the reference algorithms. The limitation of the proposed scheme is that it may degrade the overall performance of in-focused segmentation in those images having cluttered background. Another limitation of the proposed scheme is that it is not applicable to medical and microorganism-related images. Our future research direction is to improve the efficiency of the existing techniques and preferred GPU coding in case of enormous datasets and also span its scope in medical, agriculture, and 3D object estimation." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by the Ministry of Higher Education (MoHE), Malaysia, under Project FRGS/1/2021/ICT08/XMU/02/1; and in part by the Xiamen University Malaysia Research Fund under Project XMUMRF/2021-C8/IECE/0025 and Project XMUMRF/2022-C10/IECE/0043." 
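As a companion to the ranking steps of Section IV-B, the sketch below outlines a generic EDAS appraisal-score computation in the spirit of Eqs. (26)-(38). It follows the standard EDAS conventions, in which a higher score is better, whereas Table 7 ranks by the lowest score; it is therefore illustrative only and does not reproduce the values of Table 7.

```python
# Generic EDAS appraisal score: average solution, positive/negative distances,
# weighted sums, normalisation, and the final score (cf. Eqs. (26)-(38)).
import numpy as np

def edas_scores(X, weights):
    """X: (n_alternatives, n_criteria) benefit-type scores; weights: per-criterion weights."""
    X, w = np.asarray(X, dtype=float), np.asarray(weights, dtype=float)
    av = X.mean(axis=0)                                    # Eqs. (26)-(27): average solution
    pda = np.maximum(0.0, X - av) / av                     # positive distance from average
    nda = np.maximum(0.0, av - X) / av                     # negative distance from average
    sp, sn = (pda * w).sum(axis=1), (nda * w).sum(axis=1)  # weighted sums, Eqs. (34)-(35)
    nsp = sp / max(sp.max(), 1e-12)                        # Eq. (36)
    nsn = 1.0 - sn / max(sn.max(), 1e-12)                  # Eq. (37)
    return 0.5 * (nsp + nsn)                               # appraisal score, Eq. (38)

# Example with the two F_alpha criteria (Zhao's and Shi's datasets) from Table 1:
# edas_scores([[0.4414, 0.7783], [0.7940, 0.9178]], weights=[0.5, 0.5])
# (equal weights here; the tables in Section IV-B use 0.5 and 0.166667).
```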
}, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "FUNDING This research is financially supported by the Ministry of Higher Education (MoHE), Malaysia [project code: FRGS/1/2021/ICT08/XMU/02/1] and Xiamen University Malaysia Research Fund [Project codes: XMUMRF/2021-C8/IECE/0025 and XMUMRF/2022-C10/IECE/0043]." }, { "figure_ref": [], "heading": "CONFLICTS OF INTEREST", "publication_ref": [], "table_ref": [], "text": "The authors declare no conflict of interest. " } ]
Motion and out-of-focus effects are the main causes of blurred regions in digital images, and they can adversely affect image features such as texture, pixels, and regions. It is therefore important to detect in-focus objects in defocused images by first segmenting the blurred and non-blurred regions. State-of-the-art techniques are prone to noisy pixels, and the local descriptors they use to build segmentation metrics are complex. To address these issues, this research proposes a novel hybrid focus-detection approach based on Discrete Cosine Transform (DCT) coefficients and a PC Neural Net (PCNN) structure. The proposed approach alleviates the limitations of existing contrast schemes in separating in-focus smooth objects from out-of-focus smooth regions in defocused datasets. Visual and quantitative evaluations show that the proposed approach outperforms the referenced algorithms in both accuracy and efficiency. The highest ℱ𝛼-score of the proposed approach is 0.7940 on Zhao's dataset and 0.9178 on Shi's dataset.
A Novel Defocus-blur Region Detection Approach Based on DCT Feature and PCNN Structure
[ { "figure_caption": "FIGURE 1 .1FIGURE 1. The test as well as its corresponding ground-truth images are signified as in-focused, transitive and out-of-focus regions. In the ground-truth image, white denotes in-focused region, whereas black identifies out-of-focused region. Red square in both images indicates transitive region while yellow and purple squares illustrate the in-focused and the out-offocused regions, respectively.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "FIGURE 2 .2FIGURE 2. The framework of the proposed approach is depicted. The left side of the figure indicates the primary steps, while the role and production of each image is illustrated in the right side of the figure.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "section below. The DCT vector-based transformation matrix of 𝐹 and 𝐹 𝑎 , are ∁ ∈ 𝐷 𝑖×𝑗 and 𝐶 𝑎 ∈ 𝐷 𝑖×𝑗 , which are the test as well as deblurred defocus image patches, and 𝑐 𝑎 𝑥 and 𝑐 𝑥 are the DCT transformation matrix of 𝑥𝑡ℎ components. In formula (3), ℛ 𝑥 indicates the sharpness-vector ℛ of 𝑥 𝑡ℎ element which lies in the interval [0, ∞], where sharper defocused-blur images are denoted by larger values.The matrix acquired after the transformation of DCT coefficient illustrates that the DCT coefficient ratio differ more spontaneously, and the irregular DCT coefficients reduce the impact of DoF. The DCT vector-based coefficients perform the mean operation of similar frequency. A 2 × 70 -1 dimension column is obtained by DCT-based coefficients, as depicted in Fig.2. The function 𝑇(𝐶) delineating a specified process is represented in Eq. (5) as below:𝑐 𝑥 = ∑ 𝑢+𝑣=𝑥+1 ∁ 𝑢,𝑣 𝑓𝑜𝑐𝑢𝑠(∁ 𝑢,𝑣 |𝑢+𝑣=𝑥+1)(5) where 𝑐 𝑥 denotes DCT-based transformation vector of 𝑥𝑡ℎ element, ∁ 𝑢,𝑣 represents initial DCT vector-based matrix 𝐶 and 𝑓𝑜𝑐𝑢𝑠(∁ 𝑢,𝑣 |𝑢 + 𝑣 = 𝑥 + 1) indicates the ∁ 𝑢,𝑣 total number at a specific frequency element. The edge and pixel-based blur metric is observed in Fig. 3, containing DCTbased feature extraction de-blur parameter selection. The DCT-based transformation of the infinite image signal of 2-D cosine function is represented as the super-position. The DCT-based coefficient matrix is denoted by ∁ 𝑢,𝑣 , which is the weight of discrete cosine transformation signal function on 𝑢 (i.e., horizontal-frequency direction) and 𝑣 (i.e., vertical-frequency direction). The lack of detailed image information is reflected as blurred image. The sharpness-vector coefficients are categorized into three major frequency classifications and a weight is assigned to each classification to estimate the 𝐷𝐶𝑅 (de-blurred coefficient ratio) of the original images and illustrated as below:", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "FIGURE 3 . 131FIGURE 3. DCT coefficients are averaging at similar frequency and the row vectors at each frequency of average DCT-based coefficients are achieved. efficient weights in numerous classifications are denoted as 𝑎, 𝑏, 𝑦. The 𝐷𝐶𝑅 mainly reflects the blur estimation of central pixels. Furthermore, the vector 𝑐 determined the 𝐷𝐶𝑅 value in the estimation procedure value of which range lies in the interval [0, +∞]. The 𝐷𝐶𝑅 value is mapped as [0,1]. 
𝑀 𝐷 = 1-𝔉 -𝑏𝓅 1+𝔉 -𝑏𝓅", "figure_data": "", "figure_id": "fig_3", "figure_label": "31", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 :1DCT-based descriptors calculation at pixel level i (DCT(i, 𝜂)) Data: parametric estimation 𝓂(𝑚𝑥) Result: DCT-based descriptor begin 𝑚 = 2, 𝑚(𝑚𝑎𝑥) = 2 𝜂 = 0 for 𝑥 = 1 𝑡𝑜 𝑚 for 𝑦 = 1 𝑡𝑜 𝑚 while 𝑥 + 𝑦 -1 ≤ 𝑚(𝑚𝑎𝑥) 𝑇 = 0 for 𝑢 = 1 𝑡𝑜 𝑚 for 𝑣 = 1 𝑡𝑜 𝑚 𝑇 = 𝑇 + 𝜚(𝑥) ∑ 𝜚(𝑦) ∑ 𝑓(𝑢, 𝑣) cos(", "figure_data": "", "figure_id": "fig_4", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Algorithm 2 :2Proposed Defocus-Blur Metric Data: B Def = Defocussed-Blur image Result: I Foc = In-focussed segmented image begin Highest value = high(B Def ) Lowest value = low(B Def ) Average value = avg(B Def )DCT-based feature vector for in-focused region estimation using Eq. (10)-Eq. (16) PC Neural Net initial formula estimation using Eq. (17) -Eq. (21)forpixel-position uv in B Def do if DCT(T) < B Def(uv) then DCT segmented image (uv) = 0 else DCT segmented image (uv) = B Def(uv) end if end for // Call Algorithm 1 for DCT-based coefficient calculation E Mat = 0, ℐ =0, C Mat = 0 and 𝓃 = 1 for pixel-position(uv) in DCT segmented image do estimate ℱ 𝑖𝑗 𝓅 [𝑛], 𝔏 𝑖𝑗 𝓅 [𝓃], 𝒰 𝑖𝑗 𝓅 [𝓃], 𝜗 𝑖𝑗 𝓅 [𝓃], Υ 𝑥𝑦 𝓅 [𝓃] if Υ 𝑥𝑦 𝓅 [𝓃] = =0 then E Mat(uv) = 1, DCT segmented image (uv) = 1 else E Mat(uv) = 0, DCT segmented image (uv) = 0 end if end for // Call Algorithm 3 for pixel classification I Foc = E Mat _lw for pixel position (uv) in I Foc do if I Foc(uv) = =1 then I Foc(uv) = I Foc(uv) else I Foc(uv) = 0 end if end for return", "figure_data": "", "figure_id": "fig_5", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "FIGURE 4 .Algorithm 3 :43FIGURE 4. Schematic model of PCNN structure.", "figure_data": "", "figure_id": "fig_6", "figure_label": "43", "figure_type": "figure" }, { "figure_caption": "(a) Evaluation of pixel-based techniques and our scheme (b) Evaluation of edge-based techniques and our scheme FIGURE 6. Precision vs recall of numerous techniques adopted on Shi's dataset.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(a) Evaluation of pixel-based techniques and our scheme (b) Evaluation of edge-based techniques and our scheme FIGURE 7. Precision vs recall of numerous techniques adopted on Zhao's dataset.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "28 )28If the 𝛽th criterion is more valued then (𝑃 𝔉 ) 𝛼𝛽 = 𝑀𝑎𝑥𝑖𝑚𝑢𝑚(0, (𝐴 𝑉 𝔅 -𝑋 𝛼𝛽 ))", "figure_data": "", "figure_id": "fig_9", "figure_label": "28", "figure_type": "figure" }, { "figure_caption": "1 2(1𝑁(𝑆𝑃 𝔉 ) 𝛼 -𝑁(𝑆𝑁 𝔉 ) 𝛼 )(38) where 0 ≤ 𝐴𝑆 ≤ 1. The (AS) is determined by the aggregate score of 𝑁𝑆𝑃 𝔉 and 𝑁𝑆𝑁 𝔉 .", "figure_data": "", "figure_id": "fig_10", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "𝛼 𝑖 and 𝛽 𝑖 indicate the weighting factors for minimum and maximum level DCT-based distances at pixel position 𝑖. The minimized distance in the DCT coefficient is indicated as 𝐷𝐶𝑇 𝑀𝑖𝑛 (𝑢 𝑖 , 𝑦 𝑖 ) and calculated in Eq. (11): 𝑢 𝑖 , 𝑦 𝑖 ) and also calculated in Eq. (13).", "figure_data": "𝑙 ℎ=1𝐹(𝐷𝐶𝑇)(12)where 𝐹(𝐷𝐶𝑇) is indicated as the filteringparameter for weight estimation. 
The maximizedDCT-baseddistanceisidentifiedas𝐷𝐶𝑇 𝑀𝑎𝑥 (𝐷𝐶𝑇 𝑀𝑎𝑥 (𝑢 𝑖 , 𝑦 𝑖 ) =∑𝑊 𝑀𝑎𝑥 (𝓊 𝑖 ,𝑦 𝑗 )𝐷(𝑢 𝑖 ,𝑦 𝑖 ) 𝑊 𝑀𝑎𝑥 (𝑢 𝑖 ,𝑦 𝑗 ) 𝑀𝑎𝑥 𝑖 𝑀𝑎𝑥 𝑖 ∑𝐷𝐶𝑇 𝑀𝑖𝑛 (𝑢 𝑖 , 𝑦 𝑖 ) =∑𝑊 𝑀𝑖𝑛 (𝑢 𝑖 ,𝑦 𝑗 )𝐷(𝑢 𝑖 ,𝑦 𝑖 ) 𝑊 𝑀𝑖𝑛 (𝑢 𝑖 ,𝑦 𝑗 ) 𝑀𝑖𝑛 𝑖 𝑀𝑖𝑛 𝒾 ∑(11)", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The highest ℱ 𝛼 -score of diverse schemes", "figure_data": "Schemes𝓕 𝜶 -", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Cross-efficient values", "figure_data": "Schemes𝓕 𝜶 -score_________________________Zhao's DatasetShi's DatasetEdge-based AlgorithmsTang13 [22]0.44140.7783Tang16", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Estimated results of average (𝑃 𝔉 )", "figure_data": "Schemes𝓕 𝜶 -score________________________Zhao's DatasetShi's DatasetEdge-based AlgorithmsTang13 [22]0.382839570.200194Tang16", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Estimated results of average (𝑁 𝔉 )", "figure_data": "Schemes𝓕 𝜶 -score_______________________Zhao's DatasetShi'sDatasetEdge-based AlgorithmsTang13 [22]00Tang16", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Estimated results of the aggregate (𝑆𝑃 𝔉 )) 𝛼 Calculate the cumulative sum of (𝑁 𝔉 ) 𝛼𝛽 for the rated algorithms in Table6mentioned in formula(35) as shown below:", "figure_data": "Criteria (W)0.50.166667Schemes𝓕 𝜶 -score___________________(𝑆𝑃 𝔉 )) 𝛼Zhao's Dataset Shi's DatasetEdge-based AlgorithmsTang13 [22]0.191419780.0333660.224785Tang16 [25]0.067330550.012950.080281Xu [27]0.140315990.0162040.15652Karaali [24]0.127662380.0146290.142291Pixel-based AlgorithmsSu [10]0.017904580.0221470.040052Shi [13]0.085227360.0192010.104429Javaran [17]00.013070.01307Yi [14]00.0146110.014611Ma [32]00.0110150.011015Ours00.0094730.009473Step 5: (𝑆𝑁 𝔉 ) 𝛼 = ∑𝑥 𝛽=1𝑌 𝛽 (𝑁 𝔉 ) 𝛼𝛽(35)The outputs are represented in Table 6.", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Estimated results of the aggregate (𝑆𝑁 𝔉 ) 𝛼", "figure_data": "Criteria (W)0.50.166667Schemes𝓕 𝜶 -score__________________(𝑆𝑁 𝔉 ) 𝛼Zhao's Dataset Shi's DatasetEdge-based AlgorithmsTang13 [22]000Tang16", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "and (37): 𝑁(𝑆𝑃 𝔉 ) 𝛼 = (𝑆𝑃 𝔉 ) 𝛼 𝑚𝑎𝑥𝑖𝑚𝑢𝑚 𝛼 ((𝑆𝑃 𝔉 ) 𝛼 )", "figure_data": "(36)𝑁(𝑆𝑁 𝔉 ) 𝛼 = 1 -", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Estimated results of 9 state-of-the-art schemes", "figure_data": "Schemes(𝑆𝑃 𝔉 ) 𝜶(𝑆𝑁 𝔉 ) 𝜶𝑁(𝑆𝑃 𝔉 ) 𝛼𝑁(𝑆𝑁 𝔉 ) 𝛼(AS)RankingEdge-based AlgorithmsTang13 [22]0.224789011110Tang16 [25]0.08028100.35714310.6785726Xu [27]0.15652200.69630910.8481559Karaali [24]0.14229100.63300810.8165048Pixel-based AlgorithmsSu [10]0.04005200.17817910.5890895Shi [13]0.10442900.46457110.7322867Javaran [17]0.0130690.0236920.0581440.5698770.3140113Yi [14]0.0114610.0236920.0650020.5698770.3174394Ma [32]0.0110150.0488590.0490009050.1112960.0809862Ours0.0094730.0550810.04214348700.0210721", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" } ]
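The figure and table captions above (Eqs. (3)–(5), Algorithm 1, Figure 3) describe averaging the 2-D DCT coefficients of a patch along equal-frequency anti-diagonals and taking the coefficient ratio between the patch and a Gaussian-filtered copy of it as a sharpness measure. Below is a minimal Python sketch of that descriptor; whether the reference copy is re-blurred or deblurred is not fully recoverable from the garbled captions, so the Gaussian re-blur and its standard deviation here are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy.fft import dctn                    # 2-D DCT-II of an image patch
from scipy.ndimage import gaussian_filter     # Gaussian kernel for the reference copy (cf. Eq. 2)

def dct_frequency_vector(patch: np.ndarray) -> np.ndarray:
    """Average DCT coefficients that share the same frequency index u+v (Eq. 5).

    For an m x m patch this yields a (2m - 1)-dimensional vector c, where
    c[x] is the mean magnitude of C[u, v] over all u + v = x.
    """
    C = np.abs(dctn(patch.astype(np.float64), norm="ortho"))
    m = patch.shape[0]
    idx = np.add.outer(np.arange(m), np.arange(m))
    return np.array([C[idx == x].mean() for x in range(2 * m - 1)])

def sharpness_ratio(patch: np.ndarray, blur_sigma: float = 2.0) -> np.ndarray:
    """Sharpness vector R (Eq. 3): ratio of the patch's DCT vector to that of a
    Gaussian-filtered copy. Larger values indicate a sharper (in-focus) patch.
    blur_sigma is an assumed illustrative value."""
    c = dct_frequency_vector(patch)
    c_ref = dct_frequency_vector(gaussian_filter(patch.astype(np.float64), blur_sigma))
    return c / (c_ref + 1e-12)                # small epsilon avoids division by zero

# Example: a random 32x32 grayscale patch
if __name__ == "__main__":
    patch = np.random.default_rng(0).random((32, 32))
    print(sharpness_ratio(patch)[:5])
```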
Sadia Basar; Mushtaq Ali; Abdul Waheed; Muneer Ahmad; Mahdi H Miraz
[ { "authors": "S Basar; A Waheed; M Ali; S Zahid; M Zareei; R R Biswal", "journal": "Sensors", "ref_id": "b0", "title": "An Efficient Defocus Blur Segmentation Scheme Based on Hybrid LTP and PCNN", "year": "2022" }, { "authors": "T H Oh; R Jaroensri; C Kim; M Elgharib; F E Durand; W T Freeman; W Matusik", "journal": "", "ref_id": "b1", "title": "Learning-based video motion magnification", "year": "2018" }, { "authors": "A Abbate; R Arena; N Abouzaki; B W Van Tassell; J Canada; K Shah; G Biondi-Zoccai; N F Voelkel", "journal": "International journal of cardiology", "ref_id": "b2", "title": "Heart failure with preserved ejection fraction: refocusing on diastole", "year": "2015" }, { "authors": "J Shi; X Tao; L Xu; J Jia", "journal": "ACM Trans. Graph. (TOG)", "ref_id": "b3", "title": "Break Ames room illusion: Depth from general single images", "year": "2015" }, { "authors": "J Xiao; T Liu; Y Zhang; B Zou; J Lei; Q Li", "journal": "Signal Process", "ref_id": "b4", "title": "Multi-focus image fusion based on depth extraction with inhomogeneous diffusion equation", "year": "2016" }, { "authors": "W Lu; Y Xue; Y Yeung; H Liu; J Huang; Y Shi", "journal": "IEEE Transactions on Dependable and Secure Computing", "ref_id": "b5", "title": "Secure halftone image steganography based on pixel density transition", "year": "2019" }, { "authors": "J Liu; H Su; Y Yi; W Hu", "journal": "Signal Process", "ref_id": "b6", "title": "Robust text detection via multi-degree of sharpening and blurring", "year": "2016" }, { "authors": "J Li; W Lu", "journal": "J. Vis. Commun. Image Represent", "ref_id": "b7", "title": "Blind image motion deblurring with L0-regularized priors", "year": "2016" }, { "authors": "S Cao; N He; S Zhao; K Lu; X Zhou", "journal": "Signal Process", "ref_id": "b8", "title": "Single image motion deblurring with reduced ringing effects using variational bayesian estimation", "year": "2018" }, { "authors": "B Su; S Lu; C L Tan", "journal": "", "ref_id": "b9", "title": "Blurred image region detection and classification", "year": "2011" }, { "authors": "A Chakrabarti; T Zickler; W T Freeman", "journal": "", "ref_id": "b10", "title": "Analyzing spatially-varying blur", "year": "2010" }, { "authors": "H Xiao; W Lu; R Li; N Zhong; Y Yeung; J Chen; F Xue; W Sun", "journal": "J. Vis. Commun. Image Represent", "ref_id": "b11", "title": "Defocus blur detection based on multiscale SVD fusion in gradient domain", "year": "2019" }, { "authors": "J Shi; X Li; J Jia", "journal": "IEEE", "ref_id": "b12", "title": "Discriminative blur detection features", "year": "2014" }, { "authors": "X Yi; M Eramian", "journal": "IEEE Trans. Image Process", "ref_id": "b13", "title": "LBP-based segmentation of defocus blur", "year": "2016" }, { "authors": "X Marichal; W Ma; H Zhang", "journal": "IEEE", "ref_id": "b14", "title": "Blur determination in the compressed domain using DCT information", "year": "1999" }, { "authors": "C T Vu; T D Phan; D M Chandler", "journal": "IEEE Trans. Image Process", "ref_id": "b15", "title": "s3: A spectral and spatial measure of local perceived sharpness in natural images", "year": "2012" }, { "authors": "T A Javaran; H Hassanpour; V Abolghasemi", "journal": "Vis. 
Comput", "ref_id": "b16", "title": "Automatic estimation and segmentation of partial blur in natural images", "year": "2017" }, { "authors": "S A Golestaneh; L J Karam", "journal": "IEEE", "ref_id": "b17", "title": "Spatially-varying blur detection based on multiscale fused and sorted transform coefficients of gradient magnitudes", "year": "2017" }, { "authors": "A P Pentland", "journal": "IEEE Trans. Pattern Anal. Mach.Intell", "ref_id": "b18", "title": "A new sense for depth of field", "year": "1987" }, { "authors": "S Bae; F Durand", "journal": "Comput. Graph. Forum", "ref_id": "b19", "title": "Defocus magnification", "year": "2007" }, { "authors": "A Levin; D Lischinski; Y Weiss", "journal": "ACM Trans. Graph. (TOG)", "ref_id": "b20", "title": "Colorization using optimization", "year": "2004" }, { "authors": "C Tang; C Hou; Z Song", "journal": "Opt. Lett", "ref_id": "b21", "title": "Defocus map estimation from a single image via spectrum contrast", "year": "2013" }, { "authors": "S Zhuo; T Sim", "journal": "Pattern Recognit", "ref_id": "b22", "title": "Defocus map estimation from a single image", "year": "2011" }, { "authors": "A Karaali; C R Jung", "journal": "IEEE Trans. Image Process", "ref_id": "b23", "title": "Edge-based defocus blur estimation with adaptive scale selection", "year": "2018" }, { "authors": "C Tang; J Wu; Y Hou; P Wang; W Li", "journal": "IEEE Signal Process. Lett", "ref_id": "b24", "title": "A spectral and spatial approach of coarse-to-fine blurred image region detection", "year": "2016" }, { "authors": "S Liu; F Zhou; Q Liao", "journal": "IEEE Trans. Image Process", "ref_id": "b25", "title": "Defocus map estimation from a single image based on two-parameter defocus model", "year": "2016" }, { "authors": "G Xu; Y Quan; H Ji", "journal": "", "ref_id": "b26", "title": "Estimating defocus blur via rank of local patches", "year": "2017" }, { "authors": "R Liu; Z Li; J Jia", "journal": "", "ref_id": "b27", "title": "Image partial blur detection and classification", "year": "2008" }, { "authors": "J Shi; L Xu; J Jia", "journal": "", "ref_id": "b28", "title": "Just noticeable defocus blur detection and estimation", "year": "2015" }, { "authors": "L Dandres; J Salvador; A Kochale; S Susstrunk", "journal": "IEEE Trans. 
Image Process", "ref_id": "b29", "title": "Non-parametric blur map regression for depth of field extension", "year": "2016" }, { "authors": "C Tang; X Zhu; X Liu; L Wang; A Y Zomaya", "journal": "", "ref_id": "b30", "title": "DeFusionNET: defocus blur detection via recurrently fusing and refining multi-scale deep features", "year": "2019" }, { "authors": "M Ma; W Lu; W Lyu", "journal": "Signal Processing", "ref_id": "b31", "title": "Defocus blur detection via edge pixel DCT feature of local patches", "year": "2020" }, { "authors": "Q Liu; L P Xu; Y D Ma; Y Wang", "journal": "International Journal of Computers and Application", "ref_id": "b32", "title": "Bilateral Filtering Algorithm for Image Processing Based on Pulse Coupled Neural Networks", "year": "2012-10" }, { "authors": "H B Fan; C N Zhang; W C Yuan; Y He", "journal": "Information & Communications", "ref_id": "b33", "title": "Medical image Mixed-noise filtering method based on PCNN", "year": "2014-05" }, { "authors": "Chong Shen; Ding Wang; Shuming Tang; Huiliang Cao; Jun Liu", "journal": "Vision Computer", "ref_id": "b34", "title": "Hybrid image noise reduction algorithm based on genetic ant colony and PCNN", "year": "2017-10" }, { "authors": "Chin Sheng; Chen ; Chi-Min Weng; Chien-Chuan Tseng", "journal": "The International Journal of Advanced Manufacturing Technology", "ref_id": "b35", "title": "An efficient detection algorithm based on anisotropic diffusion for low-contrast defect", "year": "2018-10" }, { "authors": "J Shen; L Han; M Xu; C Huang; Z Zhang; H Wang", "journal": "J. Signal Process. Syst", "ref_id": "b36", "title": "Focused region segmentation for refocusing images from light fields", "year": "2018" }, { "authors": "S Basar; M Ali; G Ochoa-Ruiz; M Zareei; A Waheed; A Adnan", "journal": "PLoS ONE", "ref_id": "b37", "title": "Unsupervised color image segmentation: A case of RGB histogram based Kmeans clustering initialization", "year": "2020" }, { "authors": "S Basar; M Ali; G Ochoa-Ruiz; A Waheed; G Rodriguez-Hernandez; M Zareei", "journal": "IEEE Access", "ref_id": "b38", "title": "A Novel Defocused Image Segmentation Method Based on PCNN and LBP", "year": "2021" }, { "authors": "G Mehmood; M Z Khan; A Waheed; M Zareei; E M Mohamed", "journal": "IEEE Access", "ref_id": "b39", "title": "A trust-based energy-efficient and reliable communication scheme (Trust-based ERCS) for remote patient monitoring in wireless body area networks", "year": "2020" }, { "authors": "G Ilieva; T Yankova; S Klisarova-Belcheva", "journal": "Comput. Appl. Math", "ref_id": "b40", "title": "Decision analysis with classic and fuzzy EDAS modifications", "year": "2018" }, { "authors": "X Liu; W Lu; Q Zhang; J Huang; Y Q Shi", "journal": "IEEE Trans. Circuits Syst. Video Technol", "ref_id": "b41", "title": "Downscaling factor estimation on pre-JPEG compressed images", "year": "2019" }, { "authors": "J Shi; L Xu; J Jia", "journal": "", "ref_id": "b42", "title": "Blur detection dataset", "year": "" }, { "authors": "G Petschnigg; R Szeliski; M Agrawala; M Cohen; H Hoppe; K Toyama", "journal": "ACM Trans. Graph. 
(TOG)", "ref_id": "b43", "title": "Digital photography with flash and no-flash image pairs", "year": "2004" }, { "authors": "L Johnson; M L Padgett", "journal": "IEEE Transaction on Neural Networks", "ref_id": "b44", "title": "PCNN models and applications", "year": "1999" }, { "authors": "W Zhao; F Zhao; D Wang; H Lu", "journal": "", "ref_id": "b45", "title": "Defocus blur detection dataset", "year": "" }, { "authors": "R Achanta; S S Hemami; F J Estrada; S Susstrunk", "journal": "", "ref_id": "b46", "title": "Frequency-tuned salient region detection", "year": "2009" }, { "authors": "Jinxing Li; Beicheng Liang; Xiangwei Lu; Mu Li; Guangming Lu; Yong Xu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b47", "title": "From Global to Local: Multi-Patch and Multi-Scale Contrastive Similarity Learning for Unsupervised Defocus Blur Detection", "year": "2023" }, { "authors": "Xianrui Luo; Juewen Peng; Ke Xian; Zijin Wu; Zhiguo Cao", "journal": "Information Fusion", "ref_id": "b48", "title": "Defocus to focus: Photo-realistic bokeh rendering by fusing defocus and radiance priors", "year": "2023" }, { "authors": "Sankaraganesh Jonna; Moushumi Medhi; Rajiv Ranjan; Sahay ", "journal": "ACM Transactions on Multimedia Computing, Communications and Applications", "ref_id": "b49", "title": "Distill-DBDGAN: Knowledge Distillation and Adversarial Learning Framework for Defocus Blur Detection", "year": "2023" }, { "authors": "Wenda Zhao; Fei Wei; Haipeng Wang; You He; Huchuan Lu", "journal": "IEEE Transactions on Multimedia", "ref_id": "b50", "title": "Full-scene Defocus Blur Detection with DeFBD+ via Multi-Level Distillation Learning", "year": "2023" }, { "authors": "Yanli Chen; Haitao Wang; Jinding Gao", "journal": "Pattern Analysis and Applications", "ref_id": "b51", "title": "A single defocused image depth recovery with superpixel segmentation", "year": "2023" }, { "authors": "J Li; D Fan; L Yang; S Gu; G Lu; Y Xu; D Zhang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b52", "title": "Layer-output guided complementary attention learning for image defocus blur detection", "year": "2021" }, { "authors": "Liangji Zhang; Chao Lu; Haiwen Xu; Aibin Chen; Liujun Li; Guoxiong Zhou", "journal": "IEEE Internet of Things Journal", "ref_id": "b53", "title": "MMFNet: Forest Fire Smoke Detection Using Multiscale Convergence Coordinated Pyramid Network with Mixed Attention and Fast-robust NMS", "year": "2023" }, { "authors": "W Chantara; Jeon", "journal": "Applied Sciences", "ref_id": "b54", "title": "All-in-focused image combination in the frequency domain using light field images", "year": "2019" }, { "authors": "X Deng; Y Yang; H Zhang; Y Ma", "journal": "Multimedia Tools and Applications", "ref_id": "b55", "title": "PCNN double step firing mode for image edge detection", "year": "2022" } ]
[ { "formula_coordinates": [ 5, 298.04, 422.16, 241, 28.47 ], "formula_id": "formula_0", "formula_text": "𝐹 𝐺(𝑢,𝑣,𝜎)= 1 2𝜋𝜎 2 𝑒 . -𝑢 2+𝑣 2 2𝜎 2 ⁄(2)" }, { "formula_coordinates": [ 6, 298.04, 230.48, 240.45, 39.51 ], "formula_id": "formula_1", "formula_text": "ℛ 𝑥 = 𝑐 𝑥 𝑐 𝑎 𝑥 (3) 𝑐 𝑎 = 𝑇(𝐶 𝑎 ), 𝑐 = 𝑇(𝐶)(4)" }, { "formula_coordinates": [ 8, 37.02, 512.35, 234.03, 35.4 ], "formula_id": "formula_2", "formula_text": "𝐷𝐶𝑇(𝑢 𝑖 , 𝑦 𝑖 ) = 1 𝛼 𝑖 + 𝛽 𝑖 (𝛼 𝑖 𝐷𝐶𝑇 (𝑢 𝑖 , 𝑦 𝑖 ) + 𝛽 𝑖 𝐷𝐶𝑇 (𝑢 𝑖 , 𝑦 𝑖 )) (10" }, { "formula_coordinates": [ 8, 271.05, 535.16, 5.32, 10.98 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 8, 300.68, 232.28, 238.41, 26.97 ], "formula_id": "formula_5", "formula_text": "𝑊 𝑀𝑎𝑥 (𝑢 𝑖 , 𝑦 𝑗 ) = ℯ -∑ (𝐷𝐶𝑇(𝑙,ℎ)-𝐷𝐶𝑇(𝑖,ℎ)) 𝑙 ℎ=1 𝐹(𝐷𝐶𝑇)(14)" }, { "formula_coordinates": [ 8, 298.04, 592.6, 240.93, 13.13 ], "formula_id": "formula_6", "formula_text": "𝐼 𝐷𝑒𝑓 (𝑖, 𝑗) = 𝜒 𝑖,𝑗 𝐼 𝐹𝑔 (𝑖, 𝑗) + (1 -𝜒 𝑖,𝑗 )𝐼 𝐵𝑔 (𝑖, 𝑗)(15)" }, { "formula_coordinates": [ 10, 40.02, 496.15, 237.5, 17.46 ], "formula_id": "formula_7", "formula_text": "ℱ 𝑖𝑗 𝓅 = 𝛿 𝑖𝑗 𝓅 (17" }, { "formula_coordinates": [ 10, 277.52, 500.74, 4.58, 9.88 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 10, 39.78, 512.8, 241.25, 77.39 ], "formula_id": "formula_9", "formula_text": "𝔏 𝑖𝑗 𝓅 [𝓃] = ѵ 𝔏 ∑ 𝒲 𝑖𝑗𝑥𝑦 𝓅 Υ 𝑥𝑦 𝓅 [𝓃 -1] 𝑎𝑏 + 𝑒𝑥(-𝜕 𝔏 ) 𝔏 𝑖𝑗 𝓅 [𝓃 -1] (18) 𝒰 𝑖𝑗 𝓅 [𝓃] = ℱ 𝑖𝑗 𝓅 [𝓃] (1 + 𝛽 𝑖𝑗 𝓅 𝔏 𝑖𝑗 𝓅 [𝓃]) (19) 𝜗 𝑖𝑗 𝓅 [𝓃] = 𝒱 𝜗 Υ 𝑥𝑦 𝓅 [𝓃 -1] + 𝑒𝑥(-𝜕 𝜗 ) 𝜗 𝑖𝑗 𝓅 [𝓃 -1](20) Υ 𝑥𝑦 𝓅 [𝓃] = 𝒰 𝑖𝑗 𝓅 [𝓃] -𝜗 𝑖𝑗 𝓅 [𝓃](21)" }, { "formula_coordinates": [ 13, 37.02, 301.48, 235.77, 40.16 ], "formula_id": "formula_10", "formula_text": "𝑊 𝑖𝑗 [ 0.5 1 0.5 1 0 1 0.5 1 0.5 ] (22" }, { "formula_coordinates": [ 13, 272.79, 316, 5, 10.8 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 13, 300.68, 191.26, 236.94, 13.69 ], "formula_id": "formula_12", "formula_text": "𝑌 𝐸 =max(δ)(23)" }, { "formula_coordinates": [ 14, 37.02, 490.84, 480.44, 22.52 ], "formula_id": "formula_13", "formula_text": "(a) (b) (c) (d) (e) (f) (g) (h) (i) (j) (k) (l) FIGURE 5." }, { "formula_coordinates": [ 16, 37.02, 524.29, 199.1, 22.59 ], "formula_id": "formula_14", "formula_text": "𝑃𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛 = 𝑅 𝑆 ⋂ 𝑅 𝐺 𝑅 𝑆 𝑅𝑒𝑐𝑎𝑙𝑙 = 𝑅 𝑆 ⋂ 𝑅 𝐺 𝑅 𝐺" }, { "formula_coordinates": [ 17, 37.02, 394.18, 142.22, 23.79 ], "formula_id": "formula_15", "formula_text": "ℱ 𝛼 = (1+ 𝛼 2 ) ×𝑝𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛 ×𝑟𝑒𝑐𝑎𝑙𝑙 𝛼 2 ×𝑝𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛+𝑟𝑒𝑐𝑎𝑙𝑙" }, { "formula_coordinates": [ 18, 298.04, 123.88, 240.96, 17.11 ], "formula_id": "formula_16", "formula_text": "(𝜇 ℘ ) = [𝜇 ℘ 𝛽 ] 1×𝑇(26)" }, { "formula_coordinates": [ 18, 298.04, 154.34, 241.08, 25.11 ], "formula_id": "formula_17", "formula_text": "(𝜇 ℘ 𝛽 ) = ∑ 𝕐 αβ 𝓍 i=1 𝓍(27)" }, { "formula_coordinates": [ 18, 37.02, 552.44, 223.3, 13.69 ], "formula_id": "formula_18", "formula_text": "𝑃 𝔉 = [(𝑃 𝔉 ) 𝛼𝛽 ] ԛ×ԛ(" }, { "formula_coordinates": [ 19, 37.02, 348.78, 239.94, 13.69 ], "formula_id": "formula_19", "formula_text": "(𝑁 𝔉 ) = [(𝑁 𝔉 ) 𝛼𝛽 ] ԛ×ԛ(31)" }, { "formula_coordinates": [ 19, 298.04, 696.45, 227.1, 16.38 ], "formula_id": "formula_20", "formula_text": "(𝑆𝑃 𝔉 )) 𝛼 = ∑ 𝑌 𝛽 (𝑃 𝔉 ) 𝛼𝛽 𝑥 𝛽=1(34)" } ]
2023-10-13
[ { "figure_ref": [ "fig_0", "fig_2" ], "heading": "INTRODUCTION", "publication_ref": [ "b18", "b6", "b0", "b31", "b38", "b42", "b9", "b27", "b29", "b30", "b41", "b0", "b20", "b42", "b1", "b36", "b26", "b45", "b35", "b5", "b7", "b11", "b43", "b25" ], "table_ref": [], "text": "AI image generation programs, namely Artificial Intelligence Generated Content (AIGC) tools such as Stable Diffusion [20], DALL•E2 [8], Midjourney [1], are setting off a new revolution of artwork creation. These programs allow users to effortlessly generate target images by taking some descriptions as input into the model. However, these AI-generated artworks inherit the characteristics of the images that are used to train models [33,40,43], which might be pretty similar to the original ones, as shown in Figure 1. Such similarity has aroused concerns about copyright infringement disputes. For example, three artists (Sarah Andersen, Kelly McKernan, and Karla Ortiz) have recently accused Stable Diffusion of unlawfully 2) was created by Stable Diffusion using the \"style of Erin Hanson\" as a prompt. The styles of these two images are so similar that it is impossible to tell them apart.\nscraping copyrighted images from the Internet to mimic their art styles [11]. To this end, research on responsible AI image generation is in urgent need to address such copyright issues.\nPrevious work on data attribution [29,31,32,42] focused on how the images in the training data contributed to the model's outcome, which is not suitable for the context of copyright traceability. This is because of the following reasons: (1) the training data is not known in advance in real-world practices; as shown in Figure 2, the models are usually publicly available, while the training datasets are not [22,43]. (2) the responsible party is the model provider, namely, the personnel or the organization who abused the online image collections without the owners' consent, not the image itself. As long as the model that generates the infringing image is identified, the corresponding infringer (the model provider) can be found, and the degree of infringement can be quantified. Thus, we need to develop a new approach to quantify copyright infringement from the model level, which is the focus of our paper.\nTo this end, we propose a new framework CopyScope at the model level towards AIGC copyright traceability. Our framework CopyScope includes three closely intertwined stages (Identify-Quantify-Evaluate). In the Identify stage, we conduct an extensive and in-depth analysis of 16,000 generated images under six themes and rigorously identify four components (Base Model [38], Lora [28], ControlNet [46], and Key Prompt [37]) that are involved in the infringement in diffusion workflow. In the Quantify stage, we extensively compare and analyze five metrics, which include Cosine [7], DHash (Difference Hash similarity) [9], Hist (Histogram similarity) [13], SSIM (Structural similarity) [44], and FID (Fréchet Inception Distance) [27] methods, from multiple dimensions, such as style, structure, etc., to measure the similarity between the generated and original images. We find that the FID metric is the most effective quantification method because FID can capture the similarity that fits human perception naturally, which could reflect accurate quantification of each model's contribution. In the Evaluate stage, we model our scenario into a cooperative game model and propose the FID-based Shapley method to evaluate the contribution of each infringement model. 
In the end, we conduct extensive experiments to demonstrate that our proposed FID-based Shapley algorithm could effectively quantify the infringement in the diffusion workflow, which offers a promising approach to help us unveil the intricacies of infringement in the emerging domain of AI image generation tasks.\nWe summarize our contributions as follows: \n•" }, { "figure_ref": [ "fig_3" ], "heading": "PRELIMINARY AND BACKGROUND 2.1 AI Image Generation", "publication_ref": [ "b26", "b45", "b36", "b15", "b17", "b14", "b8", "b10", "b16", "b4" ], "table_ref": [], "text": "In Figure 3, we show the basic diffusion workflow to generate images with text as input: ❶ First, a user sets up a fundamental component to create images, which includes the text encoder model, U-Net & scheduler, and image decoder. ❷ The user selects specific prompts and adjusts related parameters such as seed, sampling steps, image size, etc. However, due to the universality of the fundamental component, the above process alone can merely generate coarse images. Therefore, additional components such as low-rank adaptation of large language models (Lora) [28] and conditional control of text-toimage diffusion models (ControlNet) [46] are required to create fine images with specific themes. These components jointly contribute to the output image's characteristics, including content, style, and background, which refer to the originality of the image and pose a potential risk of copyright infringement. The functionality of each component is described as follows:\n(1) Base model: The most famous model for AI image generation is the SD series model (e.g., SD1.5 [38], SD2.0, and SD XL [17].) released by StabilityAI [19] and Runway [16]. However, as these models are trained using hundreds of millions of images with mixed styles, they are difficult to generate images with a specific theme. Therefore, the image generation usually relies on secondary-trained models based on these basic models (e.g., DreamShaper [10], GhostMix [12], Anything[4], SDMv10 [18]). These secondary-trained models have unique themes and styles and are more likely to infringe the existing works. We thereby conduct copyright traceability based on the secondary-trained model and refer to it as the Base Model. the primary structure of the generated image. We can transfer the composition or human pose from the reference image to the target image by utilizing ControlNet. Furthermore, ControlNet is almost inseparable in some specific generation tasks, such as specific image structure, design layout, architectural form, line drawing coloring, and other scenes [6]. (5) Prompt: Users can leverage their imagination and employ appropriate prompt words to describe their images to AI to generate desirable results. Based on the content, description instructions can be categorized into type prompts, content prompts, composition prompts, and painter prompts. Among these, painter prompts are primarily intended for generating the image's subject, which can be an artist's painting style or the protagonist of a specific work. However, it is essential to note that such prompts often directly infringe upon the original author's rights, which are called Key Prompt in our research." 
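The workflow just described combines a Base Model, Lora, ControlNet, and prompts into a single diffusion pipeline. A minimal sketch of such a pipeline with the Hugging Face diffusers library is shown below; the repository identifiers, LoRA file name, reference depth map, and prompt are illustrative placeholders, and the exact loading calls may differ across diffusers versions.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# ControlNet component: a depth-conditioned control model (identifier is illustrative).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)

# Base Model component: a Stable Diffusion v1.5 checkpoint (identifier is illustrative).
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16)

# Lora component: fine-tuning weights that push the output toward a specific style
# (directory and file name are placeholders).
pipe.load_lora_weights("path/to/lora_dir", weight_name="style_lora.safetensors")

pipe = pipe.to("cuda")

# Key Prompt plus structural condition: the prompt names the target style/subject,
# while a depth map taken from a reference image controls the composition.
depth_map = load_image("reference_depth_map.png")
image = pipe(
    prompt="portrait in the style of Leonardo da Vinci",   # illustrative Key Prompt
    image=depth_map,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("generated.png")
```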
}, { "figure_ref": [], "heading": "Similarity Metrics", "publication_ref": [ "b7", "b43", "b25" ], "table_ref": [], "text": "To study the issue of copyright tracing more accurately and effectively, we need to quantify the models involved in infringement.\nRegarding AI-generated images, we need to examine their relationship with the original image from multiple dimensions, such as style, structure, and other characteristics. Therefore, we quantify the models associated with infringement based on these dimensions and compare the following quantitative indicators: Cosine similarity (Cosine). Cosine similarity calculates the cosine of the angle between the vectors, equivalent to the vectors' dot product divided by the product of their lengths. In the image similarity calculation, we can convert the image into a feature vector and then use cosine similarity to compare the similarity of these feature vectors. The image cosine similarity ranges from [0, 1], where 1 indicates the same vectors and 0 indicates orthogonality or decorrelation.\nDifference Hash similarity (DHash). The essence of the hash algorithm for image similarity recognition is to hash the image to generate a set of binary numbers and then find similar images by comparing the hash value distance of different images. DHash [9] is a differential hashing algorithm that compares the size of the left and right pixels when hashing an image to obtain the hash sequence. A larger hash value indicates a more tremendous difference between the two images.\nHistogram similarity (Hist). It measures the similarity of two pictures in color distribution. The histogram algorithm counts the number of pixels of different colors in the image, presents it as a histogram, and then compares the image similarity. The value range is [0, 1]. The closer to 1, the more similar the two are. However, the histogram method only considers the color distribution but ignores texture and structure information.\nStructural similarity (SSIM). SSIM [44] simultaneously compares the similarity of images from three aspects: brightness, contrast, and structure. The SSIM algorithm can enhance the structural similarity of images in a group and better detect subtle differences.\nSSIM(𝑟, 𝑔) = (2𝜇 𝑟 𝜇 𝑔 + 𝐶 1 )(2𝜎 𝑟𝑔 + 𝐶 2 ) (𝜇 2 𝑟 + 𝜇 2 𝑔 + 𝐶 1 )(𝜎 2 𝑟 + 𝜎 2 𝑔 + 𝐶 2 )\n.\n(1)\nAmong them, 𝜇 represents the average brightness of the image,𝜎 represents the standard deviation of image brightness, 𝜎 represents the variance of image 𝑟 , 𝑔, and 𝐶 is a constant. The value range of SSIM is [0, 1]. The larger the value, the more similar the two images are.\nFID method (FID). Fréchet Inception Distance [27], originally used as an evaluation index of the generative model to calculate the distance between the real image and the generated image feature" }, { "figure_ref": [], "heading": "Components", "publication_ref": [], "table_ref": [], "text": "Description Models" }, { "figure_ref": [], "heading": "Base Model", "publication_ref": [ "b1", "b16" ], "table_ref": [], "text": "The basic model of stable diffusion determines the style of generated image.\nSDv1-5 [2]: Universal model without specific topic.\nSDMv10 [18]: Based on SDv1-5 and added training of classical works of art." }, { "figure_ref": [], "heading": "ControlNet", "publication_ref": [ "b4" ], "table_ref": [], "text": "A category of models that control image structure by adding additional conditions.\nDepth [6]: Capture the original image's structural depth to control the generated image's structure." 
}, { "figure_ref": [], "heading": "Lora", "publication_ref": [ "b12" ], "table_ref": [], "text": "Fine-tune the generated image.\nLeonardo [14]: Use Davinci's portfolio training to adjust images to more closely resemble Davinci's creative style." }, { "figure_ref": [], "heading": "Key Prompt", "publication_ref": [], "table_ref": [], "text": "Instruct an AI on what to paint. Davinci: Tips for generating Leonardo da Vinci style images.\nMonaLisa: Make the generated image closer to MonaLisa.\nTable 1: Four infringement components have been identified, each representing a type of model set. A brief description of their function is in the Description column. The Models column shows the specific models we used in the Mona Lisa experiment (please see section 4.)\nvector. FID directly considers the distance between the generated data and the accurate data at the feature level, and the smaller the data, the more similar. From a theoretical perspective, FID measures the distance between two multivariate normal distributions, and its calculation formula is as follows,\nFID = ∥ 𝜇 𝑟 -𝜇 𝑔 ∥ 2 2 + 𝑇𝑟 (Σ 𝑟 + Σ 𝑔 -2(Σ 𝑟 Σ 𝑔 ) 1/2 ).(2)\nAmong them, 𝜇 𝑟 and 𝜇 𝑔 represent the mean value of the feature vectors extracted from the real image and the generated image, respectively; Σ 𝑟 and Σ 𝑔 represent the covariance matrix of the original image and the generated image." }, { "figure_ref": [], "heading": "METHODOLOGY", "publication_ref": [ "b3" ], "table_ref": [], "text": "In this section, we present an Identify-Quantify-Evaluate framework called CopyScope to address the issue of AI-generated image copyright traceability at the model level. In the Identify stage, we first rigorously select four pivotal components for describing infringing models by analyzing images generated from Civitai [5].\nIn the Quantify stage, we adopt FID to measure the similarity between the original images and the images generated by our designed model under different alliances of models. In the Evaluate stage, we trace back the possible infringing model by computing the contribution of models using the FID-Shapley value." }, { "figure_ref": [], "heading": "Identify Influential components", "publication_ref": [ "b2" ], "table_ref": [], "text": "We initiate our study by determining components that have the most significant impacts on the generated images. Components make up the AI image generation workflow, which is used to characterize generation models in our proposed copyright tracking approach. This stage is based on a survey from the world's largest AI image generation exchange and sharing platform Civitai, where we collect more than 16,000 generated image data from over 5,000 models to find commonalities in generated images. The generated images are divided into 6 themes: celebrity, film&TV, artwork, popular models, design, and game. We explore the distribution of models that generate images involving copyright infringement by calculating the usage rate of components: Base Model, Lora, ControlNet, and Key Prompt. Table 2 shows the frequency of these components, indicating that they have a high usage rate in AI image-generation tasks.\nWe identify four components that are used in AI image generation at a high frequency: ❶ The Base Model is essential for each generated image. ❷ The second is the Lora. Although the Lora is not a necessary option for generating images, we can find from Table 2 that it has a high application rate in each category, indicating that the use of Lora for adjustment in AI image generation has become a norm. 
❸ Prompts with particular specificity are called Key Prompts. Key Prompt can make the generated image close to the characteristics of these keywords to a large extent, thus infringing on the original author's rights. ❹ The overall usage rate of ControlNet is relatively lower than the other components. This is because the ControlNet is challenging to use as it needs higher environment configuration requirements than Lora and Key Prompt [3]. However, ControlNet is an essential components in generating particular themes as it controls the structure of the image. From the perspective of copyright tracing, the ControlNet is a critical suspected infringement component that our CopyScope framework considers." }, { "figure_ref": [], "heading": "Quantify Model Performance", "publication_ref": [ "b3" ], "table_ref": [], "text": "As shown in Table 1, we choose specific models for each component. We select SDv1-5 and SDMv10 for Base Model, Depth for ControlNet, Leonardo for Lora, Davinci and MonaLisa for Key Prompt. We use these models to simulate 30 different alliances, where the two specific models of Base Model is essential for any alliances of models and the other four specific models of ControlNet, Lora, Key Prompt can be used to form an alliance at the same time. In this way, the alliance of models is calculated as 2 × (𝐶 1 4 + 𝐶 2 4 + 𝐶 3 4 + 𝐶 4 4 ) = 30. We then use each alliance to generate 100 batches of images to generate the dataset of 3, 000 images for quantification and evaluation.\nWe propose to explore which model can affect the generated image and thereby cause infringement by studying the generated and original images. Therefore, accurately measuring the similarity between the generated and original images is crucial to quantification. We quantify the performance of each alliance by measuring the similarity of the images generated by the alliance and the original 2: The dataset for identifying infringement components in the CopyScope framework. The generated images in the dataset come from AI-generated images platform Civitai [5]. The generated images in the dataset are divided into six major themes, and the usage ratio of the four types of infringement components in each category is marked. image. To this end, we present the results of multiple indicators such as Cosine, Hist, DHash, SSIM, and FID as described in section 2.2.\nAccording to the results in the following experiment, we comparatively select FID as the quantification method because the results of images' similarity are more approximate to the human intuition. Furthermore, in section 4.2, we compare these indicators and found that FID has a more accurate similarity description ability than other quantitative methods. Additionally, the discrimination between the generated results of different alliances is more remarkable, which is more helpful for our subsequent evaluation." }, { "figure_ref": [ "fig_3" ], "heading": "Evaluate Contributions of Models", "publication_ref": [ "b44", "b37", "b23", "b24", "b33", "b39" ], "table_ref": [], "text": "In the sections above, we have identified the copyright-related components in the diffusion workflow and proposed quantitative metrics to measure image similarity. To achieve copyright traceability and determine the level of infringement for different models, we introduce the Evaluate stage as the final step of the CopyScope framework, as shown in Figure 3. The Evaluate stage aims to assess the contribution of each model in the output image. 
We formally define the Evaluate stage as follows: Definition 3.1. (Evaluate stage) Given a set of models M = {𝑧 1 , 𝑧 2 , . . . , 𝑧 𝑛 }, where each 𝑧 𝑖 is a specific model of the component in the diffusion workflow, the Evaluate stage aims to find the value of 𝑣 (𝑧) of each model 𝑧, which represents how much it contributes to the copyright of the generated image.\nLet L = {𝑧 1 , ..., 𝑧 𝑁 } be an alliance formed by 𝑁 models and 𝑈 (•) be the value function that can be applied on any subset of the alliance L. To evaluate the contribution of 𝑖-th model 𝑧 𝑖 to the overall value of the alliance L, two widely used solutions from cooperative game theory are Leave-one-out (LOO) [45] and Shapley value (SV) [39]. LOO-based Method. The idea of LOO is to measure the marginal contribution of each model 𝑧 to the alliance by removing it from the alliance and observing the difference. For the 𝑖-th model 𝑧 𝑖 , its contribution 𝑣 𝐿𝑂𝑂 (𝑧 𝑖 ), also referred to as the LOO value, can be obtained as follows,\n𝑣 𝐿𝑂𝑂 (𝑧 𝑖 ) ∝ 𝑈 (L) -𝑈 (L\\𝑧 𝑖 ).(3)\nSV-based Method. The SV-based method was originally used to provide a fair way of dividing the benefits for players in a coalition based on their individual and joint contributions. To calculate the contribution of a model in the alliance, the SV-based method considers all possible sub-alliances that include that model and then takes the weighted average of the differences between the value of Calculate the FID-Shapley value of 𝑧: 𝑣 (𝑧 ) = 𝑐 (𝑧) 𝑁 ;\n14:\nAppend to FID-Shapley value set R ← 𝑣 (𝑧 ); 15: end for 16: Return: R. each sub-alliance with and without that model. For the 𝑖-th model 𝑧 𝑖 , its contribution 𝑣 𝑆𝑉 (𝑧 𝑖 ), also referred to as the Shapley value, can be obtained as follows,\n𝑣 𝑆𝑉 (𝑧 𝑖 ) ∝ 1 𝑁 ∑︁ L ⊆ M\\𝑧 𝑖 [𝑈 (L ∪𝑧 𝑖 ) -𝑈 (L)] |L|!(𝑁 -1 -|L|)! (𝑁 -1)! .(4)\nWe represent the models in the diffusion workflow as an alliance in a cooperative game [25,26,35], as each model jointly forms an image generation alliance, as shown in Table 3, and the value of the model can be calculated by a value function, which is the similarity metrics in our work. Each model is independent and does not affect each other, and the diffusion workflow can be built by any sub-alliance from the models. Meanwhile, the models' alliance satisfies the following properties [41], which enables us to design a contribution evaluation method based on the Shapley value.\n• Property 1.\n𝑧 ∈ M 𝑣 (𝑧) = 𝑈 (M). The Shapley values of all collaborators add up to the value of the grand coalition, ensuring that the total benefit is shared among them.\n• Property 2. For a player 𝑧, if 𝑈 (L) = 𝑈 (L ∪ 𝑧) holds for any alliance L, then 𝑣 (𝑧) = 0. This implies that if a model that participates in the diffusion workflow leaves the generated image unchanged from its absence, then this model does not affect the image generation and deserves zero contribution. Original image\n(1)(2) (3) (4) (5) (6) (7)\n(8) (9) (12) (13) (14) (11) (10)\nFigure 4: The different models (e.g., SDMv10, Depth, Davinci, etc.) are alliances to simulate the generation of Leonardo da Vinci's Mona Lisa images. In this example, we explore 30 alliances, including six models. We generate 100 batches of images for each alliance to explore the similarity between the images generated by different alliances and the original images. 
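Equation (4) above defines the Shapley value used in the Evaluate stage. Below is a minimal Python sketch that enumerates all sub-alliances exactly as in that formula; the FID-based value function is left to the caller (here a toy lookup table), since a real run would plug in the average FID, suitably oriented, between the images generated by each alliance and the original image.

```python
from itertools import combinations
from math import factorial

def shapley_values(models, value):
    """Exact Shapley value of each model (Eq. 4).

    `models` is a list of model names; `value(alliance)` returns U(alliance)
    for any subset of models, e.g. an FID-based score for that alliance.
    """
    n = len(models)
    phi = {}
    for z in models:
        others = [m for m in models if m != z]
        total = 0.0
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                L = frozenset(subset)
                weight = factorial(k) * factorial(n - 1 - k) / factorial(n)
                total += weight * (value(L | {z}) - value(L))
        phi[z] = total
    return phi

# Illustrative usage with a toy value table over two models.
if __name__ == "__main__":
    toy_scores = {
        frozenset(): 0.0,
        frozenset({"SDMv10"}): 0.1,
        frozenset({"Davinci"}): 0.5,
        frozenset({"SDMv10", "Davinci"}): 0.7,
    }
    print(shapley_values(["SDMv10", "Davinci"], lambda L: toy_scores[frozenset(L)]))
```

By efficiency (Property 1), the two toy Shapley values (0.15 and 0.55) sum to the grand-coalition value 0.7.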
Based on the above analysis, we demonstrate that our scenario theoretically fits the cooperative game model, and we can use the LOO and Shapley value methods to evaluate the model contribution in the diffusion workflow of image generation. We innovatively adopt the FID as the value function and propose the FID-Shapley algorithm to measure the contribution of each model alliance in AI image generation, as shown in Algorithm 1. The subsequent experiments show that FID-Shapley can accurately reflect the contribution of each model to the AI-generated images, and also agree with the human observation on the infringement. Models with high FID-Shapley value have a more significant impact on the AIgenerated images and are more likely to cause infringement issues, which provides guidance for AI-generated image users to pay extra attention to these models with high FID-Shapley value in order to avoid copyright infringement in real-world." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "We conduct thorough experiments to ensure that the CopyScope framework can effectively solve the copyright traceability problem of AI-generated images." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "To investigate the specific infringement models involved in AI image generation, we use a diffusion workflow to generate Mona Lisa paintings, one of the classic artworks. We construct the AI image generation workflow using the models in Table 1. We set SDv1-5 as the pipeline benchmark and assume its contribution to the generated image is zero. Then, we introduce SDMv10 as Base model, Depth as ControlNet model, Leonardo as Lora model, and \"Davinci\" and \"MonaLisa\" as two Key Prompts, and thus we obtain the models set M = { SDMv10, Depth, Davinci, \"MonaLisa\", \"Leonardo\" }. We keep the hyperparameters constant for all experiments, including image size, scale, sample step, etc. We construct a total number of 30 diffusion workflows by different models alliances. With each diffusion workflow, we generate 100 images and evaluate their contribution using their average FID value." }, { "figure_ref": [ "fig_7" ], "heading": "Quantitative Metrics for Model Performance", "publication_ref": [], "table_ref": [], "text": "In this experiment, we compare different quantitative metrics for model performance introduced in Section 2.2 and demonstrate that the FID is the most effective metric for identifying the infringement models in AI image generation. Table 3 presents the similarity values between the images generated by various diffusion workflows and the original images under different metrics. The Cosine and RGB-SSIM metrics give similar similarity values for different workflows, which implies that they are not sensitive to the model alliance. Thus, they are not informative for assessing the contribution of various models in the workflow. As shown in Figure 4, the Hist and DHash metrics, on the other hand, give similarity values that do not agree with the human perception of infringement. The Hist metric only considers the color distribution of the images, and the DHash metric only considers the hash fingerprint of the images. 
The SSIM and FID metrics have better discrimination ability on the similarity values between different generated images and original images, where SSIM measures the structural similarity between images, reflected in higher similarity values for images with similar outlines, and FID measures the similarity from more comprehensive image features, including outline, style, content, etc. As shown in Figure 5, under pixel-level evaluation, high FID values correspond to overall similarity, while high SSIM values correspond only to clear structural similarity. Therefore, FID is more appropriate for tracing the infringement models in AI image generation." }, { "figure_ref": [ "fig_8", "fig_9", "fig_9", "fig_9", "fig_9", "fig_9" ], "heading": "LOO vs. Shapley: Contribution Evaluation Experiment", "publication_ref": [], "table_ref": [], "text": "In this experiment, we compare the methods proposed in Section 3.3 for contribution evaluation, FID-LOO and FID-Shapley value.\nFigure 6 shows the normalized contribution of each model to the generation of AI image generation, calculated by the two methods. We observe that there are differences in the contribution evaluation between the two methods: under FID-LOO, the contribution order is: 𝑆𝐷𝑀𝑣10 >\"Leonardo\"> 𝐷𝑒𝑝𝑡ℎ >\"MonaLisa\"> 𝐷𝑎𝑣𝑖𝑛𝑐𝑖, while under FID-Shapley value, the contribution order is: 𝐷𝑎𝑣𝑖𝑛𝑐𝑖 > 𝐷𝑒𝑝𝑡ℎ > \"MonaLisa\">\"Leonardo\"> 𝑆𝐷𝑀𝑣10. To verify which one of FID-LOO and FID-Shapley values provides a more accurate contribution evaluation in AI image generation, we further conduct ablation experiments. We drop out each model separately and calculate the average FID value of the original image with the images generated by the diffusion models compromised by all subsets composed of the remaining models to measure the impact of the dropped model on the generated AI image generation. We repeat generating images of 100 batches with regularized seed (reg seed: green line marked by triangles in Figure 7) and random seed (ran seed: blue line marked by squares in Figure 7). The red line marked by circles in Figure 7 shows the FID of the AI images generated by the complete set of models and the original image. From Figure 7, we observe that when dropping out Davinci, it causes the largest deviation of FID value, meaning that the similarity between the generated image and the original image significantly decreases after removing Davinci model, which also means that Davinci model has the largest contribution to the copyright of the generated AI image. Similarly, from Figure 7, we can observe that the following important models are Depth, \"MonaLisa\" and \"Leonardo\", SDMv10, respectively, which is consistent with the contribution evaluation results of FID-Shapley value. The experiment proves that the FID-Shapley value provides a more accurate and realistic contribution evaluation in AI image generation than FID-LOO." }, { "figure_ref": [ "fig_2" ], "heading": "RELATED WORK", "publication_ref": [ "b21", "b34", "b42", "b28", "b31", "b32", "b22" ], "table_ref": [], "text": "Data Attribution: Some previous works traced copyright by evaluating the relationship between training input and generated output. For example, Datta et al. [23] explored the impact of black-box machine learning model inputs on the algorithm by proposing a transparent mechanism. Park et al. [36] introduced the method of using Tracing with the Randomly-projected After Kernel (TRAK) to implement the data attribution problem of large-scale models. Wang et al. 
[43] evaluated data attribution for text-to-image models through the \"customization\" method. However, such methods were based on the premise of a known training dataset. As shown in Figure 2, in AI image generation, the user can generate images by using the selected model with some prompts. The specific images used to train the model were only known by the model provider. Therefore, tracing copyright from the perspective of training images is challenging to achieve in real-world practices.\nFingerprint Traceability Another sort of approach attempted to track the models by adding a fingerprint to the model. Kim et al. [30] modified the model according to each user's unique digital fingerprint, so each text-to-image result was embedded with a unique fingerprint, and the model was traced through the fingerprint. Marra et al. [33] experimented with several popular GAN architectures and datasets and demonstrated the existence of GAN fingerprints and their value for reliable forensic analyses. Nie et al. [34] investigated the use of latent semantic dimensions as fingerprints, improved fingerprinting methods exhibit a significant tradeoff between robust attribution accuracy and generation quality, and enhanced the efficiency of fingerprint identification methods. Fernandez et al. [24] introduced an active strategy combining image watermarking and Latent Diffusion Models (LDM) by fine-tuning the decoder of LDM and embedding watermarks in all the generated images. Although these work have studied various watermarking methods to achieve model traceability, these methods can only trace back to a chosen specific model in their experiments, without considering the interplay among models in the complex AI image generation task.\nTo address the challenge of potential copyright infringement in AI-generated images, we have proposed a new framework called CopyScope that could identify different copyright infringement sources at the model level in the AI image generation process and evaluate their impact. We have proposed a FID-based Shapley algorithm to assess the infringement contribution of each model in the diffusion workflow. Extensive results have demonstrated that our proposed CopyScope framework could effectively zoom in on the sources and quantify the impact of infringement models in AI image generation. Our work offers a promising solution for copyright traceability in AI image generation, which could also promote the legally compliant use of AI-generated content. " } ]
Web-based AI image generation has become an innovative art form that can generate novel artworks with the rapid development of the diffusion model. However, this new technique brings potential copyright infringement risks, as it may incorporate existing artworks without the owners' consent. Copyright infringement quantification is the primary and challenging step towards AI-generated image copyright traceability. Previous work focused only on data attribution from the training-data perspective, which is unsuitable for tracing and quantifying copyright infringement in practice for the following reasons: (1) the training datasets are not always publicly available; (2) the model provider, not the image, is the responsible party. Motivated by this, in this paper we propose CopyScope, a new framework to quantify the infringement of AI-generated images at the model level. We first rigorously identify pivotal components within the AI image generation pipeline. Then, we propose to take advantage of Fréchet Inception Distance (FID) to effectively capture image similarity in a way that fits human perception naturally. We further propose the FID-based Shapley algorithm to evaluate the infringement contribution among models. Extensive experiments demonstrate that our work not only reveals the intricacies of infringement quantification but also effectively depicts the infringing models quantitatively, thus promoting accountability in AI image-generation tasks.
CopyScope: Model-level Copyright Infringement Quantification in the Diffusion Workflow
[ { "figure_caption": "Figure 1 :1Figure 1: The image (1) was created by painter Erin Hanson in 2021, and the image (2) was created by Stable Diffusion using the \"style of Erin Hanson\" as a prompt. The styles of these two images are so similar that it is impossible to tell them apart.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "To our knowledge, we are the first to propose a new copyright infringement quantification framework CopyScope at the model level, which facilitates stakeholders to investigate the emerging intricate infringement case. • We propose the FID-based Shapley algorithm to effectively quantify the infringement in the diffusion workflow, which takes advantage of Fréchet Inception Distance (FID) to effectively capture the image similarity that fits human perception naturally, and the Shaplely value scheme to quantify the infringement contribution among models. • We conduct extensive experiments to demonstrate the effectiveness of our proposed framework CopyScope, which depicts the infringing models quantitatively, thus promoting the legally compliant use of AI-generated content.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The interaction between user and model. It shows that the training images are invisible other than the model provider, and each model is uniquely associated with its provider, which is possible to conduct a model-side infringement tracing.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Illustration of the diffusion workflow, which consists of two parts: (a) the image creation process, in which multiple components jointly contribute to the generation of images; (b) the Evaluate stage, in which the contribution of multiple models on the generated image's copyright is evaluated.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 FID-Shapley Algorithm 1 : 9 :119Input: All models: M = {𝑧 1 , ..., 𝑧 𝑁 }; the alliances set S = { L 1 , L 2 , . . . , L 𝑛 }, where S contains all possible alliance L from M; the FID-based value function 𝑈 (•). 2: Initialize: An FID-Shapley value set R = ∅, FID-Shapley value 𝑣 = 0. 3: for each model 𝑧 in M do 4: Initialize margin contribution 𝑐 (𝑧 ) = 0; 5: for each alliance L in S do 6: if 𝑧 ∉ L then Update the margin contribution of model 𝑧: 10: 𝑐 (𝑧 ) ← 𝑐 (𝑧 ) + [𝑈 ( L ) -𝑈 ( L -𝑧 ) ] × |L|!(𝑁 -1-|L|)", "figure_data": "", "figure_id": "fig_4", "figure_label": "119", "figure_type": "figure" }, { "figure_caption": "Bold:Best performance compared to all alliances. ↑: Higher value corresponds to the generated image being more similar to the original image. ↓: Lower value corresponds to the generated image being more similar to the original image. Table 3: Quantitative results under different Quantitative metrics. The Figure No. column corresponds to each generated Mona Lisa image in Figure 4. The Alliances column gives the model alliances used in each generated image(more quantitative data from the alliance, see Appendix A). 
The Cosine∼FID column is the similarity score between each generated and original image under different quantification methods.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "• Property 3 .3If player 𝑧 𝑖 and player 𝑧 𝑗 satisfy 𝑈 (L ∪ 𝑧 𝑖 ) = 𝑈 (L ∪ 𝑧 𝑗 ), for any alliance L that does not include 𝑧 𝑖 and 𝑧 𝑗 , then 𝑣 (𝑧 𝑖 ) = 𝑣 (𝑧 𝑗 ). If multiple models contribute equally, they should be assigned the same contribution. • Property 4. For two value functions 𝑈 and 𝑈 ′ , we calculate the Shapley value of a player 𝑧 based on 𝑈 , 𝑈 ′ and 𝑈 +𝑈 ′ , denoted by 𝑣 (𝑧), 𝑣 ′ (𝑧) and 𝑣 ′′ (𝑧), respectively. Then, we have 𝑣 ′′ (𝑧) = 𝑣 (𝑧) + 𝑣 ′ (𝑧). If we combine two coalition games with value functions 𝑈 and 𝑈 ′ , then the Shapley value of the combined coalition is equal to the sum of the Shapley values from each individual coalition. In our scenario, the image generation tasks based on different value functions are independent, and their contribution evaluations satisfy such additivity property.", "figure_data": "", "figure_id": "fig_6", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The pixel difference between the generated image and the original image, where darker color indicates larger differences, and lighter color indicates smaller differences.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Comparison of contribution values under FID-LOO (left) and FID-Shapley (right) methods.", "figure_data": "", "figure_id": "fig_8", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Ablation experiment. Corresponding models are eliminated in order, and the average FID value of all alliance solutions composed of the remaining models is calculated. The greater the difference between the remaining alliances' FID and the all-model alliance's FID, the more influential the model is.", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "Component: PromptsText(b) Component: Model ContributionEncoderControlNet(CLIPText)Component: Base ModelGenerated imageText EmbeddingLatent spaceImage DecoderImageAuto+INPUTTensorencoder-decoderU-Net + SchedulerRandom image distribution tensorComponent: Lora2) U-Net & Scheduler: U-Net iteratively denoises the Gauss-ian noise matrix in a diffusion loop, and the noise of eachprediction is guided by text and timesteps. The predictednoise is removed from the random Gaussian noise matrix,and finally, the random Gaussian noise matrix is convertedto the latent features of the image. The scheduler is responsi-ble for the forward and backward propagation of the entireDiffusion model. It processes the model's output duringtraining and inference according to the set mathematicalrules and the number of timesteps.(3) Lora: A fine-tuning model generates a specific type of im-age with a particular style of artist (e.g., Ghibli Style[21],Davinci Style[14], One Piece Style[15]). 
Such models aretrained at lower cost and are more likely to generate imageswith particular topics, making them more susceptible toproducing infringing images.(4) ControlNet: A neural network model is deployed to regu-late the stable diffusion model, which enables control over", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "A full quantitative results under different metrics.", "figure_data": "SDv1-5+Davinci+MonaLisa0.93280.45910.65620.72350.9420241.1813SDv1-5+Davinci+Leonardo0.75430.44200.51560.82420.9864260.6814SDv1-5+MonaLisa+Leonardo0.77300.52070.50000.26590.9279332.4515SDMv10+Depth+Davinci0.88980.52270.73430.62140.9819209.0616SDMv10+Depth+MonaLisa0.87980.53930.73430.47660.9910246.0117SDMv10+Depth+Leonardo0.84890.57350.70310.66850.9245240.1618SDMv10+Davinci+MonaLisa0.85660.12060.67180.32290.8826230.7019SDMv10+Davinci+Leonardo0.78970.56340.54680.20480.9280301.2020SDMv10+MonaLisa+Leonardo0.82480.67550.64060.18180.9254310.0621SDv1-5+Depth+Davinci+MonaLisa0.90870.57890.75000.96840.9963233.2122SDv1-5+Depth+Davinci+Leonardo0.81560.57020.71870.82510.9528227.4523SDv1-5+Depth+MonaLisa+Leonardo0.79770.61900.71870.42650.9564217.4124SDv1-5+Davinci+MonaLisa+Leonardo0.82320.34360.54680.41780.8974238.6325SDMv10+Depth+Davinci+MonaLisa0.85900.57090.79680.45280.9775219.0126SDMv10+Depth+Davinci+Leonardo0.86360.56360.81250.48070.9659194.7127SDMv10+Depth+MonaLisa+Leonardo0.81420.51130.75000.49740.9315240.7128SDMv10+Davinci+MonaLisa+Leonardo0.80810.66630.62500.28570.9558248.5429SDv1-5+Depth+Davinci+MonaLisa+Leonardo0.82790.53560.76560.74630.9689220.4030SDMv10+Depth+Davinci+MonaLisa+Leonardo0.86050.40960.85930.44810.9733184.69", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" } ]
Junlei Zhou; Jiashi Gao; Ziwei Wang; Xuetao Wei
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Midjourney", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b1", "title": "SDv1-5", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b2", "title": "Stable Diffusion Extensions", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b3", "title": "Civitai", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b4", "title": "ControlNet", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b5", "title": "Cosine similarity", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b6", "title": "DALL•E2", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b7", "title": "DHash", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b8", "title": "DreamShaper", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b9", "title": "Getty Images lawsuit says Stability AI misused photos to train AI | Reuters", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b10", "title": "GhostMix", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b11", "title": "Histogram similarity", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b12", "title": "Leonardo da Vinci style", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b13", "title": "One Piece Style LoRA", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b14", "title": "Runway", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b15", "title": "SD-XL by StabilityAI", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b16", "title": "SDMv10", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b17", "title": "StabilityAI", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b18", "title": "Stable Diffusion", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b19", "title": "Studio Ghibli Style LoRA", "year": "2023" }, { "authors": "Zheng Dai; David K Gifford", "journal": "", "ref_id": "b20", "title": "Training Data Attribution for Diffusion Models", "year": "2023" }, { "authors": "Anupam Datta; Shayak Sen; Yair Zick", "journal": "", "ref_id": "b21", "title": "Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems", "year": "2016" }, { "authors": "Pierre Fernandez; Guillaume Couairon; Hervé Jégou; Matthijs Douze; Teddy Furon", "journal": "", "ref_id": "b22", "title": "The Stable Signature: Rooting Watermarks in Latent Diffusion Models", "year": "2023" }, { "authors": "Amirata Ghorbani; James Zou", "journal": "", "ref_id": "b23", "title": "Data Shapley: Equitable Valuation of Data for Machine Learning", "year": "2019" }, { "authors": "Qiangqiang He; Yu Qiao; Shang Yang; Chongjun Wang", "journal": "Springer WASA", "ref_id": "b24", "title": "Equitable Valuation of Crowdsensing for Machine Learning via Game Theory", "year": "2021" }, { "authors": "Martin Heusel; Hubert Ramsauer; Thomas Unterthiner; Bernhard Nessler; Sepp Hochreiter", "journal": "", "ref_id": "b25", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b26", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "Ruoxi Jia; Fan Wu; Xuehui Sun; Jiacen Xu; David Dao; Bhavya Kailkhura; Ce Zhang; Bo Li; Dawn Song", "journal": "", "ref_id": "b27", "title": "Scalability vs. 
utility: Do we have to sacrifice one for the other in data importance quantification?", "year": "2021" }, { "authors": "Changhoon Kim; Kyle Min; Maitreya Patel; Sheng Cheng; Yezhou Yang", "journal": "", "ref_id": "b28", "title": "WOUAF: Weight Modulation for User Attribution and Fingerprinting in Text-to-Image Diffusion Models", "year": "2023" }, { "authors": "Yongchan Kwon; James Zou", "journal": "", "ref_id": "b29", "title": "Beta shapley: a unified and noise-reduced data valuation framework for machine learning", "year": "2021" }, { "authors": "Rafid Mahmood; James Lucas; David Acuna; Daiqing Li; Jonah Philion; Jose M Alvarez; Zhiding Yu; Sanja Fidler; Marc T Law", "journal": "", "ref_id": "b30", "title": "How Much More Data Do I Need? Estimating Requirements for Downstream Tasks", "year": "2022" }, { "authors": "Francesco Marra; Diego Gragnaniello; Luisa Verdoliva; Giovanni Poggi", "journal": "IEEE MIPR", "ref_id": "b31", "title": "Do GANs Leave Artificial Fingerprints?", "year": "2019" }, { "authors": "Guangyu Nie; Changhoon Kim; Yezhou Yang; Yi Ren", "journal": "", "ref_id": "b32", "title": "Attributing Image Generative Models Using Latent Fingerprints", "year": "2023" }, { "authors": "Konstantin D Pandl; Fabian Feiland; Scott Thiebes; Ali Sunyaev", "journal": "", "ref_id": "b33", "title": "Trustworthy Machine Learning for Health Care: Scalable Data Valuation with the Shapley Value", "year": "2021" }, { "authors": "Min Sung; Kristian Park; Andrew Georgiev; Guillaume Ilyas; Aleksander Leclerc; Madry", "journal": "", "ref_id": "b34", "title": "TRAK: Attributing Model Behavior at Scale", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "", "ref_id": "b35", "title": "Learning Transferable Visual Models From Natural Language Supervision", "year": "2021" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b36", "title": "High-Resolution Image Synthesis with Latent Diffusion Models", "year": "2022" }, { "authors": "Lauren Benedek Rozemberczki; Péter Watson; Hao-Tsung Bayer; Olivér Yang; Sebastian Kiss; Rik Nilsson; Sarkar", "journal": "", "ref_id": "b37", "title": "The Shapley Value in Machine Learning", "year": "2022" }, { "authors": "Zeyang Sha; Zheng Li; Ning Yu; Yang Zhang", "journal": "", "ref_id": "b38", "title": "De-fake: Detection and attribution of fake images generated by text-to-image diffusion models", "year": "2022" }, { "authors": "S Lloyd; Shapley", "journal": "", "ref_id": "b39", "title": "Notes on the n-person game-ii: The value of an n-person game", "year": "1951" }, { "authors": "S Lloyd; Shapley", "journal": "", "ref_id": "b40", "title": "", "year": "1951" }, { "authors": "Rachael Hwee; Ling Sim; Xinyi Xu; Bryan Kian; Hsiang Low", "journal": "", "ref_id": "b41", "title": "Data valuation in machine learning:\"ingredients\", strategies, and open challenges", "year": "2022" }, { "authors": "Sheng-Yu Wang; Alexei A Efros; Jun-Yan Zhu; Richard Zhang", "journal": "", "ref_id": "b42", "title": "Evaluating Data Attribution for Text-to-Image Models", "year": "2023" }, { "authors": "Zhou Wang; Alan C Bovik; Hamid R Sheikh; Eero P Simoncelli", "journal": "IEEE transactions on image processing", "ref_id": "b43", "title": "Image quality assessment: from error visibility to structural similarity", "year": "2004" }, { "authors": "Zhao Ying; 
Kwoh Chee Keong ", "journal": "", "ref_id": "b44", "title": "Fast leave-one-out evaluation and improvement on inference for LS-SVMs", "year": "2004" }, { "authors": "Lvmin Zhang; Anyi Rao; Maneesh Agrawala", "journal": "", "ref_id": "b45", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" } ]
[ { "formula_coordinates": [ 2, 72.99, 298.38, 4.46, 7.7 ], "formula_id": "formula_0", "formula_text": "•" }, { "formula_coordinates": [ 3, 359.07, 593.45, 154.11, 23.6 ], "formula_id": "formula_2", "formula_text": "SSIM(𝑟, 𝑔) = (2𝜇 𝑟 𝜇 𝑔 + 𝐶 1 )(2𝜎 𝑟𝑔 + 𝐶 2 ) (𝜇 2 𝑟 + 𝜇 2 𝑔 + 𝐶 1 )(𝜎 2 𝑟 + 𝜎 2 𝑔 + 𝐶 2 )" }, { "formula_coordinates": [ 4, 86.23, 337.66, 208.35, 13.55 ], "formula_id": "formula_3", "formula_text": "FID = ∥ 𝜇 𝑟 -𝜇 𝑔 ∥ 2 2 + 𝑇𝑟 (Σ 𝑟 + Σ 𝑔 -2(Σ 𝑟 Σ 𝑔 ) 1/2 ).(2)" }, { "formula_coordinates": [ 5, 118.86, 629.38, 175.72, 8.43 ], "formula_id": "formula_4", "formula_text": "𝑣 𝐿𝑂𝑂 (𝑧 𝑖 ) ∝ 𝑈 (L) -𝑈 (L\\𝑧 𝑖 ).(3)" }, { "formula_coordinates": [ 5, 322.21, 474.96, 236.53, 24.77 ], "formula_id": "formula_5", "formula_text": "𝑣 𝑆𝑉 (𝑧 𝑖 ) ∝ 1 𝑁 ∑︁ L ⊆ M\\𝑧 𝑖 [𝑈 (L ∪𝑧 𝑖 ) -𝑈 (L)] |L|!(𝑁 -1 -|L|)! (𝑁 -1)! .(4)" }, { "formula_coordinates": [ 6, 176.56, 474.46, 363.77, 8.75 ], "formula_id": "formula_6", "formula_text": "(1)(2) (3) (4) (5) (6) (7)" }, { "formula_coordinates": [ 6, 175.95, 554.02, 365.74, 8.52 ], "formula_id": "formula_7", "formula_text": "(8) (9) (12) (13) (14) (11) (10)" } ]
2023-10-24
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b24" ], "table_ref": [], "text": "A fundamental aspect of software safety is arguably the modelling of its expected operational domain through a formal or semi-formal specification, giving clear boundaries on when it is sensible to deploy the program, and when it is not. It is however difficult to define such boundaries for machine learning programs, especially for visual classifiers based on artificial neural networks (ANN) that process high-dimensional data (images, videos) and are the result of a complex optimisation procedure. In this context, Out-of-Distribution (OoD) detection -which aims to detect whether an input of an ANN is In-Distribution (ID) or outside of it -serves several purposes: 1) it helps characterise the extent to which the ANN can operate outside a bounded dataset; 2) it constitutes a surrogate measure of the generalisation abilities of the ANN; 3) it can help assess when an input is too far away from the operational domain, which prevents misuses of the program and increases its safety. However, one crucial aspect missing from current OoD detection methods is the ability to provide some form of explanation of their decision. Indeed, most approaches are based on a statistical model of the system behaviour, built upon an abstract representation of the input data, sometimes turning OoD detection into an opaque decision that may appear arbitrary to the end-user. While it would be possible to generate a visual representation of the abstract space using tSNE and to highlight ID data clusters for justifying the OoD-ness of a given sample, tSNE is extremely dependent on the choice of hyper-parameters, sometimes generating misleading visualisations [25]. In this regard, methods from the field of Explainable AI (XAI), which are typically used to provide some insight about the decisionmaking process of the model, can be adapted to build models for OoD detection that provide some context information to justify their decision. In the particular task of image classification, XAI methods can help extract visual cues that are class-specific (e.g., a bird has wings), and whose presence or absence can help characterise the similarity of the input image to the target distribution (e.g., an object classified as a bird that shows neither wings nor tail nor beak is probably an OoD input). Therefore, in this work we make the following contributions:\n1. We introduce a new benchmark based on perturbations of the ID dataset which provides a known and quantifiable evaluation of the discrepancy between the ID and OoD datasets that serves as a reference value for the comparison between various OoD detection methods (Sec. 3). 2. We propose CODE, an OoD agnostic detection measure that does not require any fine-tuning of the original classifier. Pattern identification allows us to provide images from the ID dataset as reference points to justify the decision (Sec. 4). Finally, we demonstrate the capabilities of this approach in a broad comparison with existing methods (Sec. 5)." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b7", "b14", "b12", "b15", "b20", "b19", "b5", "b8", "b17", "b23", "b21", "b9", "b1", "b5", "b20", "b0", "b26", "b26", "b16", "b27", "b13", "b3", "b25", "b28", "b25", "b6" ], "table_ref": [], "text": "Out-of-distribution detection. In this work, we focus on methods that can apply to pre-trained classifiers. 
Therefore, we exclude methods which integrate the learning of the confidence measure within the training objective of the model, or specific architectures from the field of Bayesian Deep-Learning that aim at capturing uncertainty by design. Moreover, we exclude OoD-specific methods that use a validation set composed of OoD samples for the calibration of hyperparameters, and focus on OoD-agnostic methods that require only ID samples. In this context, the maximum softmax probability (MSP) obtained after normalisation of the classifier logits constitutes a good baseline for OoD detection [8].\nMore recently, ODIN [15] measures the local stability of the classifier using gradient-based perturbations, while MDS [13] uses the Mahalanobis distance to class-specific points in the feature space. [16] proposes a framework based on energy scores, which is extended in the DICE method [21] by first performing a class-specific directed sparsification of the last layer of the classifier. ReAct [20] also modifies the original classifier by rectifying the activation values of the penultimate layer of the model. [6] proposes two related methods: MaxLogitbased on the maximum logit value -and KL-Matching which measures the KL divergence between the output of the model and the class-conditional mean softmax values. The Fractional Neuron Region Distance [9] (FNRD) computes the range of activations for each neuron over the training set in order to empirically characterise the statistical properties of these activations, then provides a score describing how many neuron outputs are outside the corresponding range boundaries for a given input. Similarly, for each layer in the model, [18] computes the range of pairwise feature correlation between channels across the training set. ViM [24] adds a dedicated logit for measuring the OoD-ness of an input by using the residual of the feature against the principal space. KNN [22] uses the distance of an input to the k-th nearest neighbour. Finally, GradNorm [10] measures the gradients of the cross-entropy loss w.r.t. the last layer of the model.\nEvaluation of OoD detection. All methods presented above are usually evaluated on different settings (e.g., different ID/OoD datasets), sometimes using only low resolution images (e.g., MNIST [2]), which only gives a partial picture of their robustness. Therefore, recent works such as [6,21] -that evaluate OoD methods on datasets with higher resolution (e.g., ImageNet [1]) -or Open-OoD [27] -which aims at standardising the evaluation of OoD detection, anomaly detection and open-set recognition into a unified benchmark -are invaluable. However, when evaluating the ability of a method to discriminate ID/OoD datasets, it is often difficult to properly quantify the margin between these two datasets, independently from the method under test, and to establish a \"ground truth\" reference scale for this margin. Although [27] distinguishes \"near-OoD datasets [that] only have semantic shift compared with ID datasets\" from \"far-OoD [that] further contains obvious covariate (domain) shift \", this taxonomy lacks a proper way to determine, given two OoD datasets, which is \"further\" from the ID dataset. Additionally, [17] generates \"shifted sets\" that are \"perceptually dissimilar but semantically similar to the training distribution\", using a GAN model for measuring the perceptual similarity, and a deep ensemble model for evaluating the semantic similarity between two images. 
However, this approach requires the training of multiple models in addition to the classifier. Thus, in this paper we propose a new benchmark based on gradated perturbations of the ID dataset. This benchmark measures the correlation between the OoD detection score returned by a given method when applied to a perturbed dataset (OoD), and the intensity of the corresponding perturbation.\nPart detection. Many object recognition methods have focused on part detection, in supervised (using annotations [28]), weakly-supervised (using class labels [14]) or unsupervised [4,26,29] settings, primarily with the goal of improving accuracy on hard classification tasks. To our knowledge, the PARTICUL algorithm [26] is the only method that includes a confidence measure associated with the detected parts (used by the authors to infer the visibility of a given part). PARTICUL aims to identify recurring patterns in the latent representation of a set of images processed through a pre-trained CNN, in an unsupervised manner. It is, however, restricted to homogeneous datasets where all images belong to the same macro-category. For more heterogeneous datasets, it becomes difficult to find recurring patterns that are present across the entire training set.\n3 Beyond cross-dataset evaluation: measuring consistency against perturbations\nIn this section, we present our benchmark for evaluating the consistency of OoD detection methods using perturbations of the ID dataset. Let f : X → R N be a classifier trained on a dataset X train ∼ P id , where P id is a distribution over X × R N and N is the number of categories learned by the classifier. We denote D id the marginal distribution of P id over X . For any image x ∈ X , f outputs a vector of logits f (x) ∈ R N . The index of the highest value in f (x) corresponds to the most probable category (or class) of x -relative to all other categories.\nWithout loss of generality, the goal of an OoD detection method is to build a class-conditional 4 confidence function C : X × R N → R assigning a score to each pair (x, y), where y can be either the ground truth label of x when known, or the prediction f (x) otherwise. This function constitutes the basis of OoD detection, under the assumption that images belonging to D id should have a higher confidence score than images outside D id .\nA complete evaluation of an OoD detection method would require the application of the confidence function C on samples representative of the ID and OoD distributions. However, it is not possible to obtain a dataset representative of all possible OoD inputs. Instead, cross-dataset OoD evaluation consists in drawing a test dataset X test ∼ D id (with X test ̸ = X train ), choosing a different dataset D ood ̸ ∼ D id , then measuring the separability of C(X test ) and C(D ood ), where C(X) denotes the distribution of scores computed over dataset X using C. Three metrics are usually used: Area Under the ROC curve (AUROC); Area Under the Precision-Recall curve (AUPR), False Positive Rate when the true positive rate is 95% (FPR95).\nIn this work, in addition to cross-dataset evaluation, we propose to generate an OoD distribution D ood by applying a perturbation to all images from D id . Although image perturbation is a standard method for evaluating the robustness of classifiers [7], our intent differs: rather than trying to capture the point of failure of a classifier, we monitor how the various confidence scores evolve when applying a perturbation of increasing intensity to the ID dataset. 
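The paragraphs that follow specify the exact transformations, the expected confidence E(P_alpha, C) and the rank-correlation summary; as a rough sketch of the evaluation loop itself, assuming a confidence function under test, a perturbation routine and scipy for the Spearman correlation (all names below are illustrative):

import numpy as np
from scipy.stats import spearmanr

def consistency(confidence, model, test_images, perturb, magnitudes):
    # confidence(x, y): score of the OoD method under test for input x and prediction y
    # perturb(x, alpha): perturbed copy of x with magnitude alpha
    mean_conf = []
    for alpha in magnitudes:
        scores = []
        for x in test_images:
            x_p = perturb(x, alpha)
            scores.append(confidence(x_p, model(x_p)))
        mean_conf.append(np.mean(scores))
    r_s, _ = spearmanr(magnitudes, mean_conf)  # rank correlation between alpha and the average confidence
    return r_s, mean_conf

A strongly negative rank correlation then indicates that the average confidence decreases consistently as the perturbation magnitude grows.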
In practice, we use four transformations: Gaussian noise, Gaussian blur, brightness changes and rotations. More generally, a perturbation P α is a function that applies a transformation of magnitude α to an image x ∈ X (e.g., a rotation with angle α). When applying P α over D id , we define the expected confidence as\nE(P α , C) = E x∼D id C P α (x), f (P α (x))(1)\nwhich is evaluated over the test set X test . Although it would again be possible to measure the separability of ID and OoD confidence distributions, perturbations of small magnitude would result in almost identical distributions. Instead, we evaluate the correlation between the magnitude of the perturbation and the average confidence value of the perturbed dataset as the Spearman Rank Correlation Coefficient (SRCC) r s between α and E(P α , C), using multiple magnitude values (α 0 , . . . , α n ). r s = 1 (resp. -1) indicates that the average confidence measure increases (resp. decreases) monotonically with the value of α, i.e., that the measure is correlated with the magnitude of the perturbation. The key advantage of the SRCC resides in the ability to compare the general behaviour of various OoD detection methods that usually have different calibrations (i.e., different range of values). Assuming that the discrepancy between D id and P α (D id ) is correlated to the magnitude of the perturbation α (ground truth), this benchmark measures the consistency of the OoD methods under test." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Contextualised OoD Detection using Pattern Identification", "publication_ref": [ "b25", "b25", "b25", "b18" ], "table_ref": [], "text": "In this section, we present CODE, our proposal for building a contextualised OoD detector. CODE is an extension of the PARTICUL algorithm described in [26], which is intended to mine recurring patterns in the latent representation of a set of images processed through a CNN. Patterns are learnt from the last convolutional layer of the classifier f over the training set X train , in a plug-in fashion that does not require the classifier to be retrained. Let v be the restriction of classifier f up to its last convolutional layer, i.e., f = l•v, where l corresponds to the last pooling layer followed by one or several fully connected layers. ∀x ∈ X , v(x) ∈ R H×W ×D is a convolutional map of D-dimensional vectors. The purpose of the PARTICUL algorithm is to learn p distinct 1×1×D convolutional kernels K = [k 1 , . . . , k p ] (or detectors), such that ∀x ∈ X train : 1) each kernel k i strongly correlates with exactly one vector in v(x) (Locality constraint); 2) each vector in v(x) strongly correlates with at most one kernel k i (Unicity constraint).\nLearning class-conditional pattern detectors. While PARTICUL is an unsupervised approach restricted to fine-grained recognition datasets, CODE uses the training labels from X train to learn p detectors per class. More precisely, let\nK (c) = [k (c) 1 , . . . k (c)\np ] be the set of kernel detectors for class c. Similar to [26], we define the normalised activation map between kernel k (c) i and image x as:\nP (c) i (x) = σ v(x) * k (c) i ∈ R H×W (2\n)\nwhere σ is the softmax normalisation function. 
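Purely as an illustration of Eq. (2), and not of the authors' implementation, the class-specific detectors can be viewed as 1 × 1 convolutions applied to the feature map, followed by a softmax over spatial locations (PyTorch is assumed):

import torch.nn.functional as F

def normalised_activation_maps(feature_map, kernels):
    # feature_map: (1, D, H, W) tensor, the output v(x) of the last convolutional layer
    # kernels: (p, D, 1, 1) tensor, the class-specific 1x1 detectors k_i
    corr = F.conv2d(feature_map, kernels)               # (1, p, H, W): correlation v(x) * k_i
    b, p, h, w = corr.shape
    probs = F.softmax(corr.view(b, p, h * w), dim=-1)   # softmax over the H x W spatial locations
    return probs.view(b, p, h, w)                       # P_i(x) as in Eq. (2)

The cumulative activation map and the training objectives introduced next operate directly on these normalised maps.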
We also define the cumulative activation map, which sums the normalised scores for each vector in v(x), i.e.,

S^{(c)}(x) = \sum_{i=1}^{p} P_i^{(c)}(x) \in \mathbb{R}^{H \times W} \quad (3)

Then, we define the Locality and Unicity objective functions as follows:

\mathcal{L}_l = -\sum_{(x,y) \in X_{train}} \sum_{c=1}^{N} \sum_{i=1}^{p} \mathbb{1}_{[c=y]} \times \max\big(P_i^{(c)}(x) * u\big) \quad (4)

\mathcal{L}_u = \sum_{(x,y) \in X_{train}} \sum_{c=1}^{N} \mathbb{1}_{[c=y]} \times \max\big(0, \max(S^{(c)}(x)) - t\big) \quad (5)

where \mathbb{1} is the indicator function, and u is a 3 × 3 uniform kernel that serves as a relaxation of the Locality constraint. Due to the softmax normalisation of the activation map P_i^{(c)}(x), \mathcal{L}_l is minimised when, for all images x of class c, each kernel k_i^{(c)} strongly correlates with one and only one 3 × 3 region of the convolutional map v(x). Meanwhile, \mathcal{L}_u is minimised when, for all images x of class c, the sum of normalised correlation scores between a given vector in v(x) and all kernels k_i^{(c)} does not exceed a threshold t = 1, ensuring that no vector in v(x) correlates too strongly with multiple kernels. The final training objective is \mathcal{L} = \mathcal{L}_l + \lambda_u \mathcal{L}_u. Importantly, we do not explicitly specify a target pattern for each detector, but our training objective ensures that we obtain detectors for p different patterns for each class. Moreover, since patterns may be similar across different classes (e.g., the wheels on a car or a bus), we do not treat images from other classes as negative samples during training.

Confidence measure. After training, we build our confidence measure using the function H_i^{(c)}(x) = \max_{v^* \in v(x)} (v^* * k_i^{(c)}), which returns the maximum correlation score between kernel k_i^{(c)} and v(x). Assuming that each detector correlates more strongly with images from D_{id} than with images outside of D_{id}, we first estimate over X_{train} the mean value \mu_i^{(c)} and standard deviation \sigma_i^{(c)} of the distribution of scores H_i^{(c)} for (x, c) ∼ P_{id}. Then, we define

C^{(c)}(x) = \frac{1}{p} \sum_{i=1}^{p} C_i^{(c)}(x), \quad \text{with} \quad C_i^{(c)}(x) = \mathrm{sig}\big(\big(H_i^{(c)}(x) - \mu_i^{(c)}\big) / \sigma_i^{(c)}\big) \quad (6)

as the class confidence score for class c. Though it could be confirmed using a KS-test on empirical data, the logistic distribution hypothesis used for H_i^{(c)} - rather than the normal distribution used in PARTICUL - is primarily motivated by computational effectiveness and by the normalisation effect of the sigmoid sig, which converts a raw correlation score into a value between 0 and 1. During inference, for x ∈ X, the confidence measure C(x) is obtained by weighting each class confidence score by the probability that x belongs to this class:

C(x) = \sum_{c=1}^{N} C^{(c)}(x) \times P(Y = c \mid X = x) \quad (7)

where the categorical distribution P(Y | X = x) is obtained from the vector of normalised logits n = \sigma(f(x)), as shown in Fig. 1. Note that it would also be possible to use only the confidence score of the most probable class; the impact of this choice is evaluated in Sec. 5.

Extracting examples. One of the key advantages of CODE over existing OoD detection methods resides in the ability to provide a visual justification of the confidence measure. For each detection kernel k_i^{(c)} for class c, we first identify the sample (x, c) ∈ X_{train} that most faithfully represents the distribution of correlation scores H_i^{(c)} across the training set (in practice, we select the sample whose correlation score is closest to \mu_i^{(c)}). Then, as in [26], we locate the pattern associated with this detector inside image x using the SmoothGrads [19] algorithm. This operation anchors each detector for each class to a part of an image in the training set. Moreover, the ability to visualise patterns can also serve as a sanity check to verify that our method has indeed learned unique and relevant patterns w.r.t. the class object.

For each new image, as shown in Fig. 2, we first identify the predicted class c = arg max f(x) returned by the classifier. Then, we use the individual confidence scores C_i^{(c)}(x) for each detector of class c to infer the presence or absence of each pattern. When the confidence score of a given detector is above a given threshold (e.g., C_i^{(c)}(x) > 0.3), we highlight the corresponding pattern inside image x (again using SmoothGrads) and display the most correlated sample from the training set as a reference.
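To make the inference path of Eqs. (6)-(7) concrete, here is a minimal sketch assuming PyTorch tensors; h_scores, mu and sigma denote the per-class, per-detector maximum correlations and the calibration statistics estimated on X_train as described above, and the function name is illustrative.

import torch

def code_confidence(logits, h_scores, mu, sigma):
    # logits: (N,) classifier output f(x)
    # h_scores, mu, sigma: (N, p) tensors; h_scores[c, i] is the maximum correlation H_i of detector i of class c,
    # mu and sigma are the calibration statistics estimated on the training set
    per_detector = torch.sigmoid((h_scores - mu) / sigma)   # C_i, Eq. (6)
    per_class = per_detector.mean(dim=1)                    # C^(c)(x), average over the p detectors
    class_probs = torch.softmax(logits, dim=0)              # P(Y = c | X = x)
    return (per_class * class_probs).sum()                  # C(x), Eq. (7)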
In summary, we justify the OoD-ness of the new image by pointing out the presence or absence of class-specific recurring patterns that were found in the training set. Note that although our confidence measure is computed using all class confidence scores (weighted by the categorical distribution, see above), we believe that an explanation built solely on the most probable class can provide enough justification for the decision, while being sufficiently concise to be understandable by the user." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b26", "b8", "b26", "b8", "b26", "b26", "b1", "b10", "b10", "b11", "b4", "b22" ], "table_ref": [ "tab_1" ], "text": "In this section, we start by describing the experimental setup designed to answer the following research questions: 1) How does CODE fare against other detection methods on a cross-dataset OoD evaluation benchmark? 2) What is the influence of weighting all class-condition confidence scores (Eq. 7) rather than using only the confidence score of the most probable class? 3) How does the number p of detectors per class influences CODE detection capabilities? 4) How do OoD detection methods behave when applying perturbations on the ID dataset?\nSetup We performed our evaluation using the OpenOoD framework [27], which already implements most recent OoD detection methods. For each ID dataset, we used the provided pre-trained classifier for feature extraction and trained 4 or 6 pattern detectors per class, using the labels of the training set and the objective function described in Sec. 4. After cross-validation on a CIFAR10 v. CIFAR100 detection benchmark, we set λ u = 1, putting equal emphasis on the locality and unicity constraints. Although CODE trains a high number of detectors, the learning process remains computationally efficient since the classifier is not modified and only the detectors of the labelled class are updated during the back-propagation phase. Additionally, for large datasets such as ImageNet, detectors from different classes can be trained in parallel on chunks of the dataset corresponding to their respective class. We trained our detectors with RMSprop (learning rate 5 × 10 -4 , weight decay 10 -5 ), for 30 epochs (ImageNet) or 200 epochs (all other ID datasets). As a comparison, we also implemented a classbased FNRD [9], extracting neuron activation values at different layers of the classifier.\nTable 1: Comparison of AUROC scores between CODE and state-ofthe-art methods on a cross-dataset benchmark. Results with * are extracted from [27] -keeping only OoD-agnostic methods. We also add results of our implementation of a class-based FNRD [9]. Experiments on ImageNet using 6 CODE detectors have not yet been conducted due to limited ressources (denoted ). For readability, AUPR and FPR95 are omitted but available upon request. Cross-dataset OoD evaluation The cross-dataset evaluation implemented in Ope-nOoD includes a OoD detection benchmark and an Open Set Recognition (OSR) benchmark. For the OoD detection benchmark, we use the ID/Near-OoD/Far-OoD dataset split proposed in [27].\nFor the OSR benchmark, as in [27], M-6 indicates a 6/4 split of MNIST [2] (dataset split between 6 closed set classes used for training and 4 open set classes), C-6 indicates a 6/4 split of CIFAR10 [11], C-50 indicates a 50/50 split of CIFAR100 [11] and TIN-20 indicates a 20/180 split of TinyImageNet [12]. 
The AUROC score is averaged over 5 random splits between closed and open sets.\nThe results, summarised in Table 1, show that CODE displays OoD detection capabilities on par with most state-of-the-art methods (top-10 on OSR benchmark, top-8 on Near-OoD detection, top-9 on Far-OoD detection). Moreover, as discussed in Sec. 4, using the categorical distribution of the output of the classifier to weight class confidence scores systematically yields better results than using only the confidence score of the most probable class (up to 7% on the Far-OoD benchmark for ImageNet). Interestingly, increasing the number of detectors per class from 4 to 6 does not necessarily improve our results. Indeed, the Unicity constraint (Eq. 5) becomes harder to satisfy with a higher number of detectors and is ultimately detrimental to the Locality constraint (Eq. 4). This experiment also shows that the choice of Near-OoD/Far-OoD datasets in OpenOoD is not necessarily reflected by the average AUROC scores. Indeed, for CIFAR100, most methods exhibit a higher AUROC for Near-OoD datasets than for Far-OoD datasets. This observation highlights the challenges of selecting and sorting OoD datasets according to their relative \"distance\" to the ID dataset, without any explicit formal definition of what this distance should be. In this regard, our proposed benchmark using perturbations of the ID dataset aims at providing a quantifiable distance between ID and OoD datasets. Consistency against perturbations We also evaluated all methods on our perturbation benchmark (see Sec. 3), measuring the Spearman Rank correlation coefficient (SRCC) between the magnitude of perturbation (see Table 2) and the average confidence measured on the perturbed dataset. The results, shown in Table 3, reveal that, on average, CODE seem to correlate more strongly to the magnitude of the perturbation than all other methods. Moreover, some OoD methods sometimes display unexpected behaviours, depending on the choice of dataset and perturbation, as shown in Fig. 3. In particular, MSP tends to increase with the noise ratio, hence the success of adversarial attacks [5,23]. Additionally, by construction, any perturbation reducing the amplitude of neuron activation values (blur, brightness) has the opposite effect of increasing the FNRD. Gram also increases with the noise ratio and is highly sensitive to rotations, although we do not have a satisfactory explanation for this particular behaviour. We also notice that -contrary to our expectations -the average confidence does not monotonously decrease when rotating images from 0 to 180°: all methods show periodic local maximums of the average confidence that may indicate a form of invariance of the network w.r.t. rotations of specific magnitude (45°for CIFAR10, 90°for CIFAR100/ImageNet, 180°for MNIST). This effect seems amplified for CIFAR100 (see Fig. 3). Finally, we notice that the top-3 methods for Near-OoD detection (KNN, ViM and ReAct) also strongly correlate with the magnitude of the perturbation, which opens the door to a more in-depth analysis of the relationship between the two benchmarks.\nTable 3: Comparison of OoD methods on our perturbation benchmark.\nFor each perturbation, ↑ (resp. ↓) indicates that the average confidence on the perturbed dataset should increase (resp. decrease) with α, i.e., that the sign of the SRCC should be positive (resp. negative). 
Results in red indicate either a weak correlation (absolute value lower than 0.3) or unexpected sign of the correlation coefficient, e.g., the average Gram confidence score increases with the noise ratio on CIFAR100 (r s = 1.0) when it should be decreasing. Results in bold indicate a strong expected correlation (absolute value greater than 0.9). The last column represents the average correlation score, taking into account the expected sign of the correlation (results with * are partial average values). indicates a timeout during the experiments." }, { "figure_ref": [], "heading": "Conclusion & Future Work", "publication_ref": [], "table_ref": [], "text": "In this paper, we have demonstrated how the detection of recurring patterns can be exploited to develop CODE, an OoD-agnostic method that also enables a form of visualisation of the detected patterns. We believe that this unique feature can help the developer verify visually the quality of the OoD detection method and therefore can increase the safety of image classifiers. More generally, in the future we wish to study more thoroughly how part visualisation can be leveraged to fix or improve the OoD detection method when necessary. For instance, we noticed some redundant parts during our experiments and believe that such redundancy could be identified automatically, and pruned during the training process to produce a more precise representation of each class. Additionally, providing a form of justification of the OoD-ness of a sample could also increase the acceptability of the method from the end-user point of view, a statement that we wish to confirm by conducting a user study in the future. Our experiments show that CODE offers consistent results on par with state-of-the-art methods in the context of two different OoD detection benchmarks, including our new OoD benchmark based on perturbations of the reference dataset. This new benchmark highlights intriguing behaviours by several state-of-the-art methods (w.r.t. specific types of perturbation) that should be analysed in details. Moreover, since these perturbations are equivalent to a controlled covariate shift, it would be interesting to evaluate covariate shift detection methods in the same setting. Finally, note that CODE could be applied to other part detection algorithms, provided that a confidence measure could be associated with the detected parts." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements Experiments presented in this paper were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organisations (see https://www.grid5000.fr). This work has been partially supported by MIAI@Grenoble Alpes, (ANR-19-P3IA-0003) and TAILOR, a project funded by EU Horizon 2020 research and innovation programme under GA No 952215." } ]
In this work, we propose CODE, an extension of existing work from the field of explainable AI that identifies class-specific recurring patterns to build a robust Out-of-Distribution (OoD) detection method for visual classifiers. CODE does not require any retraining of the classifier and is OoD-agnostic, i.e., it is tuned using the In-Distribution (ID) training dataset only. Crucially, pattern identification allows us to supply images from the ID dataset as reference points that give additional context to the confidence scores. In addition, we introduce a new benchmark based on perturbations of the ID dataset that provides a known and quantifiable measure of the discrepancy between the ID and OoD datasets, serving as a reference value for the comparison of OoD detection methods.
Contextualised Out-of-Distribution Detection using Pattern Identification
[ { "figure_caption": "i for (x, c) ∼ P id . Then, we define", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: CODE inference overview. When processing a new sample x, the confidence measure sums up the average contribution of the detectors from each class weighted by the probability of x belonging to that class.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Explanations generated by CODE for ID and OoD samples. For each image, the classification as ID/OoD rely on the presence/absence of class-specific visual cues extracted from the training set.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "(a) Out-of-distribution image. (b) Inside-Of-Distribution image. Extracting examples One of the key advantages of CODE over existing OoD detection methods resides in the ability to provide a visual justification of the confidence measure. For each detection kernel k (c) i for class c, we first identify the sample (x, c) ∈ X train that most faithfully represents the distribution of correlation scores H (c) i across the training (in practice, we select the sample correlation score is closest to µ (c)", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Summary of the perturbations, with definition of α and its range.", "figure_data": "Perturbation PDescriptionRange for αBlurGaussian blur with kernel 3 × 3 α ∈ [0.0, 10]and standard deviation σ = αNoiseGaussian noise with ratio αα ∈ [0, 1.0]BrightnessBlend black image withα ∈ [0.1, 1.0]ratio 1 -αRotation forth (R+) Rotation with degree αα ∈ [0, 180]Rotation back (R-)Rotation with degree αα ∈ [180, 360]", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Romain Xu-Darme; Julien Girard-Satabin; Darryl Hond; Gabriele Incorvaia; Zakaria Chihani
[ { "authors": "J Deng; W Dong; R Socher; L J Li; K Li; L Fei-Fei", "journal": "CVPR", "ref_id": "b0", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "L Deng", "journal": "IEEE Signal Processing Magazine", "ref_id": "b1", "title": "The MNIST database of handwritten digit images for machine learning research", "year": "2012" }, { "authors": "O M Eidous; M Al-Rawash", "journal": "", "ref_id": "b2", "title": "Approximations for standard normal distribution function and its invertible", "year": "2022" }, { "authors": "J Han; X Yao; G Cheng; X Feng; D Xu", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b3", "title": "P-CNN: Part-based convolutional neural networks for fine-grained visual categorization", "year": "2022" }, { "authors": "M Hein; M Andriushchenko; J Bitterwolf", "journal": "CVPR", "ref_id": "b4", "title": "Why RELU networks yield highconfidence predictions far away from the training data and how to mitigate the problem", "year": "2019" }, { "authors": "D Hendrycks; S Basart; M Mazeika; M Mostajabi; J Steinhardt; D X Song", "journal": "ICML", "ref_id": "b5", "title": "Scaling out-of-distribution detection for real-world settings", "year": "2022" }, { "authors": "D Hendrycks; T G Dietterich", "journal": "", "ref_id": "b6", "title": "Benchmarking neural network robustness to common corruptions and perturbations", "year": "2018" }, { "authors": "D Hendrycks; K Gimpel", "journal": "ICLR", "ref_id": "b7", "title": "A baseline for detecting misclassified and out-ofdistribution examples in neural networks", "year": "2017" }, { "authors": "D Hond; H Asgari; D Jeffery; M Newman", "journal": "International Journal of Artificial Intelligence and Machine Learning", "ref_id": "b8", "title": "An integrated process for verifying deep learning classifiers using dataset dissimilarity measures", "year": "2021-07" }, { "authors": "R Huang; A Geng; Y Li", "journal": "NeurIPS", "ref_id": "b9", "title": "On the importance of gradients for detecting distributional shifts in the wild", "year": "2021" }, { "authors": "A Krizhevsky", "journal": "", "ref_id": "b10", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "A Krizhevsky; I Sutskever; G E Hinton", "journal": "", "ref_id": "b11", "title": "Imagenet classification with deep convolutional neural networks", "year": "2012" }, { "authors": "K Lee; K Lee; H Lee; J Shin", "journal": "NeurIPS", "ref_id": "b12", "title": "A simple unified framework for detecting outof-distribution samples and adversarial attacks", "year": "2018" }, { "authors": "H Li; X Zhang; Q Tian; H Xiong", "journal": "", "ref_id": "b13", "title": "Attribute Mix: Semantic Data Augmentation for Fine Grained Recognition", "year": "2020" }, { "authors": "S Liang; Y Li; R Srikant", "journal": "ICLR", "ref_id": "b14", "title": "Enhancing the reliability of out-of-distribution image detection in neural networks", "year": "2018" }, { "authors": "W Liu; X Wang; J Owens; Y Li", "journal": "NeurIPS", "ref_id": "b15", "title": "Energy-based out-of-distribution detection", "year": "2020" }, { "authors": "J Mukhoti; T Y Lin; B C Chen; A Shah; P H S Torr; P K Dokania; S N Lim", "journal": "", "ref_id": "b16", "title": "Raising the bar on the evaluation of out-of-distribution detection", "year": "2022" }, { "authors": "C S Sastry; S Oore", "journal": "ICML", "ref_id": "b17", "title": "Detecting out-of-distribution examples with gram matrices", "year": "2020" }, { 
"authors": "D Smilkov; N Thorat; B Kim; F B Viégas; M Wattenberg", "journal": "", "ref_id": "b18", "title": "Smoothgrad: removing noise by adding noise", "year": "2017" }, { "authors": "Y Sun; C Guo; Y Li", "journal": "NeurIPS", "ref_id": "b19", "title": "React: Out-of-distribution detection with rectified activations", "year": "2021" }, { "authors": "Y Sun; Y Li", "journal": "", "ref_id": "b20", "title": "Dice: Leveraging sparsification for out-of-distribution detection", "year": "" }, { "authors": "Y Sun; Y Ming; X Zhu; Y Li", "journal": "", "ref_id": "b21", "title": "Out-of-distribution detection with deep nearest neighbors", "year": "" }, { "authors": "C Szegedy; W Zaremba; I Sutskever; J Bruna; D Erhan; I Goodfellow; R Fergus", "journal": "ICLR", "ref_id": "b22", "title": "Intriguing properties of neural networks", "year": "2014" }, { "authors": "H Wang; Z Li; L Feng; W Zhang", "journal": "", "ref_id": "b23", "title": "Vim: Out-of-distribution with virtual-logit matching", "year": "" }, { "authors": "M Wattenberg; F Viégas; I Johnson", "journal": "", "ref_id": "b24", "title": "How to use t-SNE effectively", "year": "2016" }, { "authors": "R Xu-Darme; G Quénot; Z Chihani; M C Rousset", "journal": "", "ref_id": "b25", "title": "PARTICUL: Part Identification with Confidence measure using Unsupervised Learning", "year": "2022-06" }, { "authors": "J Yang; P Wang; D Zou; Z Zhou; K Ding; W Peng; H Wang; G Chen; B Li; Y Sun; X Du; K Zhou; W Zhang; D Hendrycks; Y Li; Z Liu", "journal": "", "ref_id": "b26", "title": "OpenOOD: Benchmarking generalized out-of-distribution detection", "year": "" }, { "authors": "X Zhao; Y Yang; F Zhou; X Tan; Y Yuan; Y Bao; Y Wu", "journal": "", "ref_id": "b27", "title": "Recognizing Part Attributes With Insufficient Data", "year": "" }, { "authors": "H Zheng; J Fu; T Mei; J Luo", "journal": "", "ref_id": "b28", "title": "Learning Multi-attention Convolutional Neural Network for Fine-Grained Image Recognition", "year": "" } ]
[ { "formula_coordinates": [ 4, 218.35, 607.71, 262.24, 10.99 ], "formula_id": "formula_0", "formula_text": "E(P α , C) = E x∼D id C P α (x), f (P α (x))(1)" }, { "formula_coordinates": [ 5, 134.77, 536.76, 84.29, 13.95 ], "formula_id": "formula_1", "formula_text": "K (c) = [k (c) 1 , . . . k (c)" }, { "formula_coordinates": [ 5, 234.92, 576.96, 241.43, 14.07 ], "formula_id": "formula_2", "formula_text": "P (c) i (x) = σ v(x) * k (c) i ∈ R H×W (2" }, { "formula_coordinates": [ 5, 476.35, 579.43, 4.24, 9.96 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 5, 241.19, 637.47, 239.4, 30.79 ], "formula_id": "formula_4", "formula_text": "S (c) (x) = p i=1 P (c) i (x) ∈ R H×W(3)" }, { "formula_coordinates": [ 6, 194.37, 134.53, 286.22, 31.41 ], "formula_id": "formula_5", "formula_text": "L l = - (x,y)∈Xtrain N c=1 p i=1 1 [c=y] × max P (c) i (x) * u(4)" }, { "formula_coordinates": [ 6, 186.07, 174.59, 290.28, 30.94 ], "formula_id": "formula_6", "formula_text": "L c = (x,y)∈Xtrain N c=1 1 [c=y] × max 0, max S (c) (x) -t (5" }, { "formula_coordinates": [ 6, 476.35, 184.34, 4.24, 9.96 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 6, 174.94, 390.5, 113.19, 18.18 ], "formula_id": "formula_8", "formula_text": "H (c) i (x) = max v * ∈v(x) (v * * k (c)" }, { "formula_coordinates": [ 6, 163.26, 469.23, 317.33, 24.82 ], "formula_id": "formula_9", "formula_text": "C (c) (x) = 1 p p i=1 C (c) i (x), with C (c) i (x) = sig H (c) i (x) -µ (c) i /σ (c) i(6)" }, { "formula_coordinates": [ 6, 221.23, 589.13, 259.36, 30.2 ], "formula_id": "formula_10", "formula_text": "C(x) = N c=1 C (c) (x) × P (Y = c | X = x)(7)" } ]
2023-10-25
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b3", "b4", "b5" ], "table_ref": [], "text": "F OR effective e-governance, automatic license plate recog- nition systems should be accurate and reliable. Unfortunately, these deep neural network-based systems are often vulnerable and under attack. Recent studies show that deep neural networks are vulnerable to adversarial attacks, where subtle, imperceptible, or malicious modifications to input data can lead to misleading and erroneous predictions [1]. Realworld AI applications can be at a loss due to these adversarial attacks [2]- [4]. A successful adversarial attack could have disastrous consequences, potentially allowing criminals to evade identification, compromising national security, and undermining the effectiveness of law enforcement efforts. In the case of automatic License Plate Character Recognition (LPCR) systems, tiny manipulations can be maliciously employed in each character of the license plates, allowing vehicles to evade police apprehension or engage in criminal activities undetected. Unfortunately, despite the high volume of theoretical research on adversarial attack algorithms and defense mechanisms against these attacks [5], [6], studies that focus on practical real-life attack probabilities are missing.\nUsing a small license character dataset obtained from vehicles in Nepal, in this research, we elucidate how existing deeplearning algorithms for license plate character recognition can be easily improved against adversarial attacks. Specifically, we first discuss a novel adversarial attack algorithm to generate effective adversarial license plate character images. These adversarial images are similar to the images that may be observed in real-life settings. We then demonstrate how existing algorithms and methods are highly vulnerable to such adversarial samples. Next, we demonstrate that a deep learning model can be made considerably robust to such adversarial attacks, by simply training with the adversarial samples. Finally, we present several findings of our interpretability experiments to understand when these methods are inaccurate.\nThe paper is structured as follows: Section 2 investigates related works in this domain, Section 3 outlines the methodology, Section 4 presents and analyzes the results of the experiment, and Section 5 concludes the paper with a discussion of future work." }, { "figure_ref": [], "heading": "II. RELATED WORKS", "publication_ref": [ "b6", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25", "b26", "b27", "b28", "b29" ], "table_ref": [], "text": "Every year, neural networks have brought outstanding advancements in the License Plate Recognition (LPR) system [7]- [10]. The automatic license plate recognition system proposed by Chang et al. [11] consists of two modules: a license plate locating module and a license number identification module. In Nepal, the government has introduced embossed license plates replacing the traditional handwritten ones to implement a robust LPR system. Interestingly, recent studies have shown that adversarial examples expose vulnerabilities in the architectures of deep neural network [12] as security of machine learning [13] has been of concern for a long time. A survey on the threat of adversarial attacks on Deep Learning in Computer Vision shows the transferability of such attacks on real-life scenarios [14].\nGoodfellow et al. 
[15] proposed FGSM (Fast Gradient Sign Method), which is to calculate one-step gradient update along the direction of the sign of gradient at each pixel. This method provided a meager computational cost. Since then, researchers have proposed different techniques to generate adversarial examples. Moosavi-Deezfooli et al. [16] proposed Deepfool to compute a minimal norm of adversarial perturbation for a given image in an iterative manner which provided less perturbation compared to FGSM. Carlini and Wagner [17] proposed the C&W method, which defeated the defensive distillation for targeted networks. Shu and Vargas et al. [18] proposed a single-pixel attack method using the genetic algorithm to deceive a neural network by altering a single pixel. These adversarial attacks can be used to create adversarial samples for an LPR system that is not perceived by humans and can only be detected by the system [19]. However, these attacks cannot be simulated easily regarding an actual license plate. Quian et al. [20] proposed a spot evasion attack method for the LPR system, which uses a genetic algorithm to find the optimal perturbation position. This simulated a more realworld scenario that could be used to tamper with the license plate. However, this approach only looked into spots that disregard other shapes of different sizes, which can be easier to reproduce in real life.\nStudies by Kurakin et al. [21] showcase the ease with which adversarial examples can mislead DNN classifiers, even when modifications are undetectable to human observers. Furthermore, the introduction of physically feasible adversarial stickers highlights the significance of investigating stealthy attack methods that exploit real-world objects, reinforcing the importance of resilience in the face of physical adversarial threats [22]. In the realm of real-world object detection, the universal background adversarial attack method proposed by Yidan Xu et al. [23] modifies local pixel information around target objects using a single background image, underscoring the relevance of studying robustness against physical attacks. Moreover, leveraging the natural phenomenon of shadows, [24] presents a stealthy and effective physical-world adversarial attack that generates inconspicuous adversarial examples, further highlighting the necessity for resilience against emerging attack methods utilizing natural phenomena. Recent studies have also highlighted the vulnerability of deep neural networks (DNNs) to physical-world attacks using light beams. The Adversarial Laser Beam (AdvLB) attack [25] and the Adversarial Laser Spot (AdvLS) attack [26] demonstrate the efficacy of manipulating laser beam parameters to deceive DNNs, emphasizing the need for resilient models against lightbased adversarial threats.\nOn the basis of threat, Papernot et al. [27] presented effective results on the practical black-box attack against machine learning that is capable of evading defense strategies previously found to generate adversarial examples harder. Previous work of Szegedy et al. [28] showed that adversarial samples have the property of transferability, which shows that such adversarial samples can be misclassified across models.\nIn the domain of defending deep neural networks from physically realizable attacks, a study [29] demonstrates the limited effectiveness of existing robust model learning methods against prominent physical attacks on image classification. 
In response, they propose a new abstract adversarial model, rectangular occlusion attacks, and leverage adversarial training with this model to achieve high robustness against physically realizable attacks. Similarly, another study [30] focuses on defending object detection systems against adversarial patch attacks by introducing a defense method based on \"Adversarial Patch-Feature Energy\" (APE). The APE-based defense exhibits impressive robustness against adversarial patches in both digital and physical settings, providing a promising approach for countering physical adversarial attacks in critical systems like autonomous vehicles." }, { "figure_ref": [], "heading": "III. METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "The methodology employed in this research study outlines the step-by-step process followed to investigate and analyze the robustness of a license plate character recognition model against proposed adversarial attack. The methodology encompasses various stages, including image acquisition, preprocessing, character segmentation and labeling, model development, adversarial attack implementation, external validation of adversarial samples, and adversarial training. Each step was carefully executed to ensure the reliability and validity of the experimental results. The following sections provide an overview of the steps in our overall experiment of this study.\nStep 1: Image Acquisition. High-quality license plate images were collected from the VFTC government office, which is responsible for license plate distribution. The images were acquired using a SONY A6000 Digital camera. Both front and rear license plate images were included.\nStep 2: Image Preprocessing. The acquired images were filtered for noise, the resolution was scaled down, and license plate localization was performed to remove unnecessary backgrounds.\nStep 3: Character Segmentation and Labeling. The characters were segmented from the preprocessed images using a fixed mask, and they were labeled accordingly.\nStep 4: Character Recognition Model Development. A highly accurate multiclass CNN-based classification model was trained using the clean and labeled dataset.\nStep 5: Adversarial Attack Implementation. The proposed exhaustive geometric mask-based adversarial algorithm was employed to generate adversarial samples, targeting the benign images within the multiclass CNN-based classification model.\nStep 6: External Validation of Adversarial Samples. The generalizability of the generated adversarial samples in deceiving character recognition models beyond their original target was confirmed through evaluation with an external character recognition model.\nStep 7: Adversarial Training. The multiclass CNN-based character classification model, was retrained, incorporating the randomly generated adversarial samples using the proposed attack method into the training dataset." }, { "figure_ref": [], "heading": "A. Dataset", "publication_ref": [], "table_ref": [], "text": "For this study, three different types of datasets have been used. The first type is the I-1057 dataset, which consists of the pre-processed images of the collected data. The second type is the I-Hard-1057 dataset, which consists of adversarial images generated using the Exhaustive Geometric patch-based adversarial attack on the License Plate Character Recognition (LPCR) model. 
The third type is the I-Adversarial-Train dataset, which consists of equally distributed I-1057 images and randomly generated adversarial samples using geometric patches (horizontal, vertical, and circular), which is used for adversarial training of LPCR model to generate Adversarial Attack-aware License Plate Character Recognition (AA-LPCR) model.\nThe images of Nepal's Embossed Licence Plates that had been introduced to replace existing hand-written license plates were required, as shown in Fig 1 . These data were acquired from the Vehicle Fitness Testing Centre (VFTC), which consisted of License Plates of four-wheeler vehicles, both front and rear, from State 3 for Private, Public, and Governmental vehicles. A total of 160 samples were collected using SONY A6000 Digital camera. The collected samples were filtered manually for noisy images. The collected images were of high resolution. To simulate a natural License Plate Recognition (LPR) system, the images were scaled down, followed by License Plate Localization using OpenCV.\nFor the segmentation of characters, a fixed mask was used by generalizing the position of the characters. The characters from License Plate were extracted using the mask, which was later labeled, giving us the I-1057 dataset. This I-1057 dataset consisted of 1057 images of characters segmented from Nepal's Embossed License Plate with the dimensions of 105x160x3 and consisted of characters 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, and F. The I-1057 dataset was subjected to augmentation (Rotation and Blur) to imitate real-life scenarios during the model's training." }, { "figure_ref": [], "heading": "B. CNN-based License Plate Character Recognition (LPCR) model", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "With the recognition of a CNN-based classifier, LPR systems are built with a state-of-art approach. For this study, a CNN-based classification model is used for better recognition of characters on the license plate. The CNN architecture was designed from scratch, achieving optimum time and space complexity, and the model was trained using the I-1057 dataset with Gaussian blur and rotation as augmentation using Pytorch. This augmented I-1057 dataset was subjected to a train-validation split of 80-20, which was fed into the model while keeping the image dimension of 105x160x3 (no grey scaling was done).\nOn training the LPCR model, the optimum parameters were found to be: Learning Rate of 0.001, Momentum of 0.9, Batch size of 64, and Epochs of 80. The architecture of the CNN model can be seen in Table I. " }, { "figure_ref": [ "fig_1" ], "heading": "C. Adversarial Attack", "publication_ref": [ "b11" ], "table_ref": [], "text": "Adversarial attacks use an adversarial image to fool the neural networks by providing wrong information regarding the original image with the intention that the neural network misclassifies the original image [12].\nPopular algorithms such as FGSM, C&W, and DeepFool are commonly used to generate adversarial examples to attack or test a model. Depending upon the domain and purpose of the attack, the usability of such an attack varies accordingly. Despite producing optimum adversarial examples, such gradientbased attacks are difficult to reproduce in Smart Embossed License Plates. As shown in figure 2, a calculated pixelated noise can be seen on the generated adversarial examples. 
This pixelated noise is difficult to be replicated in a real-life scenario as the embossed license plates are metal plates with lettering and numbering raised, unlike the handwritten number plates, and it is fixed onto the vehicle using a 'one-way' screw, preventing vehicle owners from changing the license plates on their own. So, the embossed license plates are presented to be tamper-proof and highly secure. That is why an exhaustive geometric mask-based adversarial attack is proposed in this paper. This method of attack was found to be highly probable in a real-life scenario and of high threat to License Plate Recognition Models.\nTo compare our method to other standard attack methods, Nepal's Embossed License Plate Characters were subjected to gradient-based adversarial attacks such as FGSM, C&W, and DeepFool to investigate why such gradient-based adversarial images are not practical. After this, the proposed exhaustive geometric mask-based adversarial attack, simulating the physical domain attack, was implemented to generate adversarial images (I-Hard-1057 dataset) of the characters segmented from Nepal's Embossed License Plate (I-1057 dataset). The proposed attack used different geometrical masks for the generation of adversarial images. For this study, three types of patches were used: Horizontal line, Vertical line, and Circular patches. The algorithm to produce adversarial images using Exhaustive Geometric Mask-based Adversarial Attack using Horizontal patches can be seen in Algorithm 1.\nThe proposed method of attack is a white-box attack, as the architecture and structure of the targeted model are accessible, and the model loss is used as a reference to determine the suitable adversarial images. Taking Algorithm 1 as a reference, vertical and circular patches were implemented as well. In order to generate adversarial images by attacking our LPCR model using exhaustive geometric mask-based adversarial attack, we perturbed benign images from the I-1057 dataset and passed the perturbed image to our model where several parameters are observed: prediction label, confidence of prediction, and loss. The perturbation is performed in an incremental approach where we start from the least noise and slowly increase the amount of perturbation until the LPCR model misclassifies the image with high confidence or the noise reaches its threshold. This process was iterated for every benign image for horizontal line, vertical line, and circular geometric patches.\nFor a step-by-step algorithm to generate adversarial samples using exhaustive geometric patch-based adversarial attack with minimal mean square error and maximum loss, aiming to achieve misclassification, we begin by initializing several variables. The input variable, denoted as X, represents the original image, while the output variable, denoted as X ′ , represents the perturbed image. We also introduce a boolean variable, \"success,\" which indicates whether the image was misclassified by the model. Additionally, we maintain a counter, \"hit,\" to keep track of the number of misclassifications.\nTo control the perturbation level, we introduce the variable \"thickness,\" representing the amount of perturbation applied to a horizontal patch within the image. The variables x and y represent the width and height of the image, respectively. Lastly, \"rgb\" represents the darkest pixel value within the image.\nTo simulate a realistic attack, we set a threshold for the amount of perturbation. 
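A compact Python rendering of this exhaustive search for horizontal patches is sketched below; the prose walkthrough of the same procedure continues in the following paragraphs. The helper name perturb_image, the tensor layout, and the use of the darkest pixel value as the fill color mirror the description in this section, but the exact implementation details are assumptions.

```python
import torch
import torch.nn.functional as F

def perturb_image(x, position, thickness, fill_value):
    """Overwrite a horizontal band of the image with the darkest pixel value."""
    x_adv = x.clone()
    x_adv[:, :, position:position + thickness, :] = fill_value
    return x_adv

def horizontal_mask_attack(model, x, label):
    """Exhaustive geometric mask attack with horizontal patches (sketch of Algorithm 1)."""
    _, _, height, _ = x.shape
    fill_value = x.min()                          # darkest pixel value in the image
    for thickness in range(1, height // 2 + 1):   # threshold: half the image height
        best_loss, best_position, hits = 0.0, None, 0
        for i in range(0, height - thickness + 1):
            x_adv = perturb_image(x, i, thickness, fill_value)
            with torch.no_grad():
                logits = model(x_adv)
            loss = F.cross_entropy(logits, label).item()
            if logits.argmax(dim=1).item() != label.item() and loss > best_loss:
                best_loss, best_position, hits = loss, i, hits + 1
        if hits > 0:   # at least one misclassification: keep the highest-loss variant
            return perturb_image(x, best_position, thickness, fill_value), True
    return x, False    # no horizontal band up to the threshold fooled the model
```

Vertical and circular patches follow the same search, with the horizontal band replaced by a vertical strip or a filled circle.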
For horizontal patches, we use a threshold equal to half the height of the image, i.e., y/2. Starting with an initial thickness of 1, we iterate through thickness values up to half the height of the image, i.e., y/2.\nFor each thickness value, we initialize a variable, \"i,\" representing the position of the perturbation. It ranges from 0 to y-thickness+1, which corresponds to the last possible position in the image. We generate a perturbed image, X ′ , by applying the function perturbimage, which takes the original image X, the perturbation position i, and the thickness of the perturbation as parameters. The perturbimage function applies a geometric mask to the benign images based on the provided parameters, generating perturbed images.\nNext, we evaluate whether the perturbed image satisfies the necessary conditions to proceed further. If the original image (X) is not equal to the perturbed image (X ′ ) generated by the perturbimage function, and the prediction loss is greater than the previously recorded loss, this can be considered a better adversarial sample with the maximum loss for the given thickness value. The perturbation parameters are temporarily stored for reference. Specifically, we save the current position as y ′ , the loss as loss ′ , and increment the \"hit\" counter by 1.\nOnce we have applied the perturbation to all possible positions in the image for the given thickness, we analyze the value of \"hit\" to determine if we have successfully perturbed the image and selected the one with the maximum loss. If the value of \"hit\" is non-zero, indicating at least one misclassification occurred, we use the temporarily stored perturbation parameters to create the best-perturbed image generated by the algorithm with the minimum thickness and maximum loss. In this case, we set the \"success\" variable to true. However, if \"hit\" is zero, meaning we were unable to generate a perturbed image that could successfully misclassify the LPCR modal, we increase the thickness and continue the process until we find a perturbed image with the minimum perturbation and maximum loss within that thickness range. Finally, if \"success\" is true, we return the perturbed image as the result; otherwise, we return the original image." }, { "figure_ref": [], "heading": "D. External Validation", "publication_ref": [], "table_ref": [], "text": "It is essential to validate the produced result externally. For this paper, EasyOCR is used to test the generated adversar-ial images. EasyOCR is a well-known character recognition open-source tool that uses various recognition models such as ResNet, LSTM, and CTC, making it ideal for external validation. During the external validation, all the generated adversarial samples from our model were given as input, and the predicted label and its confidence were noted. The noted result was used to verify if the transferability property holds true (see Supplementary Table S2)." }, { "figure_ref": [], "heading": "E. Adversarial Training", "publication_ref": [], "table_ref": [], "text": "Adversarial training is presented as a defense mechanism for the proposed exhaustive geometric mask adversarial attack, where adversarial samples generated using random adversarial parameters are also included in the training dataset (I-Adversarial-1057 dataset). Since the model was subjected to an adversarial attack, evidence to support the existing vulnerability of Nepal's Embossed License Plate Recognition model was found. 
The study found that embossed license plates face severe threats that individuals could exploit for personal benefit while breaching the law. This finding motivated the implementation of adversarial training as a defense mechanism. The CNN model was subjected to adversarial training with a dataset of 50% original and 50% perturbed images. The perturbed dataset consisted of adversarial samples generated using mutually exclusive horizontal line, vertical line, and circular patches with random positions and dimensions during the model's training. This method proved highly effective, as misclassification was significantly reduced." }, { "figure_ref": [], "heading": "IV. RESULTS AND ANALYSIS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Accuracy of the License Plate Character Recognition (LPCR) model", "publication_ref": [], "table_ref": [], "text": "Our License Plate Character Recognition (LPCR) model was trained on a dataset consisting of 1057 character images (I-1057 dataset), which were subjected to rotation and blur augmentation while preserving the RGB color dimension. The dataset was divided into an 80-20 train-validation split to assess the model's performance. During training, the model was fine-tuned with specific parameter settings: a Learning Rate of 0.001, a Momentum of 0.9, a Batch size of 64, and Epochs set to 80. This training configuration resulted in the model achieving a validation accuracy of 99.5%." }, { "figure_ref": [ "fig_1" ], "heading": "B. Adversarial Example Generation", "publication_ref": [ "b14", "b16", "b15" ], "table_ref": [ "tab_0", "tab_0" ], "text": "Following the standard practice of generating adversarial samples by perturbing an image until a model misclassifies it, we tried several methods to generate adversarial samples for our I-1057 dataset. While commonly used gradient-based adversarial attacks such as FGSM [15], C&W [17], and DeepFool [16] are readily available, we observed that the samples generated by these methods contain specific color taints throughout the image that appear unrealistic. For a common real-world attack, intentional or inadvertent, changing an entire license plate in such a specific way may not be practical. Some examples of these samples are shown in the Adversarial Sample column in Table II.
To simulate more realistic attacks, we performed a white-box attack on the LPCR model using our exhaustive geometric mask algorithm. In order to generate adversarial images by attacking our LPCR model, we perturbed I-1057 images and passed each perturbed image to our model, then observed several parameters: prediction label, confidence of prediction, and loss. The perturbation is performed in an incremental approach where we start from the least noise and slowly increase the amount of perturbation until the LPCR model misclassifies the image with high confidence and minimum noise or the noise reaches its threshold. This process was iterated for every benign image for different geometric patches. Samples of generated adversarial images can be seen in Table II.
Next, adversarial samples were generated by attacking the LPCR model using our exhaustive geometric mask algorithm (see Methods).
Using the original 1057 character images (I-1057 dataset), several random adversarial images were generated to test the performance of LPCR. A significant drop in the performance of the LPCR model was observed on the generated random adversarial images. Exhaustively perturbing I-1057 images to attack the LPCR model, we identified all the \"hard\" variants of I-1057 images and grouped them as the I-Hard-1057 dataset. Our LPCR model correctly classified only 24% of images in this I-Hard-1057 dataset. Most of the correctly classified images were from class 'A', i.e., 'A' was the least vulnerable character in comparison to the others (see Supplementary Table S1). Overall, this drastic drop in the accuracy of the LPCR model validates that a highly accurate model is not necessarily robust.
TABLE II: Performance of our LPCR and AA-LPCR model demonstrated via 12 example cases. Sample images selected were classified by LPCR (second column) and then perturbed to obtain an adversarial image (third and fourth columns). These adversarial images were reclassified using LPCR (second-last column) and AA-LPCR (last column).
We also studied the amount of change (in pixels) needed in the images for them to be misclassified. To analyze this, we calculate the Mean Squared Error (MSE) between the original image and its perturbed variant. MSE is used to quantify the amount of perturbation added by measuring the average distance between the original image and the perturbed image. The analysis presented in Table III reveals notable distinctions in the MSE among adversarial samples generated using vertical, horizontal, and circular patches. Adversarial samples created with vertical patches exhibit the highest MSE on average, signifying their conspicuous nature. In contrast, adversarial samples generated with circular patches show the lowest MSE, implying a more unobtrusive distortion. Furthermore, circular patches yield the highest confidence level, reinforcing their efficacy. When evaluated holistically, the data indicate that circular patches provide the most effective and efficient performance among the three patch shapes, given their ability to generate adversarial samples with minimal deviation from the original samples and with a high degree of confidence. Therefore, circular patches emerge as the optimal choice for generating inconspicuous adversarial samples.
TABLE III: Average metrics showing the effectiveness of generated adversarial samples against LPCR using average confidence of misclassification and average mean squared error (MSE). The average MSE represents the average squared difference between the original and adversarial images (amount of perturbation). A higher average confidence indicates a greater ability of the adversarial image to fool the LPCR model, and vice versa. The ideal condition is high confidence of misclassification and low MSE from an attacker's perspective, and the opposite from a defender's perspective.
Analyzing the adversarial image samples, we found some common patterns. These patterns were based either on the class of the image (i.e., what character it is) or on the location and direction of the patch introduced. Analyzing these patterns, we were able to find vulnerable areas in the character images. First, we studied the average probabilities that any given character image may be misclassified as another character.
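This analysis amounts to building a per-class misclassification table over the adversarial variants, of the kind visualized in Figure 2; a minimal sketch is shown below, reusing the horizontal_mask_attack helper from the earlier sketch. The dataset iterable and class count are assumptions for illustration.

```python
import torch

def misclassification_matrix(model, labelled_images, num_classes=13):
    """Estimate how often each true class is predicted as every class
    after applying the horizontal-patch attack (illustrative sketch)."""
    counts = torch.zeros(num_classes, num_classes)
    for x, label in labelled_images:          # x: (1, 3, H, W), label: (1,)
        x_adv, _ = horizontal_mask_attack(model, x, label)
        with torch.no_grad():
            predicted = model(x_adv).argmax(dim=1).item()
        counts[label.item(), predicted] += 1
    # Row-normalise so entry (i, j) approximates the probability that
    # class i is predicted as class j, as in the Figure 2 heatmaps.
    return counts / counts.sum(dim=1, keepdim=True).clamp(min=1)
```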
Our goal in this task was to find answers to questions of the form, \"How likely can a '0' be misclassified as another character when we apply a horizontal patch?\" Our findings, summarized in Figure 2 show several notable misclassifications. Regardless of the type of patch applied (horizontal, vertical, or circular), we find that most characters are prone to be misclassified as either '8', 'A', or 'B'. Similarly, the most common misclassifications include '0' misclassified as 'B', '4' as 'A', and '7' as '2'. In general, across all three patch types, we found that the images of the character 'A' were the least misclassified, suggesting that it is the most difficult character to attack. Next, for the case of introducing horizontal patches, we visualized the misclassified images and the vulnerable regions within the images (see Figure 3). Visualization of the character images, along with the highlighted vulnerable bands on them, shows that several character images, such as '0' and '5', have smaller regions of vulnerability regions. On the other hand, images such as '8' and '1' have much larger attack-prone regions." }, { "figure_ref": [], "heading": "C. Comparison of LPCR with EasyOCR", "publication_ref": [ "b30" ], "table_ref": [], "text": "Since our LPCR model was susceptible to our exhaustive geometric mask-based adversarial attack (i.e., the attack was successful), we sought to check if the same I-Hard-1057 could be successful at attacking other state-of-the-art character recognition models. To validate the transferability of these results, to classify the images in the I-Hard-1057 dataset we used EasyOCR, a Convolutional Recurrent Neural Network (CRNN) character recognition model [31]. EasyOCR is a widely-used, open-source, lightweight character recognition tool recognized for its robust text recognition capabilities. Its extensive versatility enables it to excel in diverse OCR applications, including text extraction from natural scenes and documents with support for over 80 languages. On our I-Hard-1057 dataset, EasyOCR's accuracy was 29.2% (see Supplementary Table S2), which is similar to the accuracy of LPCR. This confirms that the adversarial samples generated by our exhaustive geometric mask-based adversarial attack algorithm can also fool other deep learning models." }, { "figure_ref": [], "heading": "D. Adversarial Attack-aware License Plate Character Recognition (AA-LPCR) model", "publication_ref": [], "table_ref": [ "tab_6", "tab_7" ], "text": "Retraining our LPCR model on the new I-Adversarial-Train dataset (see Methods), we developed an adversarial attackaware license plate character recognition (AA-LPCR) model and found it to be considerably resilient. This AA-LPCR model was able to correctly classify 99.74% of the I-Hard-1057 dataset, for which the accuracy of the LPCR model was only 24.06%. The remaining rare I-Hard-1057 images that were misclassified are illustrated in Table IV. We then proceeded to attack the AA-LPCR model using the exhaustive geometric mask-based adversarial attack and observed that the success rate of the attack significantly dropped as low as 21.95% from 75.9% for vertical patches. As illustrated in Table V, a similar decline can be seen for both horizontal and circular patches as well. This increase in the accuracy of AA-LPCR on the I-Hard-1057 dataset and the decrease in the success rate of exhaustive geometric mask-based adversarial attack on AA-LPCR validates that Adversarial Training is efficient in increasing the robustness of the model." 
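To make the retraining procedure behind AA-LPCR concrete, a minimal sketch of the 50/50 clean-plus-perturbed training step described in the Methods is given below. The patch-drawing helper, hyperparameters, and tensor layout are illustrative assumptions rather than the exact training code.

```python
import random
import torch
import torch.nn.functional as F

def random_geometric_patch(x):
    """Apply one randomly chosen geometric mask (horizontal, vertical, or
    circular) with random position and size, filled with the darkest pixel value."""
    x_adv = x.clone()
    fill = x.min()
    _, _, h, w = x.shape
    kind = random.choice(["horizontal", "vertical", "circular"])
    if kind == "horizontal":
        t = random.randint(1, h // 2); i = random.randint(0, h - t)
        x_adv[:, :, i:i + t, :] = fill
    elif kind == "vertical":
        t = random.randint(1, w // 2); j = random.randint(0, w - t)
        x_adv[:, :, :, j:j + t] = fill
    else:  # circular patch
        r = random.randint(2, min(h, w) // 4)
        cy, cx = random.randint(r, h - r), random.randint(r, w - r)
        yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
        x_adv[:, :, mask] = fill
    return x_adv

def adversarial_training_step(model, optimizer, x, labels):
    """One training step on a 50/50 mix of clean and randomly perturbed images."""
    perturbed = torch.cat([random_geometric_patch(img.unsqueeze(0)) for img in x])
    inputs = torch.cat([x, perturbed])
    targets = torch.cat([labels, labels])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(inputs), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Mixing clean and randomly masked images in equal proportion keeps clean accuracy high while exposing the model to the same family of horizontal, vertical, and circular masks used by the attack.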
}, { "figure_ref": [], "heading": "V. CONCLUSION AND FUTURE WORK", "publication_ref": [], "table_ref": [], "text": "Using Nepal's embossed license plate images as a dataset, this work classified license plate characters using standard deep learning models. Findings elucidate that license plate recognition systems designed using off-the-shelf deep learning methods can be easily tricked, intentionally or inadvertently. Overall, we find that existing deep learning-based character recognition methods that are unaware of adversarial attacks can be ineffective for license plate recognition. As a solution, the license plate character recognition method developed herein was refined by retraining using adversarial samples to increase classification accuracy. Developing this adversarial attack-aware license plate character recognition (AA-LPCR) model, we demonstrate that license plate recognition systems can be refined to be remarkably more accurate and effectively be 'attack-aware'.\nSeveral additional experiments could improve the prediction accuracy of our AA-LPCR method. In addition to perturbing images by adding vertical, horizontal, and circular patches, several additional geometric shapes, such as squares and rectangles, can be explored to study how they affect prediction accuracy. Also, these shapes can be dulled or blunted and combined to generate more diverse adversarial samples and more effectively simulate real-world physical attacks. Furthermore, a complete attack-aware license plate recognition method can be developed using our attack-aware character recognition models. Finally, interpretability experiments can lead to an in-depth understanding of why certain characters and their regions are prone to be misclassified as others. Fig. 3: Attack-prone regions of license plate character images identified by predicting the class of the images after adding horizontal line patches. For instance, images of a character '0' (row '0') are highly likely to be misclassified as a 'B' (column 'B') if adversarial horizontal line patches appear in the red highlighted region. Perturbed images were passed through our license plate character recognition (LPCR) model to predict their labels and were analyzed to obtain these highlighted regions. " }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "We would like to thank the Vehicle Fitness Test Center (VFTC), Kathmandu, Nepal for allowing us to take pictures of license plates from their inventory." } ]
Background. Reading dirty license plates accurately in moving vehicles is challenging for automatic license plate recognition systems. Moreover, license plates are often intentionally tampered with malicious intent to avoid police apprehension. Such individuals and groups usually know how to fool existing recognition systems by making minor, unnoticeable changes to the plates. Designing and developing deep learning methods resilient to such real-world 'attack' practices remains an active research problem. As a solution, this work develops a resilient method to recognize license plate characters. Methods. As a first step, we extracted 1057 character images from 160 Nepalese vehicles and trained several standard deep convolutional neural networks to obtain 99.5% character classification accuracy. On adversarial images generated to simulate malicious tampering, however, our model's accuracy dropped to 25%. Next, we enriched our dataset by generating and adding geometrically masked images, retrained our models, and investigated the models' predictions. Results. The proposed approach of training with generated adversarial images helped our adversarial attack-aware license plate character recognition (AA-LPCR) model achieve an accuracy of 99.7%. This near-perfect accuracy demonstrates that the proposed idea of random geometric masking is highly effective for improving the accuracy of license plate recognition models. Furthermore, by performing interpretability studies to understand why our models work, we identify and highlight attack-prone regions in the input character images. In sum, although Nepal's embossed license plate detection systems are vulnerable to malicious attacks, our findings suggest that these systems can be upgraded to close to 100% resilience.
Adversarial sample generation and training using geometric masks for accurate and resilient license plate character recognition
[ { "figure_caption": "Fig. 1 :1Fig. 1: Example license plate images from captures from the front (first column) and rear (second column) of vehicles. (a) and (b) are from private vehicles, (c) and (d) are from governmental vehicles, and (e) and (f) are from public vehicles.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig.2:The heatmap shows the percentage of images that were classified correctly (cells in the diagonal) and incorrectly (cells not in the diagonal) when adversarial patches are added to the images. The patches applied are horizontal line (a), vertical lines (b), and circular (c). For instance, in (a), the cell in the first row and second-last column implies that the label '0' was heavily misclassified as 'B'. Cells in the left-to-right diagonal line represent correct classifications. From the heatmaps, it can be observed that the exhaustive geometric mask-based adversarial attack was able to misclassify most of the characters efficiently apart from the character 'A' regardless of whether the patches are horizontal (a), vertical (b), or circular (c).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "CNN architecture. The number inside the bracket represents the number of neurons on the connected layers. On the convolution layer, the numbers inside the bracket represent the height, width, and number of filters, respectively.", "figure_data": "Layer TypeShapeConvolution + BatchNorm + ReLU[3, 3, 16]Convolution + BatchNorm + ReLU[3, 3, 16]Max pooling[2]Convolution + BatchNorm + ReLU[3, 3, 32]Convolution + BatchNorm + ReLU[3, 3, 32]Max pooling[2]Convolution + BatchNorm + ReLU[3, 3, 64]Convolution + BatchNorm + ReLU[3, 3, 64]Max pooling[2]Convolution + BatchNorm + ReLU[3, 3, 128]Convolution + BatchNorm + ReLU[3, 3, 128]Convolution + BatchNorm + ReLU[3, 3, 128]Max pooling[2]Fully connected + ReLU[2304]Dropout[0.5]Fully connected + ReLU[500]Softmax[13]", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": ".", "figure_data": "Algorithm 1 Exhaustive geometric mask-based adversarialattack using horizontal patchesInput: XOutput: X ′1: success ← F alse2: hit ← 03: loss ′ ← 04: thickness ← 15: x ← width6: y ← height7: rgb[3] ← darkestpixel[3]8: while thickness ≤ y 2 do9:", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "This figure shows all the cases in which AA-LPCR fails to correctly classify the adversarial images. This demonstrates the limitation of our resilient AA-LPCR model in rare cases.", "figure_data": "Adversarial ImagePredicted Label (Confidence)Result of our new resilient model (Confidence)6 (57.0%)0 (99.9%)A (54.0%)0 (99.9%)A (57.5%)7 (84.8%)A (74.7%)7 (67.4%)6 (48.6%)0 (99.9%)A (76.0%)7 (96.8%)B (52.9%)6 (66.7%)B (49.7%)6 (52.3%)", "figure_id": "tab_6", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Geometric PatchesLPCRAA-LPCRHorizontal75.6%56.09%Vertical75.9%21.95%Circular76.1%63.41%", "figure_id": "tab_7", "figure_label": "V", "figure_type": "table" } ]
Bishal Shrestha; Griwan Khakurel; Kritika Simkhada; Badri Adhikari
[ { "authors": "Naveed Akhtar; Ajmal Mian; Navid Kardan; Mubarak Shah", "journal": "IEEE Access", "ref_id": "b0", "title": "Advances in adversarial attacks and defenses in computer vision: A survey", "year": "2021-11" }, { "authors": "Kevin Eykholt; Ivan Evtimov; Earlence Fernandes; Bo Li; Amir Rahmati; Chaowei Xiao; Atul Prakash; Tadayoshi Kohno; Dawn Song", "journal": "", "ref_id": "b1", "title": "Robust physical-world attacks on deep learning visual classification", "year": "2018" }, { "authors": "Zhichao Wang; Yu Jiang; Jiaxin Liu; Siyu Gong; Jian Yao; Feng Jiang", "journal": "Journal of Electrical and Computer Engineering", "ref_id": "b2", "title": "Research and implementation of fast-lprnet algorithm for license plate recognition", "year": "2021" }, { "authors": "Junbin Fang; You Jiang; Canjian Jiang; Zoe L Jiang; Siu-Ming Yiu; Chuanyi Liu", "journal": "", "ref_id": "b3", "title": "State-of-the-art optical-based physical adversarial attacks for deep learning computer vision systems", "year": "2023" }, { "authors": "Derek Samer Y Khamaiseh; Abdullah Bagagem; Mathew Al-Alaj; Hakam W Mancino; Alomari", "journal": "IEEE Access", "ref_id": "b4", "title": "Adversarial deep learning: A survey on adversarial attacks and defense mechanisms on image classification", "year": "2022" }, { "authors": "Chengyu Wang; Jia Wang; Qiuzhen Lin", "journal": "Springer", "ref_id": "b5", "title": "Adversarial attacks and defenses in deep learning: A survey", "year": "2021" }, { "authors": "Christos-Nikolaos E Anagnostopoulos; Ioannis E Anagnostopoulos; D Ioannis; Vassili Psoroulas; Eleftherios Loumos; Kayafas", "journal": "IEEE Transactions on intelligent transportation systems", "ref_id": "b6", "title": "License plate recognition from still images and video sequences: A survey", "year": "2008" }, { "authors": "Christos Nikolaos; E Anagnostopoulos; Vassilis Ioannis E Anagnostopoulos; Eleftherios Loumos; Kayafas", "journal": "IEEE Transactions on Intelligent transportation systems", "ref_id": "b7", "title": "A license plate-recognition algorithm for intelligent transportation system applications", "year": "2006" }, { "authors": "Erdinc Kocer; K Kursat; Cevik ", "journal": "Procedia Computer Science", "ref_id": "b8", "title": "Artificial neural networks based vehicle license plate recognition", "year": "2011" }, { "authors": "Hui Li; Peng Wang; Chunhua Shen", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b9", "title": "Toward end-to-end car license plate detection and recognition with deep neural networks", "year": "2018" }, { "authors": "Shyang-Lih Chang; Li-Shien Chen; Yun-Chung Chung; Sei-Wan Chen", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b10", "title": "Automatic license plate recognition", "year": "2004" }, { "authors": "Christian Szegedy; Wojciech Zaremba; Ilya Sutskever; Joan Bruna; Dumitru Erhan; Ian Goodfellow; Rob Fergus", "journal": "", "ref_id": "b11", "title": "Intriguing properties of neural networks", "year": "2013" }, { "authors": "Marco Barreno; Blaine Nelson; Anthony D Joseph; Doug Tygar", "journal": "Machine Learning", "ref_id": "b12", "title": "The security of machine learning", "year": "2010" }, { "authors": "Naveed Akhtar; Ajmal Mian", "journal": "Ieee Access", "ref_id": "b13", "title": "Threat of adversarial attacks on deep learning in computer vision: A survey", "year": "2018" }, { "authors": "Ian J Goodfellow; Jonathon Shlens; Christian Szegedy", "journal": "", "ref_id": "b14", "title": "Explaining and 
harnessing adversarial examples", "year": "2014" }, { "authors": "Seyed-Mohsen Moosavi-Dezfooli; Alhussein Fawzi; Pascal Frossard", "journal": "", "ref_id": "b15", "title": "Deepfool: a simple and accurate method to fool deep neural networks", "year": "2016" }, { "authors": "Nicholas Carlini; David Wagner", "journal": "IEEE", "ref_id": "b16", "title": "Towards evaluating the robustness of neural networks", "year": "2017" }, { "authors": "Hai Shu; Ronghua Shi; Hongtu Zhu; Ziqi Chen", "journal": "", "ref_id": "b17", "title": "Adversarial image generation and training for deep neural networks", "year": "2020" }, { "authors": "Hyun Kwon; Jang-Woon Baek", "journal": "Journal of Sensors", "ref_id": "b18", "title": "Adv-plate attack: Adversarially perturbed plate for license plate recognition system", "year": "2021" }, { "authors": "Yaguan Qian; Danfeng Ma; Bin Wang; Jun Pan; Jiamin Wang; Zhaoquan Gu; Jianhai Chen; Wujie Zhou; Jingsheng Lei", "journal": "Computers & Security", "ref_id": "b19", "title": "Spot evasion attacks: Adversarial examples for license plate recognition systems with convolutional neural networks", "year": "2020" }, { "authors": "Alexey Kurakin; Ian J Goodfellow; Samy Bengio", "journal": "", "ref_id": "b20", "title": "Adversarial examples in the physical world", "year": "2016" }, { "authors": "Xingxing Wei; Yingjie Guo; Jie Yu", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b21", "title": "Adversarial sticker: A stealthy attack method in the physical world", "year": "2021" }, { "authors": "Yidan Xu; Juan Wang; Yajie Yuan Zhang Li; Zixuan Wang; Dianxin Xu; Wang", "journal": "", "ref_id": "b22", "title": "Universal physical adversarial attack via background image", "year": "2022" }, { "authors": "Yiqi Zhong; Xianming Liu; Deming Zhai; Junjun Jiang; Xiangyang Ji", "journal": "", "ref_id": "b23", "title": "Shadows can be dangerous: Stealthy and effective physical-world adversarial attack by natural phenomenon", "year": "2022" }, { "authors": "Ranjie Duan; Xiaofeng Mao; Alex K Qin; Yun Yang; Yuefeng Chen; Shaokai Ye; Yuan He", "journal": "", "ref_id": "b24", "title": "Adversarial laser beam: Effective physicalworld attack to dnns in a blink", "year": "2021" }, { "authors": "Chen-Hao Hu", "journal": "", "ref_id": "b25", "title": "Adversarial laser spot: Robust and covert physical adversarial attack to dnns", "year": "2022" }, { "authors": "Nicolas Papernot; Patrick Mcdaniel; Ian Goodfellow; Somesh Jha; Z Berkay Celik; Ananthram Swami", "journal": "", "ref_id": "b26", "title": "Practical black-box attacks against machine learning", "year": "2017" }, { "authors": "Jason Yosinski; Jeff Clune; Yoshua Bengio; Hod Lipson", "journal": "", "ref_id": "b27", "title": "How transferable are features in deep neural networks? Advances in neural information processing systems", "year": "2014" }, { "authors": "Tong Wu; Liang Tong; Yevgeniy Vorobeychik", "journal": "", "ref_id": "b28", "title": "Defending against physically realizable attacks on image classification", "year": "2019" }, { "authors": "Taeheon Kim; Youngjoon Yu; Yong Man Ro", "journal": "", "ref_id": "b29", "title": "Defending physical adversarial attack on object detection via adversarial patch-feature energy", "year": "2022" }, { "authors": "Baoguang Shi; Xiang Bai; Cong Yao", "journal": "", "ref_id": "b30", "title": "An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition", "year": "2015" } ]
[ { "formula_coordinates": [ 5, 313.75, 212.56, 249.29, 45.97 ], "formula_id": "formula_0", "formula_text": "X ′ ← perturbimage(X, i, thickness) ▷ Perturb X at position i 11: if X ′ ̸ = X then 12:" } ]
10.1038/s41523-023-00557-8
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b3", "b4" ], "table_ref": [], "text": "The field of artificial intelligence (AI) has undergone a remarkable evolution in recent years, with significant advancements, particularly noticeable in natural language processing (NLP) and the development of Large Language Models (LLMs). These models represent a paradigm shift in AI's capability to understand, generate, and interact using human language. At their foundation, LLMs are complex algorithms trained on vast, text-based documents and datasets [1] . Such extensive training allows them to recognize patterns adeptly, predict subsequent words in a sentence, and generate coherent, contextually relevant text for the specified inputs, often called prompts within the NLP community. This ability demonstrates the technical prowess of LLMs and signifies their potential to revolutionize how machines understand and process human language. One of the most prominent features of LLMs is their proficiency in processing and analyzing large volumes of text rapidly and accurately, a capability that far surpasses human potential in speed and efficiency [2] . This quality makes them indispensable in areas requiring the analysis of extensive data sets. They are also known as \"few-shot\" learners, meaning once trained on massive datasets, they can be retrained for new domains utilizing a small number of domain-specific examples [3] .\nLLMs have become increasingly prevalent in the medical domain, demonstrating their versatility and expanding influence. Their applications in healthcare are multifaceted, ranging from processing vast quantities of medical data and interpreting clinical notes to generating comprehensive, human-readable reports [4] . This broad spectrum of functionalities shows how LLMs are not just tools for data processing but are also instrumental in providing innovative solutions across various aspects of healthcare. LLMs are increasingly being utilized to tackle critical challenges in patient care. This includes providing customized educational content to patients, assisting healthcare professionals in making complex diagnostic decisions, and easing the administrative burdens often associated with healthcare provision [4,5] .\nWhile large language models have been applied across a spectrum of activities in healthcare, including medical question answering, examination, pure research-oriented tasks, and administrative duties in hospitals, this review will focus exclusively on their practical applications in healthcare, such as diagnostics and treatment purposes. We uncover their deployment in critical areas such as cancer care, dermatology, dental, and mental health. This exploration is crucial, as it showcases LLMs' capacity to innovate medical diagnostics and patient care, streamline treatment tasks, and address the challenges and opportunities in harnessing their full potential in complex medical areas. We conduct an in-depth analysis of the applications of LLMs across different medical fields, aiming to present a brief yet thorough summary. We focus on the advancements and challenges of integrating these sophisticated models into routine healthcare practices. We offer insights into the current state of progress and identify barriers to their widespread adoption in clinical settings. The paper is structured to cover each medical specialty and associated challenges, followed by examining various data types in the medical field. 
The conclusion summarizes the findings and implications." }, { "figure_ref": [], "heading": "Cancer Care (Oncology)", "publication_ref": [ "b5", "b6", "b7", "b8" ], "table_ref": [], "text": "Cancer is characterized by the uncontrolled growth of abnormal cells in the body. It is examined within oncology-studying cancer types and related factors. Adopting Large Language Models (LLMs) such as ChatGPT in oncology has become a focal point of recent research, especially in supporting decision-making processes for cancer treatment. These advanced models are being explored for their capability to enhance diagnostic accuracy, personalize therapy options, and streamline patient care in oncology. By analyzing vast amounts of data, LLMs can provide insights that potentially improve treatment outcomes and patient management strategies. In the subsequent discussion, we explore the studies dedicated to integrating LLMs within oncological care, encapsulating the innovative efforts to harness LLMs' capabilities in enhancing the diagnostic, treatment, and management processes associated with cancer care.\nIn a study conducted by Vera Sorin and Eyal Klang [6] , the capabilities of ChatGPT, a large language model (LLM), were explored as a decision-support tool for breast tumor boards. The research's primary objective was determining how ChatGPT's recommendations align with expert- Subsequently, the model's management recommendations were compared with the final decisions made by the tumor board. Moreover, two senior radiologists independently evaluated ChatGPT's responses, grading them on a scale from 1 (complete disagreement) to 5 (complete agreement) across three categories: summarization of the case, the recommendation provided, and the explanation for that recommendation. Most patients in the study, 80%, had invasive ductal carcinoma, with one case each of ductal carcinoma in-situ and a phyllodes tumor with atypia.\nChatGPT's recommendations aligned with the tumor board's decisions in seven out of the ten cases, marking a 70% concordance. Upon grading, the first reviewer gave mean scores of 3.7, 4.3, and 4.6 for summarization, recommendation, and explanation, respectively, while the second reviewer's scores were 4.3, 4.0, and 4.3 in the same categories. As an initial exploration, the study suggests that LLMs like ChatGPT could potentially be a valuable asset for breast tumor boards.\nHowever, as technology rapidly advances, medical professionals must know its advantages and potential limitations.\nIn a study by Stefan Lukac and Davut Dayan in January 2023, the capabilities of ChatGPT to assist in the decision-making process for therapy planning in primary breast cancer cases were investigated [7] . Though the ChatGPT was able to identify specific risk factors for hereditary breast cancer and could discern elderly patients requiring chemotherapy assessment for cost/benefit evaluation, it generally offered non-specific recommendations concerning various treatment modalities such as chemotherapy and radiation therapy. Notably, it made errors in patient-specific therapy suggestions, misidentifying patients with Her2 1+ and 2+ (FISH negative) as candidates for trastuzumab therapy and mislabeling endocrine therapy as \"hormonal treatment.\" The study concluded that while ChatGPT demonstrates potential utility in clinical medicine, its current version lacks the precision to offer specific therapy recommendations for primary breast cancer patients. 
It underscores the necessity for further refinement before it can be a reliable adjunct in multidisciplinary tumor board decisions.\nGeorges Gebrael assessed the utility of ChatGPT 4.0 to enhance triage efficiency and accuracy in emergency rooms for patients with metastatic prostate cancer [8] . Between May 2022 and April 2023, clinical data of 147 patients presenting with metastatic prostate cancer were examined, of which 56 were selected based on inclusion criteria. ChatGPT demonstrated a high sensitivity of 95.7% for determining patient admissions but had a low specificity of 18.2% for discharges. It agreed with physicians' primary diagnoses in 87.5% of cases. It outperformed physicians regarding accurate terminology usage (42.9% vs. 21.4%) and diagnosis comprehensiveness, having a median diagnosis count of 3 compared to physicians' 2. ChatGPT was more concise in its responses but provided more additional treatment recommendations than physicians. The data suggests that ChatGPT could serve as a valuable tool for assisting medical professionals in emergency room settings, potentially enhancing triage efficiency and the overall quality of patient care.\nA study led by Arya Rao et al. investigated the potential of ChatGPT-3.5 and GPT-4 (OpenAI) in aiding radiologic decision-making, specifically focusing on breast cancer screening and breast pain imaging services [9] . The researchers measured the models' responses against the ACR Appropriateness Criteria using two prompt formats: open-ended (OE) and select all that apply (SATA). For breast cancer screening, both versions scored an average of 1.830 (out of 2) in the OE format, but GPT-4 outperformed ChatGPT-3.5 in the SATA format, achieving 98.4% accuracy compared to 88.9%. Regarding breast pain, GPT-4 again showed superiority, registering an average OE score of 1.666 and 77.7% in SATA, while ChatGPT-3.5 scored 1.125 and 58.3%, respectively. The data suggests the growing viability of large language models like ChatGPT in enhancing radiologic decision-making processes, with potential benefits for clinical workflows and more efficient radiological services. However, further refinement and broader use cases are needed for full validation." }, { "figure_ref": [], "heading": "Hana et al. conducted a retrospective study in February 2023 to evaluate the appropriateness of", "publication_ref": [ "b9", "b10", "b11", "b12", "b13" ], "table_ref": [], "text": "ChatGPT's responses to common questions concerning breast cancer prevention and screening [10] .\nLeveraging methodologies from prior research that assessed ChatGPT's capacity to address cardiovascular disease-related inquiries, the team formulated 25 questions rooted in the BI-RADS Atlas and their clinical experiences within tertiary care breast imaging departments. Each question was posed to ChatGPT three times, and three fellowship-trained breast radiologists critically assessed the responses. The radiologists categorized each response as \"appropriate,\" \"inappropriate,\" or \"unreliable\" based on the content's clinical relevance and consistency. Their evaluations considered two hypothetical scenarios: content for a hospital website and direct chatbot-patient interactions. The majority's opinion dictated the final determination of appropriateness. Results revealed that ChatGPT provided suitable answers for 88% (22 out of 25) of the questions in both contexts. 
However, one question pertained to mammography scheduling in light of COVID-19 vaccination, which elicited an inappropriate response.\nAdditionally, there were inconsistencies in answers related to breast cancer prevention and screening location queries. While ChatGPT frequently referenced guidelines from the American Cancer Society in its responses, it omitted those from the American College of Radiology and the US Preventive Services Task Force. These findings aligned with earlier research by Sarraju et al. [11] , where 84% of ChatGPT's cardiovascular disease prevention responses were deemed appropriate. Despite showing considerable potential as an automated tool for patient education on breast cancer, ChatGPT exhibited certain limitations, emphasizing the essential role of physician oversight and the ongoing need for further refinement and research into large language models in healthcare education. Brian Schulte, in 2023, explored the ability of ChatGPT to identify suitable treatments for advanced solid cancers [12] . Through a structured approach, the study assessed ChatGPT's capacity to list appropriate systemic therapies for newly diagnosed advanced solid malignancies and then compared the treatments ChatGPT suggested with those recommended by the National Comprehensive Cancer Network (NCCN) guidelines. This comparison resulted in the valid therapy quotient (VTQ) measure. The research encompassed 51 diagnoses and found that ChatGPT could identify 91 unique medications related to advanced solid tumors. On average, the VTQ was 0.77, suggesting a reasonably high agreement between ChatGPT's suggestions and the NCCN guidelines. Furthermore, ChatGPT always mentioned at least one systemic therapy aligned with NCCN's suggestions. However, there was a minimal correlation between the frequency of each cancer type and the VTQ. In conclusion, while ChatGPT displays promise in aligning with established oncological guidelines, its current role in assisting medical professionals and patients in making treatment decisions still needs to be defined. As the model evolves, it is hoped that its accuracy in this area will be enhanced, but continued research is essential to fully understand and harness its potential.\nIn a study led by Julien Haemmerli et al., the capability of ChatGPT was explored in the context of CNS tumor decision-making, specifically for glioma management [13] . Using clinical, surgical, imaging, and immunopathological data from ten randomly chosen glioma patients discussed in a Tumor Board, ChatGPT's recommendations were compared with those of seven CNS tumor experts. While most patients had glioblastomas, findings revealed that ChatGPT's diagnostic accuracy was limited, with a notable discrepancy in glioma classifications. However, it demonstrated competence in recommending adjuvant treatments, aligning closely with expert opinions. Despite its limitations, ChatGPT shows potential as a supplementary tool in oncological decision-making, particularly in settings with constrained expert resources.\nIn Shan Chen et al.'s research on the effectiveness of ChatGPT in offering cancer treatment advice, the study scrutinized the model's alignment with the National Comprehensive Cancer Network (NCCN) guidelines for breast, prostate, and lung cancer treatments [14] . Through four diverse prompt templates, the study assessed if the mode of questioning influenced the model's responses. 
While ChatGPT's recommendations aligned with NCCN's guidelines in 98% of the prompts, 34.3% of these recommendations also presented information that needed to be more in sync with the NCCN guidelines. The study concluded that, despite its potential, ChatGPT's performance in consistently delivering reliable cancer treatment advice was unsatisfactory. Consequently, patients and medical professionals must exercise caution when relying on ChatGPT and similar tools for educational purposes." }, { "figure_ref": [], "heading": "Challenges associated with LLMs as a decision-support tool in Cancer Care:", "publication_ref": [ "b12", "b6", "b12", "b7", "b9" ], "table_ref": [], "text": "While integrating Large Language Models (LLMs) like ChatGPT into oncology shows promise, particularly in decision support for cancer treatment, it also presents several critical challenges, as discussed in the previous section. These challenges must be addressed to ensure LLMs' safe and effective use in high-stakes medical environments. Firstly, the issue of accuracy and precision in LLMs is a significant concern. For instance, in Julien Haemmerli's [13] study on glioma therapy, ChatGPT demonstrated limitations in accurately classifying glioma types. Similarly, the study by Stefan Lukac and Davut Dayan [7] revealed errors in patient-specific therapy suggestions, such as misidentifying patients for trastuzumab therapy. These inaccuracies highlight the risk of potential misdiagnoses or inappropriate treatment recommendations, which could have profound implications for patient care.\nAnother challenge is the capacity of LLMs to consider the comprehensive clinical picture, including patient functional status, which is often a nuanced judgment call made by experienced physicians. ChatGPT's moderate performance in this area, as seen in Haemmerli's study [13] ,\nindicates a gap between current LLM capabilities and the complex decision-making processes in medical practice. Furthermore, the integration of LLMs into existing medical workflows raises concerns. For example, Georges Gebrael's [8] study on triage in metastatic prostate cancer showed that while ChatGPT had high sensitivity, its low specificity for discharges could lead to operational inefficiencies. Integrating LLMs within healthcare systems also poses challenges in data privacy, interoperability, and the need for robust IT infrastructure.\nLastly, the role of LLMs in patient education and communication is not without limitations. Hana L Haver et al. [10] studies demonstrated inconsistencies in ChatGPT's responses to breast cancer prevention and screening questions. This inconsistency highlights the importance of human oversight in verifying the information provided by LLMs, ensuring it aligns with established medical guidelines and practices. In summary, while LLMs present exciting opportunities for enhancing cancer care, their current limitations in accuracy, comprehensive clinical assessment, integration into existing systems, and patient education necessitate a cautious and critical approach. These models should be viewed as supplementary tools that augment, rather than replace, the expertise of medical professionals. Continuous evaluation, refinement, and ethical consideration are essential to harness the full potential of LLMs in oncology." 
}, { "figure_ref": [], "heading": "Skin Care: Dermatology", "publication_ref": [ "b14", "b15", "b16", "b17" ], "table_ref": [], "text": "Our skin is a barrier against external threats such as viruses, bacteria, and other harmful organisms.\nDermatology is the branch of medicine dealing with skin diseases. There has been a surge in cases related to skin diseases in the past years, affecting people of all ages [15] . Common skin-related diseases include acne, alopecia, bacterial skin infections, decubitus ulcers, fungal skin diseases, pruritus, and psoriasis [16] . Traditional dermatology diagnosis is based on a visual inspection of skin features and subjective evaluation by a dermatologist [17] . The realm of dermatology diagnosis faces several significant challenges. Firstly, accurately interpreting skin disease imagery is complex due to the wide variety of skin conditions and their subtle visual differences. This task requires a high level of expertise, leading to the second challenge: a noticeable shortage of dermatologists, especially in remote or underserved areas. Lastly, creating patient-friendly diagnostic reports is another hurdle. These reports need to be detailed yet understandable to non-specialists, making their production time-consuming and labor-intensive for dermatologists.\nIn addressing the above challenges in dermatological diagnostics, Zhou et al. introduced SkinGPT-4, an innovative interactive dermatology diagnostic system underpinned by an advanced visual Large Language Model [18] . This study was mainly focused on tackling the prevalent issues in dermatology, such as the shortage of specialized medical professionals in remote areas, the intricacies involved in interpreting skin disease images accurately, and the demanding nature of creating patient-friendly diagnostic reports. SkinGPT-4, utilizing a refined version of MiniGPT-4, trained on an extensive dataset that included 52,929 images of skin diseases, both from public domains and proprietary sources, along with detailed clinical concepts and doctors' notes. This comprehensive training on skin-related disease images enabled SkinGPT-4 to articulate medical features in skin disease images using natural language and make precise diagnoses. The functionality of SkinGPT-4 allows users to upload images of their skin conditions, after which the system autonomously analyzes these images. It identifies the characteristics and categorizes the skin conditions, performs an in-depth analysis, and provides interactive treatment recommendations. A notable aspect of SkinGPT-4 is its local deployment feature, combined with a solid commitment to maintaining user privacy, making it a viable option for patients seeking accurate dermatological assessments. To ascertain the efficacy of SkinGPT-4, the study conducted a series of quantitative evaluations on 150 real-life dermatological cases. Certified dermatologists independently reviewed these cases to validate the diagnoses provided by SkinGPT-4. Among the 150 cases, a commendable 78.76% of the diagnoses rendered by SkinGPT-4 were validated as either accurate or relevant by the dermatologists, breaking down into 73.13% that firmly aligned and another 5.63% that agreed. The outcomes of this evaluation underscored the accuracy of SkinGPT-4 in diagnosing skin diseases. 
While SkinGPT-4 is not positioned as a replacement for professional medical consultation, its contribution to enhancing patient comprehension of medical conditions, improving communication between patients and doctors, expediting dermatologists' diagnostic processes, and potentially fostering human-centered care and healthcare equity in underdeveloped regions is significant." }, { "figure_ref": [], "heading": "Challenges associated with utilizing LLMs in Dermatology:", "publication_ref": [ "b17" ], "table_ref": [], "text": "The introduction of SkinGPT-4 by Zhou et al. marks a significant advancement in dermatological diagnostics, addressing challenges like the dermatologist shortage and the complexities of skin disease image interpretation and patient-friendly report generation [18] . Despite its innovative approach and the training on an extensive dataset to articulate medical features in skin images, there are inherent challenges. Some challenges associated with deploying SkinGPT-4 include ensuring consistent diagnostic accuracy across various skin conditions, safeguarding patient privacy while managing sensitive health data, and integrating the technology seamlessly into existing healthcare systems. Additionally, despite SkinGPT-4's high diagnostic accuracy, continuous human oversight in medical diagnosis and treatment planning remains critical to complement the AI's capabilities with professional medical judgment and ensure optimal patient care outcomes. Additionally, advancements might focus on developing models that can adapt to new, emerging skin conditions and leveraging telemedicine to extend dermatological care to remote areas, thus promoting healthcare equity." }, { "figure_ref": [], "heading": "Neurodegenerative Disorders: Dementia & Alzheimer's", "publication_ref": [ "b18", "b19", "b21", "b20", "b22", "b23", "b24", "b25" ], "table_ref": [], "text": "Neurodegenerative disorders involve the gradual deterioration of specific neuron groups, differing from the non-progressive neuron loss seen in metabolic or toxic conditions. These diseases are categorized by their primary symptoms (such as dementia, parkinsonism, or motor neuron disease), the location of neurodegeneration within the brain (including frontotemporal degenerations, extrapyramidal disorders, or spinocerebellar degenerations), or by the underlying molecular abnormalities [19] . Dementia is a broad category of brain diseases that cause a long-term and often gradual decrease in the ability to think and remember, affecting daily functioning. Alzheimer's disease (AD) is the most common cause of dementia, characterized by memory loss, language problems, and unpredictable behavior.\nLLMs such as Google Bard and ChatGPT have emerged as valuable tools for predicting neurodegenerative disorders. A study by Koga et al. evaluated these models' predictive accuracy using cases from Mayo Clinic conferences [20] . Using the Mayo Clinic brain clinicopathological conferences as their sample pool, the researchers extracted 25 cases of neurodegenerative disorders. These clinical summaries were then utilized for training and testing the models. The diagnoses offered by each model were compared against the official diagnosis provided by medical professionals. Findings from the study highlighted that ChatGPT-3.5 aligned with 32% of all the physician-made diagnoses, Google Bard with 40%, and ChatGPT-4 with 52%. 
When assessing the accuracy of these diagnostic predictions, ChatGPT-3.5 and Google Bard both achieved a commendable score of 76%, while ChatGPT-4 led the pack with an impressive accuracy rate of 84%. The evident proficiency exhibited by LLMs, specifically ChatGPT and Google Bard, highlights their considerable potential in revolutionizing diagnostic processes in neurodegenerative disorders.\nThis study conducted by Agbavor and Liagn (2022) explored the use of GPT-3-generated text embeddings to predict dementia, utilizing data from the ADReSSo Challenge (Alzheimer's Dementia Recognition through Spontaneous Speech only challenge [22] ), which focuses on identifying cognitive impairment through spontaneous speech [21] . The author proposed using the model to identify individuals with dementia against healthy individuals as controls. Using the 237 speech recordings derived from the ADReSSO (Alzheimer's Dementia Recognition through Spontaneous Speech only challenge), the author used a 70/30 split and obtained 71 data samples as the testing set and 166 as the training set. In the training set, 87 individuals had AD, and 79 were healthy controls. GPT-3 was innovatively used for embedding the transcribed speech texts.\nThen, the model extracts the acoustic features such as temporal analysis (periodicity of speech, pause rate, phonation rate, etc.) and speech production (vocal quality, articulation, prosody, etc.).\nThese features serve as the input for the classification model used in AD prediction. GPT-3 embeddings are then compared with BERT and traditional acoustic features. The findings reveal that text embeddings outperform traditional acoustic methods and compare well with fine-tuned models such as BERT. This suggests that GPT-3's text embeddings offer a promising approach for early dementia diagnosis.\nAnother study conducted by Mao and colleagues [23] outlines developing and applying a deep learning framework utilizing the BERT model for predicting the progression from Mild Cognitive Impairment (MCI) to Alzheimer's Disease (AD) using unstructured EHR notes. The study cataloged 3,657 MCI-diagnosed patients and their clinical notes from Northwestern Medicine Enterprise Data Warehouse (NMEDW) between 2000 and 2020, using only their initial MCI diagnosis notes for analysis. These notes underwent de-identification, cleaning, and segmentation before training an AD-specific BERT model (AD-BERT). AD-BERT transformed patient note sections into vector forms, which a fully connected network analyzed to predict MCI-to-AD progression. For validation, a similar methodology was applied to 2,563 MCI patients from Weill Cornell Medicine (WCM). AD-BERT outperformed seven baseline models, showing superior accuracy in both patient groups, evidenced by its AUC and F1 scores.\nIn the diagnosis of complex conditions like Alzheimer's disease, medical professionals use a variety of data such as images, patient demographics, genetic profiles, medication history, cognitive assessments, and speech data. Some of the recent studies have proposed multi-modal AD diagnosis or prediction methods leveraging the popular pre-trained large language model (LLM) to add text data sources, in addition to images and other data types [24,[25][26] ." }, { "figure_ref": [], "heading": "Challenges associated with LLMs in Neurodegenerative disorders", "publication_ref": [], "table_ref": [], "text": "Utilizing LLMs in diagnosing and managing neurodegenerative disorders like dementia and Alzheimer's disease presents several challenges. 
Firstly, the complexity and variability of these conditions require highly accurate and deep understanding, which LLMs may not always provide due to limitations in their training data. The ethical and privacy concerns about handling sensitive patient data pose significant hurdles. Furthermore, integrating these models into clinical workflows demands substantial validation to ensure they complement, rather than complicate, healthcare professionals' decision-making processes. Lastly, there's a need for continuous updates and improvements in these models to keep pace with the latest medical research and clinical practices" }, { "figure_ref": [], "heading": "Dentistry", "publication_ref": [ "b26", "b27", "b28" ], "table_ref": [], "text": "The World Health Organization reports that oral diseases impact approximately 3.5 billion individuals globally, with dental caries, periodontal diseases, and tooth loss being the most prevalent. These conditions, largely preventable and manageable with early diagnosis, have seen the application of AI methodologies in recent years, including the diagnosis of dental caries [27,28] and periodontitis [29] . Despite this, exploring Large Language Models (LLMs) in dentistry remains notably scarce, with limited studies demonstrating their practical application." }, { "figure_ref": [], "heading": "Huang et al. stand out by proposing LLM-based deployment strategies within dentistry, marking", "publication_ref": [ "b28" ], "table_ref": [], "text": "the emerging area of research with significant potential for advancement [29] . To showcase the effectiveness and potential of applying Large Language Models (LLMs) in dentistry, this work introduced a framework for an automated diagnostic system utilizing Multi-Modal LLMs. This innovative system incorporated three distinct input modules: visual, auditory, and textual data, enabling comprehensive analysis. Visual inputs, such as dental X-rays and CT scans, are evaluated for anomalies using vision-language models, facilitating precise diagnostics. Audio inputs serve dual purposes: detecting voice anomalies and understanding patient narratives, which are converted to text for further analysis by LLM. To illustrate the capabilities of the multi-modal LLM AI system in dental practice, Huang et al. proposed its application in diagnosing and planning treatment for dental caries. The process begins with inputting a tooth's X-ray into the system, where vision-language modeling is employed to detect any decay on the tooth. Once identified, the system utilizes LLM to propose a comprehensive treatment plan, articulated through seven detailed steps. These steps range from initial patient communication to scheduling follow-up appointments, highlighting a thorough approach to patient care. Despite its advanced diagnostics, the system's limitations, such as failing to detect potential bone loss, are acknowledged, suggesting areas for further research and development to enhance its effectiveness in dental diagnostics." }, { "figure_ref": [], "heading": "Challenges associated with dental care:", "publication_ref": [], "table_ref": [], "text": "The accuracy of LLMs like ChatGPT depends on the availability of high-quality, relevant dental data. A significant hurdle in designing and training LLMs for dental care is limited access to the dental records owned by private dental clinics and concerns over patient privacy, which restricts access to comprehensive and current datasets. 
LLMs' development and effectiveness in dentistry must navigate these challenges, ensuring access to extensive, up-to-date information while addressing privacy and ownership issues to avoid biases and maintain data integrity.\nThe potential of LLMs in dental healthcare seems promising and can revolutionize how dental professionals diagnose, treat, and manage patient care today. LLMs could significantly improve diagnostic precision by leveraging the vast amounts of data available in patient records and imaging, allowing for early detection and intervention in dental conditions. Furthermore, the ability of LLMs to generate personalized treatment plans and educational materials tailored to individual patient needs could enhance the effectiveness of patient care. This personalization and the model's ability to process and analyze data swiftly could lead to more efficient and patientcentered dental healthcare practices. As LLMs continue to evolve, their integration into dental healthcare is expected to deepen, offering innovative solutions to longstanding challenges and improving patient outcomes worldwide." }, { "figure_ref": [], "heading": "Mental Health: Psychiatry and Psychology", "publication_ref": [ "b27", "b28", "b29", "b30", "b31", "b32" ], "table_ref": [], "text": "Mental health disorders, which affect millions globally, significantly reduce the life quality of individuals and their families. In psychiatry, LLMs have the potential to refine diagnostic precision, optimize treatment outcomes, and enable more tailored patient care, moving beyond traditional, subjective diagnostic approaches prone to inaccuracies. By leveraging AI to analyze extensive patient data, it's possible to uncover patterns not easily detectable by humans, thereby improving diagnosis [28,29] .\nGalatzer-Levy and colleagues, in 2023, delved into exploring the potential role of large language models (LLM) in psychiatry [30] . Their primary investigation tool was Med-PALM 2, an LLM equipped with comprehensive medical knowledge. The model was trained and tested using a blend of clinical narratives and patient interview transcripts. The dataset encompassed expert evaluations using instruments like the 8-item Patient Health Questionnaire and the PTSD Checklist-Civilian Version (PCL-C). The study intended to gauge the severity of PTSD using the PCL-C while employing the PHQ-8 to assess depression and anxiety levels. The evaluation process involved extracting from Med-PALM 2 clinical scores, the rationale for such scores, and the model's confidence in its derived results. The gold standard for this evaluation was the DSM 5 (Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition). The researchers' rigorous testing process involved the analysis of 46 clinical case studies, 115 PTSD evaluations, and 145 depression instances. These were probed using prompts to tease out diagnostic information and clinical scores. The rigorous assessment also saw Med-PaLM 2 fine-tuned through many natural language applications and a substantial textual database. Notably, research-quality clinical interview transcripts were employed as inputs when assessing the model's efficacy. 
Med-PaLM 2 demonstrated its prowess in evaluating psychiatric states across various psychiatric conditions.\nRemarkably, when tasked with predicting psychiatric risk from clinician and patient narratives, the model showcased an impressive accuracy rate ranging between 80% and 84%.\nAnother study evaluated the performance of various LLMs, including Alpaca and its variants, FLAN-T5, GPT-3.5, and GPT-4, across different mental health prediction tasks such as mental state (depressed, stressed or risk actions like suicide) using online text [31] . Through extensive experimentation, including zero-shot, few-shot, and instruction fine-tuning methods, it was found that instruction fine-tuning notably enhances LLMs' effectiveness across all tasks. Notably, the fine-tuned models, Mental-Alpaca and Mental-FLAN-T5, demonstrated superior performance over larger models like GPT-3.5 and GPT-4 and matched the accuracy of task-specific models.\nThe use of conversational agents based on LLMs for mental well-being support is growing, yet the effects of such applications still need to be fully understood. A qualitative study by Ma et al. of 120 Reddit posts and 2,917 comments from a subreddit dedicated to mental health support apps like Replika reveals mixed outcomes [32] . While Replika offers accessible, unbiased support that can enhance confidence and self-exploration, it struggles with content moderation, consistent interactions, memory retention, and preventing user dependency, potentially increasing social isolation.\nFollowing the advancements with ChatGPT, research into automated therapy using AI's latest technologies is gaining momentum. This new direction aims to shift mental health assessments from traditional rating scales to a more natural, language-based communication. The emergence of large language models, like those powering ChatGPT and BERT, marks a significant shift in artificial intelligence, potentially revolutionizing standardized psychological assessments. This evidence points towards AI's capacity to transform mental health evaluations into interactions that mirror natural human communication, pending comprehensive validation in specific application scenarios [33] ." }, { "figure_ref": [], "heading": "Challenges associated with applications of LLMs for Mental Health", "publication_ref": [], "table_ref": [], "text": "In mental health applications, LLMs face challenges like ensuring content sensitivity and safety to avoid harmful advice, maintaining accuracy and reliability to prevent misdiagnoses, and offering personalized, empathetic responses for adequate support. Data privacy and security are paramount due to the personal nature of discussions. There's also a need to prevent user over-reliance on LLMs, potentially deterring professional help. Ethical considerations include the impact of replacing human interactions with AI and avoiding biases. Additionally, navigating regulatory compliance within mental health laws and guidelines is crucial for lawful operation." }, { "figure_ref": [], "heading": "Other Medical Specialties: Nephrology, Gastroenterology, Allergy and immunology", "publication_ref": [], "table_ref": [], "text": "The integration of Large Language Models into medical specialties like nephrology and gastroenterology remains in the early stages, with their full potential yet to be realized. Current applications in these areas are sparse, highlighting opportunities for future exploration and implementation. 
This brief overview aims to shed light on the existing implementations of LLMs within these specific fields, indicating the nascent but promising role of advanced AI technologies in enhancing diagnostic and treatment methodologies in nephrology and gastroenterology." }, { "figure_ref": [], "heading": "8.1.Nephrology", "publication_ref": [ "b33" ], "table_ref": [], "text": "Within the domain of nephrology, LLMs are being utilized to assist in diagnosing kidney diseases, providing treatment guidance, and monitoring renal function, as noted by Wu and colleagues [34] .\nThese LLMs facilitate the evaluation of crucial data such as laboratory results, clinical data, and a patient's medical history during the diagnostic phase. As such, the LLMs chosen for nephrological applications are often preferred to possess a sophisticated medical knowledge capability, especially in multiple-choice medicine test-taking. Various LLMs, including Orca Mini 13B, Stable Vicuna 13B, Falcon 7B, Koala 7B, Claude 2, and GPT-4, have found applications in treating and diagnosing kidney diseases. However, owing to their unique zero-shot reasoning capabilities, GPT-4 and Claude 2 are particularly suitable for this intricate medical specialty. Currently, these models are employed to respond to multiple-choice questions about nephrology. Wu et al. incorporated questions from clinical backgrounds linked to 858 nephSAP multiple-choice queries collated between 2016 and 2023. When evaluating the proficiency of Claude 2 and GPT-4, performance was gauged based on the proportion of correctly answered nephrology-related nephSAP multiple-choice questions. GPT-4 demonstrated superior performance, garnering a score of 73.3%, in contrast to Claude 2, which achieved a score of 54.4%. When individual nephrology topics were examined, GPT-4 consistently outperformed its counterparts, including Claude 2, Vuna, Kaola, Orca-mini, and Falcon." }, { "figure_ref": [], "heading": "Gastroenterology", "publication_ref": [ "b34" ], "table_ref": [], "text": "Lahat et al. explored the capabilities of large language models, specifically OpenAI's ChatGPT, in responding to queries within the realm of gastrointestinal health [35] . Their evaluation employed 110 real-world questions, benchmarking ChatGPT's responses against the expert consensus of seasoned gastroenterologists. These queries spanned a spectrum of topics, from diagnostic tests and prevalent symptoms to treatments for a range of gastrointestinal issues. The source of these questions was public internet platforms. The researchers evaluated the outputs of ChatGPT on metrics such as accuracy, clarity, up-to-dateness, and efficacy, rating them on a scale from 1 to 5. These outputs were then categorized into symptoms, diagnostic tests, and treatments. ChatGPT averaged scores of 3.7 for clarity, 3.4 for accuracy, and 3.2 for efficacy in the symptom category.\nDiagnostic test-related queries resulted in scores of 3.7 for clarity, 3.7 for accuracy, and 3.5 for efficacy. As for treatment-related questions, the model achieved 3.9 for clarity, 3.9 for accuracy, and 3.3 for efficacy. The results indicated the substantial potential of ChatGPT in providing valuable insights within the gastrointestinal specialty." }, { "figure_ref": [], "heading": "Allergy and immunology", "publication_ref": [ "b35" ], "table_ref": [], "text": "In allergy and immunology, LLMs akin to their applications in dermatology, have shown promising potential. 
According to a study by Goktas et al., LLMs, specifically models like GPT-4 and Google Med-PaLM2, significantly enhance the diagnostic process within allergy and immunology disciplines [36] . These advanced models elevate the precision of diagnosis and can tailor treatment plans to suit individual patient needs. Beyond the clinical realm, they also play a pivotal role in fostering patient engagement, ensuring patients are actively involved and informed in their healthcare journey. As a result, the integration of LLMs in allergy and immunology represents a paradigm shift towards more accurate, personalized, and patient-centric medical care." }, { "figure_ref": [], "heading": "Section 9: Handling different types of data in the medical industry", "publication_ref": [], "table_ref": [], "text": "This section provides an overview of how different data formats and types are handled in the medical industry when used as training data or inputs for a large language model." }, { "figure_ref": [], "heading": "Clinical Notes", "publication_ref": [ "b36", "b36" ], "table_ref": [], "text": "Clinical notes, an integral component of patient health records, have increasingly been utilized in medicine as input to large language models (LLMs). These notes, typically generated by healthcare professionals, serve as rich patient information repositories, including their medical history, present symptoms, diagnoses, treatments, and more. Clinical notes are fed into LLMs to extract meaningful patterns, predictions, and insights. Before using these notes, they are often preprocessed to ensure they are in a format that's easily digestible for the models. This preprocessing can involve converting handwritten notes into digital formats, anonymizing patient data to maintain privacy, and structuring the data in a consistent format. LLMs can directly process these notes and produce a range of tools suited for activities like condensing medical data, assisting in clinical decisions, and creating medical reports [37] . To utilize clinical notes in LLMs, prompts containing questions, scenarios, or comments about the note are used, such as \"Assume the role of a neurologist at the Mayo Clinic brain bank clinicopathological conference.\" Based on this, the model provides an output that aids in evaluation or diagnosis across different medical fields [37] ." }, { "figure_ref": [], "heading": "9.2.X-rays/ Images", "publication_ref": [ "b37", "b38" ], "table_ref": [], "text": "X-rays are medical imaging that utilizes ionizing radiation to produce images of internal body organs. This data type may include CT scans (tomography), chest X-rays, and bone X-rays. In medicine, X-ray images can be processed by a computer-aided detection (CAD) model, which is pre-trained to derive the outputs in tensor form. These tensors are then translated into natural language, where they can be used as LLM input to generate summaries or descriptions of the Xray images. Wang et al. illustrated how the X-rays of exam images are handled for utilizing them with the LLMs [38] . They established that the model is fed into pre-trained CAD models to derive the output. Then, translate the tensor (output) into natural language. Lastly, the language models are used to make conclusions and summarize the results. They establish that X-ray images can be used as input in the LLM and fed into the model with prompts to generate the image summarization or descriptive caption. 
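As a purely illustrative sketch of the pipeline just described (the finding labels, probabilities, and helper functions below are hypothetical and are not taken from Wang et al. or any cited work), a CAD model's output tensor can be verbalized into findings and wrapped in a prompt before being handed to an LLM:

```python
# Hypothetical CAD -> text -> LLM prompt flow; none of these names come from the cited works.
from typing import List

def tensor_to_findings(probs: List[float], labels: List[str], threshold: float = 0.5) -> str:
    """Translate per-finding probabilities from a CAD model into a plain-language sentence."""
    positive = [label for label, p in zip(labels, probs) if p >= threshold]
    if not positive:
        return "The CAD model detected no abnormal findings."
    return "The CAD model suggests: " + ", ".join(positive) + "."

def build_prompt(findings: str) -> str:
    return (
        "You are assisting a radiologist. Based on the findings below, write a short, "
        "patient-friendly summary of this chest X-ray.\n\n"
        f"Findings: {findings}"
    )

# Made-up CAD outputs for three finding classes.
labels = ["cardiomegaly", "pleural effusion", "pneumothorax"]
probs = [0.81, 0.12, 0.03]
print(build_prompt(tensor_to_findings(probs, labels)))
```

The resulting prompt would then be sent to whichever LLM is in use; the API call is omitted here to avoid assuming a specific provider or model.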
The LLM supports visual question answering, where the x-ray images of the patients are fed into an image encoder (BLIP-2), where the natural language presentation is generated and embedded based on the image understanding.\nBazi and colleagues proposed a transformer encoder-decoder architecture to handle the visual data when using the LLM [39] . They extracted the image features using the vision transformer (ViT) model and then used the textual encoder transformer to embed the questions. It is then fed to the resulting textual and visual representations into a multi-modal decoder to generate the answers. To demonstrate how LLM handles the visual data, they used VQA datasets for radiology images, termed PathVQA and VQA-RAD. In decoding the radiology images, the proposed model achieved 72.97% and 8.99% for the VQA-RAD and 62.37% or 83.86% for PathVQA." }, { "figure_ref": [], "heading": "9.3.Radiological reports", "publication_ref": [ "b39" ], "table_ref": [], "text": "Radiological reports are documents from radiologists that present the findings or interpretation of medical imaging studies such as MRIs, X-rays, and CT scans. These data are processed as texts within the report to be input for LLMs in medicine. After data augmentation, the radiological reports are used as inputs in the LLM model. Tan and colleagues collected 10,602 CT scan reports from patients with cancer at a single facility [40] . These reports were categorized into four response types: no evidence of disease, partial response, stable disease, or progressive disease. To analyze these reports, we utilized various models, including transformer models, a bidirectional LSTM model, a CNN model, and traditional machine learning approaches. Techniques such as data augmentation through sentence shuffling with consistency loss and prompt-based fine-tuning were applied to enhance the performance of the most effective models." }, { "figure_ref": [], "heading": "9.4.Speech data", "publication_ref": [ "b20" ], "table_ref": [], "text": "Speech data, encompassing medical interviews, consultations, and patient audio interactions, serves as a valuable reservoir of information. Before its use in Large Language Models (LLMs), this data is converted into a textual format through automatic speech recognition (ASR) systems.\nNotably, converting audio data into text is accomplished using pre-trained models, with Wav2vec 2.0 emerging as a leading contender in speech recognition technology. In their groundbreaking work, Agbavor and Liang [21] " }, { "figure_ref": [], "heading": "9.5.Tabular Data", "publication_ref": [ "b40" ], "table_ref": [], "text": "In the medical domain, tabular data typically encompasses clinical measurements, patient records, and lab outcomes, arranged methodically in a matrix of rows and columns. A transformation via tabular modeling is requisite for this structured data to be effectively utilized by Large Language Models (LLMs). The ubiquity of this tabular format in clinical and physician databases has often led to the use of tree-based models like bagging and boosting. However, these models come with their share of limitations. Highlighting an innovative approach to this challenge, Chen et al.\npresented a study employing a data set of 1479 patients undergoing immune checkpoint blockade (ICB) treatments for various cancer types [41] . Segmenting the dataset, with 295 patients for testing and 1184 for training, they unveiled how LLMs process tabular data. 
Crucial to this process is serializing the feature columns into coherent sequences of natural language tokens that the LLM can interpret. This serialization can be achieved through various methods, be it the promptingbased regeneration approach, using {attribute} is {value} functions, or manual serialization templates.\nFurthermore, Chen and his team introduced an advanced tabular model, ClinTaT, augmented from its original design. This refined model incorporates a continuous embedding layer harmonized with multiple distinct layers that mirror the table's continuous feature count. Continuous variables are melded with embedded categorical data for the final processing step, which is then channeled into the transformer for analysis." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Large Language Models (LLMs) applications have carved out a transformative niche in the healthcare sector. From patient engagement and education to diagnostic assistance, administrative support, and medical research, the multifaceted applications of LLMs have demonstrated their potential to optimize various facets of the medical landscape. Their expansive knowledge repositories and adeptness at understanding context and generating human-like textual responses have positioned LLMs as invaluable assets within the healthcare domain. Their integration with chatbots offers a more personalized and efficient patient experience, aiding in tasks ranging from medication clarification to mental health support. On the diagnostic front, incorporating LLMs with electronic health systems and medical imaging promises to enhance the accuracy and efficiency of diagnosis and treatment plans. LLMs' capability to assist in clinical documentation, medical language translation, and medical education for patients highlights their adaptability and relevance in varied healthcare scenarios. However, while the benefits of LLMs are numerous, their practical application in the healthcare sector also underscores the importance of precision, context awareness, and ethical considerations, given the critical nature of medical decision-making. While LLMs like ChatGPT and Med-PaLM have shown significant potential, there's an imperative for ongoing refinement, especially when handling complex or rare medical cases. As LLMs become more integrated into patient care, research addressing the ethical implications, including data privacy, the balance between automation and human intervention, and informed patient consent, will be paramount.\nCollaborative research exploring the fusion of LLMs with other emerging technologies, such as augmented reality or wearable health devices, can open new avenues for patient care and remote monitoring. Enhancing the LLMs' contextual understanding is crucial. Future work should focus on the model's ability to consider a patient's medical history and present conditions before offering recommendations. In sum, the horizon of LLMs in healthcare is expansive and promising. As we continue to witness the convergence of technology and medicine, the collaboration of multidisciplinary teams-combining expertise from AI, medicine, ethics, and other domains-will be integral to harnessing the full potential of LLMs in healthcare." } ]
We aim to present a comprehensive overview of the latest advancements in utilizing Large Language Models (LLMs) within the healthcare sector, emphasizing their transformative impact across various medical domains. LLMs have become pivotal in supporting healthcare stakeholders, including physicians, other healthcare providers, and patients. Our review provides insight into the applications of LLMs in healthcare, focusing specifically on diagnostic and treatment-related functionalities. We shed light on how LLMs are applied in cancer care, dermatology, dental care, neurodegenerative disorders, and mental health, highlighting their innovative contributions to medical diagnostics and patient care. Throughout our analysis, we explore the challenges and opportunities associated with integrating LLMs into healthcare, recognizing their potential across various medical specialties despite existing limitations. Additionally, we offer an overview of how diverse data types are handled within the medical field.
LLMs-Healthcare: Current Applications and Challenges of Large Language Models in Various Medical Specialties
[ { "figure_caption": "Figure 1 :1Figure 1 : Visualizing LLM Applications in different medical specialties w.r.t input data type and medical use-case", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "employed the Wav2vec2-base-960 base model, an advanced tool fine-tuned on an extensive 960-hour dataset of 16 kHz speech audio. Their methodology incorporated Librosa for audio file loading and Wav2Vec2Tokenizer for the crucial task of waveform audio tokenization. These tokenized audio segments are inputted into the Wav2Vec2ForCTC model depending on memory capacities. This model decodes the tokens, resulting in the generation of text transcripts. Furthermore, an alternative approach to leveraging speech data in LLMs involves using open MILE, an open-source toolkit. Open MILE offers functionalities like speech classification and facilitates extracting audio features from speech or musical signals, proving its versatility in handling audio data for various applications.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" } ]
Ummara Mumtaz; Awais Ahmed; Summaya Mumtaz
[ { "authors": "B Min; H Ross; E Sulem", "journal": "ACM Computing Surveys", "ref_id": "b0", "title": "Recent advances in natural language processing via large pre-trained language models: A survey", "year": "2023" }, { "authors": "J Wei; Y Tay; R Bommasani", "journal": "", "ref_id": "b1", "title": "Emergent abilities of large language models", "year": "2022" }, { "authors": "T Brown; B Mann; N Ryder", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b2", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "A J Thirunavukarasu; Dsj Ting; K Elangovan", "journal": "Nature medicine", "ref_id": "b3", "title": "Large language models in medicine", "year": "2023" }, { "authors": "M Cascella; J Montomoli; V Bellini; E Bignami", "journal": "Journal of Medical Systems", "ref_id": "b4", "title": "Evaluating the feasibility of ChatGPT in healthcare: an analysis of multiple clinical and research scenarios", "year": "2023" }, { "authors": "V Sorin; E Klang; Sklair-Levy", "journal": "NPJ Breast Cancer", "ref_id": "b5", "title": "Large language model (ChatGPT) as a support tool for breast tumor board", "year": "2023" }, { "authors": "S Lukac; D Dayan; V Fink", "journal": "Arch Gynecol Obstet", "ref_id": "b6", "title": "Evaluating ChatGPT as an adjunct for the multidisciplinary tumor board decision-making in primary breast cancer cases", "year": "2023" }, { "authors": "G Gebrael; K K Sahu; B Chigarira", "journal": "Cancers", "ref_id": "b7", "title": "Enhancing Triage Efficiency and Accuracy in Emergency Rooms for Patients with Metastatic Prostate Cancer: A Retrospective Analysis of Artificial Intelligence-Assisted Triage Using ChatGPT 4.0", "year": "2023" }, { "authors": "Arya Rao; John Kim; Meghana Kamineni", "journal": "Journal of the American College of Radiology", "ref_id": "b8", "title": "Evaluating GPT as an Adjunct for Radiologic Decision Making: GPT-4 Versus GPT-3.5 in a Breast Imaging Pilot", "year": "2023" }, { "authors": "H L Haver; E B Ambinder; M Bahl", "journal": "Radiology", "ref_id": "b9", "title": "Appropriateness of Breast Cancer Prevention and Screening Recommendations Provided by ChatGPT", "year": "2023" }, { "authors": "A Sarraju; D Bruemmer; E Van Iterson", "journal": "JAMA", "ref_id": "b10", "title": "Appropriateness of Cardiovascular Disease Prevention Recommendations Obtained From a Popular Online Chat-Based Artificial Intelligence Model", "year": "2023" }, { "authors": "B Schulte", "journal": "Cureus", "ref_id": "b11", "title": "Capacity of ChatGPT to Identify Guideline-Based Treatments for Advanced Solid Tumors", "year": "2023" }, { "authors": "J Haemmerli; L Sveikata; A Nouri", "journal": "BMJ Health Care Inform", "ref_id": "b12", "title": "ChatGPT in glioma adjuvant therapy decision making: ready to assume the role of a doctor in the tumour board?", "year": "2023" }, { "authors": "S Chen; B H Kann; M B Foote", "journal": "JAMA Oncol", "ref_id": "b13", "title": "Use of Artificial Intelligence Chatbots for Cancer Treatment Information", "year": "2023" }, { "authors": "A Yakupu; R Aimaier; B Yuan", "journal": "Front Public Health", "ref_id": "b14", "title": "The burden of skin and subcutaneous diseases: findings from the global burden of disease study 2019", "year": "2023" }, { "authors": "K Urban; S Chu; R L Giesey", "journal": "JAAD Int", "ref_id": "b15", "title": "Burden of skin disease and associated socioeconomic status in Asia: a cross-sectional analysis from the Global Burden of Disease Study 1990-2017", "year": "2020" 
}, { "authors": "M Burlando; A Muracchioli; E Cozzani", "journal": "Case Rep. Dermatol", "ref_id": "b16", "title": "Biologic Therapy: Case Report and Narrative Review", "year": "2021" }, { "authors": "J Zhou; X He; L Sun", "journal": "Electrical Engineering and Systems Science", "ref_id": "b17", "title": "SkinGPT-4: An Interactive Dermatology Diagnostic System with Visual Large Language Model", "year": "2023" }, { "authors": "B N Dugger; D W Dickson", "journal": "Cold Spring Harb Perspect Biol", "ref_id": "b18", "title": "Pathology of Neurodegenerative Disease", "year": "2017" }, { "authors": "S Koga; N B Martin; D W Dickson", "journal": "Brain Pathology", "ref_id": "b19", "title": "Evaluating the performance of large language models: ChatGPT and Google bard in generating differential diagnoses in clinicopathological conferences of neurodegenerative disorders", "year": "2023" }, { "authors": "F Agbavor; H Liang", "journal": "PLOS Digital Health", "ref_id": "b20", "title": "Predicting dementia from spontaneous speech using large language models", "year": "2022" }, { "authors": "S Luz; F Haider; S De La Fuente", "journal": "", "ref_id": "b21", "title": "Detecting cognitive decline using speech only: The ADReSSo Challenge", "year": "2021" }, { "authors": "C Mao; J Xu; L Rasmussen", "journal": "Journal of Biomedical Informatics", "ref_id": "b22", "title": "AD-BERT: Using pre-trained language model to predict the progression from mild cognitive impairment to Alzheimer's disease", "year": "2023" }, { "authors": "H Cai; X Huang; Z Liu", "journal": "", "ref_id": "b23", "title": "Exploring Multimodal Approaches for Alzheimer's Disease Detection Using Patient Speech Transcript and Audio Data", "year": "2023" }, { "authors": "Y Feng; J Wang; X Gu", "journal": "", "ref_id": "b24", "title": "Large language models improve Alzheimer's disease diagnosis using multi-modality data", "year": "2023" }, { "authors": "Y Ying; T Yang; H Zhou", "journal": "Applied Intelligence", "ref_id": "b25", "title": "Multimodal fusion for alzheimer's disease recognition", "year": "2023" }, { "authors": "H Mohammad-Rahimi; S R Motamedian; M H Rohban", "journal": "J Dent", "ref_id": "b26", "title": "Deep learning for caries detection: A systematic review", "year": "2022-03-30" }, { "authors": "R Urban", "journal": "Electronics", "ref_id": "b27", "title": "AI-assisted CBCT data management in modern dental practice: benefits, limitations and innovations", "year": "2023" }, { "authors": "H Huang; O Zheng; D Wang", "journal": "International Journal of Oral Science", "ref_id": "b28", "title": "ChatGPT for shaping the future of dentistry: The potential of multi-modal large language model", "year": "2023" }, { "authors": "I R Galatzer-Levy; D N Mcduff; A Karthikesalingam; M Malgaroli", "journal": "Computation and Language", "ref_id": "b29", "title": "The Capability of Large Language Models to Measure Psychiatric Functioning", "year": "2023" }, { "authors": "X Xu; B Yao; Y Dong", "journal": "", "ref_id": "b30", "title": "Leveraging large language models for mental health prediction via online text data", "year": "2023" }, { "authors": "Z Ma; Y Mei; Z Su", "journal": "AMIA Annu Symp Proc", "ref_id": "b31", "title": "Understanding the Benefits and Challenges of Using Large Language Model-based Conversational Agents for Mental Well-being Support", "year": "2023" }, { "authors": "O Kjell; K Kjell; H A Schwartz", "journal": "", "ref_id": "b32", "title": "AI-based large language models are ready to transform psychological health 
assessment", "year": "2023" }, { "authors": "S Wu; M Koo; Blum", "journal": "", "ref_id": "b33", "title": "A comparative study of open-source large language models, GPT-4 and Claude 2: Multiple-choice test taking in nephrology", "year": "2023" }, { "authors": "A Lahat; E Shachar; B Avidan", "journal": "Diagnostics", "ref_id": "b34", "title": "Evaluating the utility of a large language model in answering common patients' gastrointestinal health-related questions: Are we there yet?", "year": "1950" }, { "authors": "P Goktas; G Karakaya; Kalyoncu", "journal": "The Journal of Allergy and Clinical Immunology: In Practice", "ref_id": "b35", "title": "Artificial intelligence chatbots in allergy and immunology practice: Where have we been and where are we going?", "year": "2023" }, { "authors": "K Singhal; S Azizi; T Tu", "journal": "", "ref_id": "b36", "title": "Large language models encode clinical knowledge", "year": "2023" }, { "authors": "S Wang; Z Zhao; X Ouyang", "journal": "Computer Science", "ref_id": "b37", "title": "ChatCAD: Interactive Computer-Aided Diagnosis on Medical Image using Large Language Models", "year": "2023" }, { "authors": "Y Bazi; M M Rahhal; L Bashmal; M Zuair", "journal": "Bioengineering", "ref_id": "b38", "title": "Vision-language model for visual question answering in medical imagery", "year": "2023" }, { "authors": "R S Tan; Q Lin; G H Low", "journal": "Journal of the American Medical Informatics Association", "ref_id": "b39", "title": "Inferring cancer disease response from radiology reports using large language models with data augmentation and prompting", "year": "2023" }, { "authors": "Z Chen; M M Balan; K Brown", "journal": "", "ref_id": "b40", "title": "Language models are few-shot learners for prognostic prediction", "year": "2023" } ]
[]
[ { "figure_ref": [ "fig_0", "fig_0", "fig_2" ], "heading": "Introduction", "publication_ref": [ "b68", "b70", "b89", "b92", "b93", "b10", "b10", "b44", "b2", "b10", "b44", "b94", "b30", "b52", "b68", "b94", "b5", "b26", "b68", "b89", "b44", "b90", "b30", "b89", "b9", "b68", "b80", "b68", "b18", "b68", "b31", "b53", "b91", "b12", "b71", "b76", "b18", "b30", "b30", "b89", "b9", "b68", "b6", "b9", "b24", "b45", "b68", "b21" ], "table_ref": [], "text": "Though digital pathology images have been widely used for Cancer diagnosis [4,51,68,70,88,91,92] and prog-* Corresponding author. nosis [9,11] via automatic computer-assisted analysis, the Giga-pixels of resolution, as large as 150, 000 × 150, 000 pixels [11,51], still poses great challenges on both precise annotations and efficient computation for model training [45]. Thus, previous methods [3,9,11,44,45,93] focus on developing annotation-& computational-efficient learning to cope with those problems by employing Multiple Instance Learning (MIL) [31,52] with only WSI-level supervision. The MIL is defined as predicting the highest level category of instances as final result within a bag, where all small patches in the WSI are regarded as instances to con-arXiv:2311.12885v1 [cs.CV] 21 Nov 2023 stitute a bag sample [5,51,68,93], and the category of WSI corresponds to the max lesion level of all patch instances. Currently, there are mainly three steps (or mainstream genre) for WSI-MIL analysis framework: 1) accessing better instance-level patch embedding via Self-supervised Learning [6,9,27,44]. 2) designing WSI head architectures [51, 68,88] and train the head with frozen instance embedding. 3) fine-tuning embedding and WSI head simultaneously [45,89] with top-K instances for better taskspecific results. Here in this paper, we focus on the step-2 and find that there are still some room for improvement: Firstly, the global-attention used in AB-MIL, DS-MIL, CLAM, etc. [31,44,51,88] with light computational cost (compared to self-attention) can not model contextual information within WSI (including local-spatial context and long-range dependency). In other words, the relation between different instances, or pairwise interaction is ignored, which is quite useful indeed for prediction decision making [10,68] and should be performed by self-attention. Secondly, though the self-attention computation complexity on long sequence WSI instances (Figure 1a) can be alleviated by Linear Attention [79,83], also used in TransMIL [68,83], its softmax approximation only get sub-optimal performance compared to self-attention as pointed out by Tri et.al [19]. Most importantly, shape varying large WSIs (as shown in Figure 1b) makes absolute position embedding for WSI-MIL used in [9,68] can not be well generalized (see more visual illustration Figure 2).\nAbove issues present a strong need for better positional embedding with input length extrapolation ability as well as memory-efficient Transformer for shape varying, long contextual WSIs modelling. Motivated by recent advancements of Large Language Model [32,53,74,90] on long-context modelling [12-14, 60, 71], we propose to leverage relative positional embeddings [13,60,71] to replace traditional absolute embeddings [76,81]. Specifically, we employ Attention with Linear Bias (ALiBi) [60] that biases query-key attention scores with a penalty which is proportional to their distance. 
Since such Bias is original designed as linearly dependent on words index distance for 1-d language modelling, we adapt it as also linearly dependent on the Euclidean distance for the 2-d large scale WSI. In addition to the positional embedding, we further use FlashAttention (FA) [19] for long sequence memory efficient Transformer modelling especially on memory saving, which also keeps full ability like original self-attention, compared to Linear Attention. Assisted by the efficient Transformer and relative 2-d spatial positional embedding provided by FA and ALiBi respectively, we can model both the semantic and spatialpositional correlation among instances within the extremely long sequence of 2-d shape varying WSI. Our main contributions can be concluded into 3 folds: [31] is adopted to learns instance weights adaptively, allowing the model to focus on informative regions within the WSIs. This approach significantly reduces the annotation burden on pathologists while still providing valuable insights for patient-level diagnosis. In the context of weakly-supervised pathology WSI analysis, several innovative approaches, DS-MIL, CLAM, DTFD, etc. [31,44,51,88] have been proposed. However, their utilized global-attention with light computational cost can not model WSI contextual information, which is used in pathologist diagnosis decision making [10,68]. The fine-grained details and global contextual information can also be captured by multi-scale modeling [9,44]. Graph Network [7,10,25,46] is also useful to make model be context-aware. Similar to this, HIPT [9] and TransMIL [68] explored the advantages of Transformer with pairwise interaction learning ability to model such contextual information. Since Transformer can be generalized to Graph Network [22], both modelling the pairwise interaction, thus in this paper we focus more on Transformer and try to adapt it better to fit the shape varying and long context properties of WSI." }, { "figure_ref": [], "heading": "Efficient Transformer for Long Range Arena", "publication_ref": [ "b61", "b4", "b0", "b38", "b72", "b79", "b81", "b38", "b80", "b36", "b68", "b18", "b17", "b53", "b91", "b54", "b23", "b19", "b59", "b18", "b33" ], "table_ref": [], "text": "The primary goal of this area is to alleviate the computation and memory complexity of self-attention mechanism on long sequence input. Earliest modifications simply sparsify the attention matrix, including Blockwise [61], Local Attention[55], Sparse Transformer [15] and Longformer [1] relative position bias with large or dilated stride. Extend to above fixed patterns, some work [39,67,72,78,80] using learnable patterns in a data-driven fashion, e.g. Reformer [39] introduces a hash-based similarity measure to efficiently cluster tokens into chunks. Linformer [79] technique leverage low-rank approximations of the self-attention matrix, decomposing the N × N matrix to N × k. The kernels also serve as a approximation of the attention matrix, including Performers [37], Linear Transformers [16] and Random Feature Attention ([57]) Another popular method of reducing computation cost is to reduce the resolution of the sequence, hence reducing computation cost by a commensurate factor, e.g. Perceiver[33], Swin Transformer[50]. The recent Nyströmformer[83], been used in TransMIL [68] of WSI-MIL, can be seem like kernel-based low-rank approach. 
Above work mainly focus on a light approximation of self-attention or using sparse attention, which is indeed worse than the full attention [19].\nAnother lines of work try to merge RNN and Transformer, e.g. Transformer-XL [18] proposed a segmentlevel recurrence mechanism that connects multiple segments and blocks, and now is widely used in most successful LLM [53,74,90]. Attention Free Transformer [87] replaces dot-product self-attention with a computationally efficient alternative. RWKV [56], Linear Recurrent Units [54], State space models [24] and its variants [20,59] are also proposed, but these recurrent ability of attention is designed for text sequence with causal or auto-regressive property, not fit well for image recognition. Recent work like FlashAttention [19] and others [34,62] using chunked computation scheme and IO-aware mechanism to be memory-efficient and gain full ability like self-attention. We argue that this kind of work is more suitable for WSI analysis task since the total sequence length of most WSI-MIL tasks will be around 10-20k, but self-attention approximation or attention free work try to scaling the model into infinite length for language modelling, which will lose some self-attention ability." }, { "figure_ref": [], "heading": "Long Contextual Positional Embedding", "publication_ref": [ "b18", "b80", "b76", "b20", "b16", "b20", "b40", "b8", "b63", "b64", "b22", "b46", "b66", "b71", "b12", "b46" ], "table_ref": [], "text": "Recently, explore positional embeddings to longer sequences play a vital role in LLM to solving long context modelling [19,79]. Initially, absolute positional embedding assign a positional vector and adds it to the embedding vector by the first work predefined sinusoidal function [76]. Followed by the success of BERT [21], learnable absolute positional embeddings have been applied to the task of masked language modeling [17,21,41,49], Autoregressivedecoding [63,64], and sequence-to-sequence [23,43] settings.\nA line of work studies the ways to extrapolate sinusoidal positional embeddings to longer sequences by randomly shifting absolute positions during training [40] or augmenting with continuous signals [47]. While being easy to implement, it is challenging to extend absolute positional embeddings to unseen longer sequence lengths, which is known as the length extrapolation issue [60]. As opposed to the modeling of absolute position, relative positional embeddings (RPE) that model the positional difference has become popular in the literature [8, 18, 28-30, 38, 69, 84]. In particular, the T5 model that considers bucketed relative distances and log-binning has been shown to perform well on various Transformer architectures [66]. Rotary positional embedding (RoPE) [71] encodes the position with rotations, with the rotation's property, the query-key product exhibits a positional difference. However, the RoPE only fits well trained length, showing poor performance on unseen or seldom seen length or position. Attention with Linear Bias (ALiBi) [13,60] proposes to add relative positional bias term directly to the attention matrix, which provide the extrapolation ability ( training short but testing long) for language models. Despite above work focuses on the NLP domain, recent work has applied positional embeddings to other domains such as vision [81] and speech [47]. However, similar problem also happens in ViT's [81] naive 2d absolute position embedding, making it not fit our WSI task well. 
Since most computer vision tasks pose no special long-sequence modelling problem, histopathology WSI analysis presents a distinct challenge of 2-d long-sequence modelling for images, which we tackle in this paper." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Attention-based WSI Analysis", "publication_ref": [ "b77", "b25", "b35", "b68" ], "table_ref": [], "text": "Given a WSI X as input, the goal is to make a slide-level prediction Ŷ by learning a classifier f(X; θ). Because of its extremely large resolution, X is first patched into a long sequence of small instances X = {x_1, ..., x_N}, where N is the number of instances. The slide-level supervision Y is given by doctors, who consider the latent labels y_i of all instances x_i. Most previous work [5,9,51,77] models this process by a max-pooling operation, so initially the annotation process can be treated as
$Y = \max\{y_1, \ldots, y_N\}. \quad (1)$
Since end-to-end training from raw image input to WSI-level output is impossible because of the large memory usage, conventional approaches split it into two separate stages. First, all small patches are converted into instance embeddings Z = {z_1, ..., z_N} by a pretrained backbone such as a CNN [26] or ViT [81], which either provides general features from public ImageNet or is trained on related data to extract domain-specific representations [9,36]. Then, all patch features within a slide are aggregated to produce the slide-level prediction Y = g(Z; θ). In this paper, we mainly focus on the latter stage, where g is a global-attention function followed by a linear classifier head:
$Y = \sigma\left(\sum_{i=1}^{N} a_i z_i\right), \quad (2)$
where a_i are the attention weights and σ(·) is a linear head. However, this vanilla method with global attention (which assigns an adaptive weight to each instance for a simple weighted summation, or pooling) cannot model the interactions among different instances. To handle this problem, a Transformer with self-attention is employed in TransMIL [68] and HIPT [9], where the attention sublayer computes the attention scores for the i-th query q_i ∈ R^{1×d}, (1 ≤ i ≤ N), in each head, with d the head dimension. In other words, each instance computes an attention score as an interaction with all instances. These attention scores are then multiplied by the values to return the output of the attention sub-layer:
$o_i = \mathrm{softmax}(q_i K^\top)\, V, \quad (3)$
where {Q, K, V} ∈ R^{N×d} are obtained by linear transforms of Z with different parameters and O ∈ R^{N×d} is the output. Given O, which encodes the interactions among instances (pairwise interactions), we can apply Equation (2) with O in place of Z for the final prediction; mean-pooling or the class token of ViT [81] can also be adopted. Note that we omit dropout, the FFN, residual connections and some other Transformer blocks for simplicity." }, { "figure_ref": [ "fig_4" ], "heading": "Memory-efficient Attention for WSI", "publication_ref": [ "b68" ], "table_ref": [], "text": "Though the Transformer with self-attention can model the interactions among different instances well, its memory usage is too heavy, O(N²), for the long sequences of WSIs (on average about 8k tokens at 20× magnification with a 224×224 patch size) because of the pairwise attention-score calculation (see the ablations in Figure 4).
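To make the scale of this problem concrete, the following rough sketch (the sizes are illustrative assumptions, not this paper's exact configuration) estimates the memory of the naive N × N score matrix and shows how a fused, IO-aware kernel (here PyTorch's torch.nn.functional.scaled_dot_product_attention, available since version 2.0) computes the same result without materializing that matrix, while still accepting an additive positional bias through its attn_mask argument:

```python
import torch
import torch.nn.functional as F

# Back-of-the-envelope memory for naive self-attention on one WSI bag
# (illustrative numbers, not this paper's exact configuration).
N, H = 8192, 8                               # ~8k instances, 8 heads
gib = H * N * N * 2 / 2**30                  # fp16 score matrix, 2 bytes per element
print(f"naive (H, N, N) score matrix: ~{gib:.1f} GiB per layer, before softmax outputs and gradients")

# Fused kernels compute the same result in tiles and never store the full N x N matrix.
n, d = 1024, 64                              # smaller toy size so the demo runs anywhere
q = torch.randn(1, H, n, d)
k = torch.randn(1, H, n, d)
v = torch.randn(1, H, n, d)
bias = torch.zeros(1, H, n, n)               # placeholder for a relative-position bias
out = F.scaled_dot_product_attention(q, k, v, attn_mask=bias)
print(out.shape)                             # torch.Size([1, 8, 1024, 64])
```

Which fused backend is selected when a bias is supplied depends on the PyTorch version; the FlashAttention kernel adopted in this work exposes the same tiled, IO-aware computation directly.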
Also find details in the supplementary materials on the computational complexity and GPU memory usage of the forward and backward passes of self-attention and linear attention. Instead of approximating the attention matrix with the Nyströmformer [83] as in TransMIL [68], we turn to FlashAttention (FA), which computes exact self-attention without information loss at comparable speed. We omit the FA algorithm here because of its complexity and its interaction with hardware; please refer to the original paper and the supplementary materials for details." }, { "figure_ref": [ "fig_2", "fig_2", "fig_3" ], "heading": "Attention with Positional Bias", "publication_ref": [ "b76", "b71", "b11", "b83" ], "table_ref": [], "text": "Since the operation in Equation (3) is position-agnostic, the Transformer [76] models contextual interactions by incorporating position information, which can also be seen as an inductive bias for self-attention. Absolute positional embeddings assign a positional vector p_m to each position m and add it to the embedding vector:
$z_i = z_i + p_{m,i}.$
To improve long-sequence ability, relative positional embeddings, which model the positional difference m − n, have become popular. Rotary positional embedding (RoPE) [71] encodes the position with rotations, f(q_m, m) = R_m q_m, where R_m is a rotation matrix with angles proportional to m. Thanks to the rotation property, the query-key product depends only on the positional difference:
$f(q_m, m)\, f(k_n, n)^\top = q_m R_{n-m} k_n^\top. \quad (4)$
The core idea of RoPE is to inject the position signals m and n into q and k so that the relative position is reflected in the resulting attention matrix; the rotation satisfies this product property and yields the rotation matrix above. Although RoPE is designed for 1-d language sequences, it can also be extended to a 2-d form for WSI analysis; we omit the derivation here and refer to the Supplementary Material for details. However, RoPE needs to be trained or fine-tuned on lengths that are otherwise unseen or rarely seen [12,48,82], as shown in Figure 2b. We therefore introduce Attention with Linear Bias [60] to 2-d shape-varying WSI analysis (2d-ALiBi). The main modification is to add a bias term to the query-key dot-product attention matrix. In the original 1-d ALiBi [60], the bias is a static, non-learned matrix, $\mathrm{softmax}(q_m K^\top + \tau \cdot [-(m-1), \ldots, -2, -1, 0])$, computed from the distance between token positions (closer tokens receive a smaller penalty):
$q_m k_n^\top - \tau\, |m - n|, \quad (5)$
where the scalar τ is a per-head coefficient fixed before training. With this predefined distance-aware bias matrix (see Figure 2c for a visualization), relative positions are encoded well no matter how long the unseen sequence is; in other words, the model extrapolates. This property suits our shape-varying WSI analysis, in which many positions are seldom seen during training, leading to sub-optimal learning for RoPE.
To extend ALiBi to 2-d WSIs, the bias in Equation (5) is converted by computing the 2-d Euclidean distance between positions (Figure 3 visualizes a simple $(2 \times 2) \cdot (2 \times 2)$ position matrix; a larger matrix is given in the supplementary materials):
$q_m k_n^\top - \tau \sqrt{|m_j - n_j|^2 + |m_k - n_k|^2}, \quad (6)$
where j and k index the two coordinate axes."
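To make Equation (6) concrete, the sketch below builds the 2-d ALiBi bias directly from the (row, column) grid coordinates of the foreground patches; the geometric per-head slope schedule is borrowed from the original 1-d ALiBi recipe and is an assumption here, since this paper may set its head coefficients differently:

```python
import torch

def alibi_slopes(num_heads: int) -> torch.Tensor:
    # Geometric slope schedule from the original 1-d ALiBi paper (assumes a
    # power-of-two head count); this paper may tune the coefficients differently.
    start = 2 ** (-8.0 / num_heads)
    return torch.tensor([start ** (i + 1) for i in range(num_heads)])

def alibi_2d_bias(coords: torch.Tensor, num_heads: int) -> torch.Tensor:
    """coords: (N, 2) integer grid positions (row, col) of the foreground patches.
    Returns an additive bias of shape (num_heads, N, N) implementing Eq. (6):
    minus a per-head slope times the Euclidean distance between patch positions."""
    dist = torch.cdist(coords.float(), coords.float(), p=2)        # (N, N)
    tau = alibi_slopes(num_heads).view(num_heads, 1, 1)            # per-head slope
    return -tau * dist.unsqueeze(0)                                # (H, N, N)

# Toy usage: a 3 x 4 patch grid flattened into N = 12 instances.
ys, xs = torch.meshgrid(torch.arange(3), torch.arange(4), indexing="ij")
coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1)         # (12, 2)
bias = alibi_2d_bias(coords, num_heads=8)                          # (8, 12, 12)
# `bias` is added to q @ k^T before the softmax, or passed as the mask/bias term
# of a fused attention kernel, exactly where a causal mask would normally go.
```

In practice the same distance matrix can be pre-computed once for a maximal grid and indexed by each WSI's foreground coordinates, as described in the next section.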
}, { "figure_ref": [ "fig_3" ], "heading": "Long-MIL framework and implementation", "publication_ref": [ "b35", "b18" ], "table_ref": [], "text": "To realize long contextual MIL modelling and better WSI analysis performance, the overall framework (as depicted in Figure 3) of our method includes 3 stages: BRACS tumor subtyping ViT-S [36] ViT 1) Segmenting and patching WSI into instances, then save its corresponding foreground patch feature embedding and 2-d positions for preparation. 2) Performing pairwise computations among all positions within a WSI to get distances as 2-d positional bias matrix for attention. For efficiency, we pre-compute a large matrix (300×300)•(300×300), then for each WSI, using the foreground standardized positions as indexing to get sub-matrix for needing (2-d ALiBi). 3) Calculating the vanilla attention matrix, then add it with above 2-d position bias matrix, thus the 2-d positionaware attention (ATTN) is obtained. Using the newly ATTN to make sof tmax operation to get pairwise attention score and finally we finish the long contextual spatial information interaction and fusion by the position aware attention score adaptive summation. This step is fully supported and accelerated by FlashAttention [19] ( inputting the bias matrix as the mask term, which is widely used in NLP for causal generation)." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we present the performance of the proposed method and compare it with various baselines. Ablation experiments are performed to further study the proposed method and for paper length, more experimental results are presented in the Supplementary." }, { "figure_ref": [], "heading": "Datasets and Tasks.", "publication_ref": [ "b1", "b58" ], "table_ref": [], "text": "We use four datasets to evaluate our method. For the slide-level tumor subtyping performance, our method is evaluated on two datasets: BReAst Carcinoma Subtyping (BRACS) [2] collect H&E stained Histology Images, containing 547 WSIs for three lesion types, i.e., benign, malignant and atypical, which are further subtyped into seven categories. Here, since the WSIs number is limited, we only perform three class subtyping. The WSIs are segmented in 20× magnitude and non-overlapping patching with 224 × 224 size. The Cancer Genome Atlas Breast Cancer (TCGA-BRCA) [58,73] is a public dataset for breast invasive carcinoma cohort for Invasive Ductal Carcinoma versus Invasive Lobular Carcinoma subtyping. The WSIs are segmented into nonoverlapping tissue-containing patches with a size of 256 × 256 (keep consistency to previous work [9]) at 20× magnification patches were curated from 1038 WSIs. For the slidelevel survival prediction, we includes 2 TCGA histology datasets: 1) A combination dataset of the Colon adenocarcinoma and Rectum adenocarcinoma Esophageal carcinoma (TCGA-COADREAD), which includes 316 WSIs as used in HIPT [9]. 2) Stomach adenocarcinoma (TCGA-STAD) dataset including 321 WSIs." }, { "figure_ref": [], "heading": "Pretraining Backbones.", "publication_ref": [ "b35", "b35", "b5", "b25", "b5", "b5", "b18" ], "table_ref": [], "text": "Our work mainly focus on the WSI-head results based on some good pretrained embeddings for histopathology [9,36]. For tumor subtyping of BRACS, we utilize the embedding proposed in [36] which add some pathology-domain specific augmentations to DINO [6] pretraining process on their collected diverse histology patches. 
We also compare the results on ResNet-50 [26] the consistency of the method. Thus for TCGA-BRCA, we also pretrain the backbone with DINO [6] by extracting all patch raw image. For fair comparisons on the experiments of TCGA-related data of survival prediction with previous work, we use the pretrained VIT-small embedding proposed by HIPT[9] which is finished with DINO [6] on about 10k TCGA histopathology WSI data. We omit the ResNet-50 embedding for survival prediction since it get quite low unacceptable results. Implementation Details.\nWe train our model with PyTorch on a RTX-3090 GPU, with a WSI-level batchsize of 1, learning rate of 1e-4, and weight decay of 1e-2. To save memory usage and boosting the self-attention operation, we employ Flash-Attention [19] instead. The position bias term is fed into flashattention on the traditional mask term. Also checking our codes in supplementary material for further details." }, { "figure_ref": [], "heading": "Slide-level Tumor Subtyping", "publication_ref": [ "b30", "b89", "b68", "b30", "b89", "b68", "b35", "b35" ], "table_ref": [ "tab_2", "tab_3", "tab_3", "tab_3" ], "text": "Evaluation Metrics. For all the experiments, we report the macro-AUC and macro-F1 scores since all these dataset suffering class imbalance. For TCGA-BRCA, we perform 10-fold cross-validation with the same data split adopted in HIPT [9]. Besides, the dataset BRACS is officially split into training, validation and testing, thus the experiment is conducted 5-times with different random seeds. The mean and standard variance values of performance metrics are reported for multi-runs or cross-validation runs. Baselines for comparison. We first show the results of Mean-/Max-pooling and KNN for traditional evaluation.\nThen we directly evaluate several classical WSI-MIL methods, including AB-MIL [31], DS-MIL[44], CLAM [51], DTFD-MIL [88]. Then we compare our method with some state-of-the-art combining position embedding on Transformer, TransMIL [68]. We omit HIPT [9] in this task since it need WSI larger than a threshold. We finally show our proposed long-contextual position embedding module, including RoPE and ALiBi in 2-d form.\nResults Analysis: For BRACS 3-categories tumor subtyping, the results are reported in Table 1. We can first observe that both FA and 2-d positional embedding show their improvement respectively. For FA, attributing to its full self-attention for pairwise interaction ability, it shows better performance compared to all global-attention modules [31,44,51,88] and especially TransMIL [68] which use linear attention approximation. We also notice that backbone embedding extracted from ViT-S pretrained by Kang et al. [36] showing superiority compared our DINO pretrained model on the training set of BRACS data, which may because of its large dataset learned generalization. The embedding of ResNet-50 pretrained on ImageNet is included in supplementary materials. Based on better embedding with [36], the AUC score only shows slight improvement equipped with 2-d positional embedding, especially for RoPE without extrapolation ability showing no improvement also in F1-score. However, our 2d-ALiBi still show significant improvement on F1-score, which is quite important for multi-class problem.\nFor TCGA-BRCA 2-categories tumor subtyping, the results are reported in Table 2. For fair comparisons to former work on this data, we utilize ViT-S pretrained in HIPT[9], as well as commonly used ResNet-50 pretrained on Ima-geNet. 
We find that the improvement of our method on ResNet-50 is limited (right column of Table 2), given its knowledge and semantic domain gap to histopathology data. In other words, such an embedding may lose spatial information after layers of convolution, resulting in poor performance. With a better embedding pretrained on the pathology domain (left column of Table 2), our method shows promising improvement. We therefore argue that a good pretrained embedding is needed to unlock the potential of positional embedding. " }, { "figure_ref": [], "heading": "Slide-level Survival Prediction", "publication_ref": [ "b87", "b30", "b86", "b45", "b68", "b10", "b34", "b10", "b41", "b65", "b75" ], "table_ref": [ "tab_4" ], "text": "Evaluation Metrics. For all the experiments, C-Index scores are reported for the 2 datasets. We follow the data splits and pretrained patch embeddings proposed in HIPT [9] for fair comparison. The results are reported as the mean and standard deviation of the performance metrics over multi-fold cross-validation with the same running setting as HIPT [9].
Comparison with baselines. For this task, we use the survival cross-entropy loss proposed by Zadeh et al. [86].
The results are summarized in Table 3, where we directly evaluate several survival prediction WSI-MIL methods, including AB-MIL [31], AMISL [85], DS-MIL [44] and GCN-MIL [46]. We then compare our method with state-of-the-art Transformers incorporating position embedding: TransMIL [68] and HIPT [9]. Although our method shows some improvement, the C-index score is still too low for daily clinical usage when relying on WSI information alone.
In the near future, we would like to investigate this task further, e.g., by combining multi-modality features as used in [11,35], since the Transformer is also inherently strong at multi-modality fusion [11,42,65,75]." }, { "figure_ref": [ "fig_4" ], "heading": "Further Ablation Experiments", "publication_ref": [], "table_ref": [], "text": "Here we provide ablations on the training efficiency of different Transformer implementations, as shown in Figure 4. We also provide other performance ablations (number of Transformer blocks and attention heads, bias slope coefficient, weight decay, dropout ratio) in the supplementary materials, since the Transformer easily over-fits on this task. We also mark the maximum instance number of the WSIs used in this paper to show the potential for future higher-magnification learning." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, our proposed Long-contextual MIL (Long-MIL) method addresses the challenges in histopathology image analysis, offering superior performance in handling shape-varying Whole Slide Images (WSIs). By introducing Linear Bias into Attention and leveraging the Flash-Attention module, our approach enhances position embedding and tackles computational complexity, respectively. Extensive evaluations across four datasets affirm the effectiveness of Long-MIL in WSI classification and survival prediction tasks. Given the strong long-sequence modelling ability of our method, in the future we would like to adapt it to longer sequences at higher resolution, and thus richer information. We will also devote more effort to the unresolved problem of multi-modality survival prediction to help save more lives." } ]
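The 2-d ALiBi bias described in the "Long-MIL framework and implementation" section above can be summarized in a short, self-contained sketch. This is not the authors' released code: the slope value, tensor shapes, and the use of PyTorch's `scaled_dot_product_attention` (whose additive `attn_mask` stands in for the FlashAttention mask/bias term mentioned in the paper) are illustrative assumptions, and the N×N bias is computed directly from the foreground patch positions rather than by indexing a pre-computed grid table.

```python
import torch
import torch.nn.functional as F

def alibi_2d_bias(positions: torch.Tensor, slope: float = 0.1) -> torch.Tensor:
    """positions: (N, 2) (row, col) indices of the N foreground patches of one WSI.

    Returns an (N, N) additive attention bias of -slope * Euclidean distance, so that
    patches that are far apart receive a larger penalty, mirroring 2-d ALiBi.
    """
    coords = positions.float()
    dist = torch.cdist(coords, coords, p=2)          # sqrt(|d_row|^2 + |d_col|^2)
    return -slope * dist

def position_aware_attention(q, k, v, bias):
    """q, k, v: (batch, heads, N, head_dim); bias: (N, N), broadcast over batch and heads."""
    # The additive attn_mask plays the role of the 2-d positional bias; with FlashAttention
    # the same matrix would be supplied through its mask/bias argument.
    return F.scaled_dot_product_attention(q, k, v, attn_mask=bias.to(q.dtype))

if __name__ == "__main__":
    N, heads, head_dim = 1024, 6, 64
    pos = torch.randint(0, 300, (N, 2))              # standardized 2-d patch positions
    bias = alibi_2d_bias(pos)
    q, k, v = (torch.randn(1, heads, N, head_dim) for _ in range(3))
    out = position_aware_attention(q, k, v, bias)    # (1, heads, N, head_dim)
```

In the full method the bias would typically use per-head slopes, as in 1-d ALiBi, rather than the single illustrative slope shown here.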
Histopathology image analysis is the gold standard of clinical diagnosis for cancers. In doctors' daily routine and in computer-aided diagnosis, the Whole Slide Image (WSI) of histopathology tissue is used for analysis. Because of the extremely large resolution, previous methods generally divide the WSI into a large number of patches and then aggregate all patches within a WSI by Multi-Instance Learning (MIL) to make the slide-level prediction when developing computer-aided diagnosis tools. However, most previous WSI-MIL models, which use global attention without pairwise interaction or any positional information, or self-attention with absolute position embedding, cannot handle shape-varying large WSIs well; e.g., testing WSIs after model deployment may be larger than training WSIs, since the model development set is always limited due to the difficulty of collecting histopathology WSIs. To deal with this problem, we propose to amend the position embedding for shape-varying, long-contextual WSIs by introducing Linear Bias into Attention, adapting it from 1-d long sequences to 2-d long-contextual WSIs, which helps the model extrapolate the position embedding to unseen or under-fitted positions. We further utilize the Flash-Attention module to tackle the computational complexity of the Transformer while keeping full self-attention performance, in contrast to previous attention-approximation work. Our method, Long-contextual MIL (Long-MIL), is evaluated in extensive experiments on 4 datasets, covering WSI classification and survival prediction tasks, to validate its superiority on shape-varying WSIs. The source code will be open-accessed soon.
Long-MIL: Scaling Long Contextual Multiple Instance Learning for Histopathology Whole Slide Image Analysis
[ { "figure_caption": "Figure 1 .1Figure 1. Two attributes of WSIs' shape pose challenges for contextual modelling. 1) extremely long sequence even in 20× magnitude, which could be quadruple in 40×. 2) shape variance of WSIs and their foreground distribution.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Here we show why the shape varying WSI need long sequence positional embeddings with extrapolation ability. a) The normalized 2-d position index distribution of WSI foreground patches mainly scattered within a circle (index<100). Thus the positions in area enclosed by the dashed line suffers under-fitting if using traditional positional embedding as shown in b), where performance gets quickly decreased during testing on unseen longer input length (fail to extrapolate). Though longer training input can smooth the problem in NLP, it is unable to replicate this to histopathology due to the scarcity of WSI training data. Thus to address this issue, ALiBi is a more appropriate tool with strong extrapolation ability, whose intuitive visualization can be find in c): longer distance needs larger penalty or bias to attention score. Since the relative position bias is pre-defined and needs no training, it shows strong generalization on unseen long positions.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Long-MIL framework for WSI spatial contextual information fusion. 1) Preparing foreground patch feature embedding with its Q, K, V transformation and 2-d positions of WSIs. 2) Performing pairwise computations among all positions within a WSI to get distances as 2-d positional bias matrix for attention. 3) Calculating the vanilla attention matrix and add it with above 2-d position bias matrix, then using the standard Transformer process to finish WSI prediction.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure4. Training memory usage and speed using different Attentions. We also show markers about the max instance number of WSI used in this paper to show potentials on future higher magnitude learning.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "1) We propose to adapt Attention with Linear Bias into 2-d positional embedding for shape varying large WSI, which provides input length extrapolation ability to generalize on different input size and under-fitted positions of WSI", "figure_data": "during testing.2) We use FlashAttention for efficient Transformer compu-tation to replace current self-attention and Linear Atten-tion, which helps us modelling long sequence of WSI in-stances in lightweight GPU memory and computationalcost without no information loss.3) Our WSI-analysis experiments are performed on both di-agnosis and prognosis task on 4 WSI dataset includingBreast, Stomach, Colon and Rectal Carcinoma, whichshow strong universality of the method and practical po-tential for real-world applications.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Slide-Level Tumor Subtyping on BRACS by using two pre-trained embeddings. Top Rows. Various WSI-MIL architectures with global-attention (no interaction among different instances). Bottom Rows. 
Previous state-of-the-art model TransMIL (using Linear self-attention and learnable absolute position embedding), and our proposed FlashAttention and relative positional embedding modules.", "figure_data": "-S DINO (our pretrain)", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "embedding pretrained Ima-geNet just like most previous work do[44, 51,68]. The multiple kinds of embedding backbone also help demonstrating Slide-Level Tumor Subtyping on TCGA-BRCA by using two pre-trained embeddings. We show various WSI-MIL architectures with global-attention, Linear self-attention, learnable absolute position embedding and our proposed FlashAttention with relative positional embedding modules.", "figure_data": "TCGA-BRCA tumor subtypingViT-S DINO (our pretrain)ResNet-50 (ImageNet pretrain)MethodF1AUCF1AUCKNN (Mean)0.671±0.0550.843±0.0200.585±0.0480.742±0.016KNN (Max)0.652±0.0380.718±0.0040.516±0.0330.691±0.016Mean-pooling0.832±0.0420.936±0.0100.751±0.0490.861±0.026Max-pooling0.843±0.0200.935±0.0080.780±0.0270.886±0.301AB-MIL [31]0.854±0.0130.940±0.0150.760±0.0460.851±0.057DS-MIL[44]0.850±0.0530.933±0.0110.797±0.0360.894±0.029CLAM-SB [51]0.853±0.0200.926±0.0210.779±0.0350.878±0.027DTFD-MIL MaxS[88]0.799±0.0560.900±0.0350.653±0.6040.798±0.236DTFD-MIL AFS[88]0.841±0.0250.921±0.0120.787±0.0370.897±0.027TransMIL [68]0.831±0.0370.928±0.0150.741±0.1260.854±0.051FlashAttention (FA) [19]0.861±0.0350.943±0.0100.800±0.0140.901±0.014FA + 2d-RoPE (ours)0.863±0.0180.939±0.0300.772±0.0650.907±0.017FA + 2d-ALiBi (ours)0.871±0.0400.946±0.0110.781±0.0470.919±0.008", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Slide-Level Survival Prediction based on HIPT[9] pre-", "figure_data": "COADREADSTADAB-MIL [31]0.566±0.0750.562±0.049AMISL [85]0.561±0.0880.563±0.067DS-MIL[44]0.470±0.0530.546±0.047GCN-MIL [46]0.538±0.0490.513±0.069HIPT [9]0.608±0.0880.570±0.081TransMIL [68]0.597±0.1340.564±0.080FlashAttention(FA)0.603±0.0480.568±0.074FA + 2d-RoPE0.613±0.0770.575±0.045FA + 2d-ALiBi0.624±0.0570.589±0.066trained embedding abd Various WSI-MIL architectures includingglobal-attention, GCN, linear attention (TransMIL with absolutelearnable embedding) and self-attention (HIPT with absolute em-bedding). Our Flash Attention with extrapolation relative positionembedding (2d-ALiBi) show strong performance.", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" } ]
Honglin Li; Yunlong Zhang; Chenglu Zhu; Jiatong Cai; Sunyi Zheng; Lin Yang
[ { "authors": "Iz Beltagy; Matthew E Peters; Arman Cohan", "journal": "", "ref_id": "b0", "title": "Longformer: The long-document transformer", "year": "2020" }, { "authors": "Nadia Brancati; Anna Maria Anniciello; Pushpak Pati; Daniel Riccio; Giosuè Scognamiglio; Guillaume Jaume; Giuseppe De Pietro; Maurizio Di Bonito; Antonio Foncubierta; Gerardo Botti; Maria Gabrani; Florinda Feroce; Maria Frucci", "journal": "", "ref_id": "b1", "title": "Bracs: A dataset for breast carcinoma subtyping in h&e histology images", "year": "2021" }, { "authors": "Wouter Bulten; Kimmo Kartasalo; Peter Po-Hsuan Cameron Chen; Hans Ström; Kunal Pinckaers; Yuannan Nagpal; David F Cai; Hester Steiner; Robert Van Boven; Vink", "journal": "Nature medicine", "ref_id": "b2", "title": "Artificial intelligence for diagnosis and gleason grading of prostate cancer: the panda challenge", "year": "2022" }, { "authors": "Jiatong Cai; Chenglu Zhu; Can Cui; Honglin Li; Tong Wu; Shichuan Zhang; Lin Yang", "journal": "Springer", "ref_id": "b3", "title": "Generalizing nucleus recognition model in multi-source ki67 immunohistochemistry stained images via domain-specific pruning", "year": "2021-10-01" }, { "authors": "Gabriele Campanella; Matthew G Hanna; Luke Geneslaw; Allen Miraflor; Vitor Werneck Krauss; Klaus J Silva; Edi Busam; Brogi; E Victor; David S Reuter; Thomas J Klimstra; Fuchs", "journal": "Nature medicine", "ref_id": "b4", "title": "Clinical-grade computational pathology using weakly deep learning on whole slide images", "year": "2019" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b5", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Fernando Tsai Hor Chan; Julio Cendra; Lan Ma; Guosheng Yin; Lequan Yu", "journal": "", "ref_id": "b6", "title": "Histopathology whole slide image analysis with heterogeneous graph representation learning", "year": "2023" }, { "authors": "Pu-Chin Chen; Henry Tsai; Srinadh Bhojanapalli; Hyung Won Chung; Yin-Wen Chang; Chun-Sung Ferng", "journal": "", "ref_id": "b7", "title": "A simple and effective positional encoding for transformers", "year": "2021" }, { "authors": "Richard J Chen", "journal": "", "ref_id": "b8", "title": "Scaling vision transformers to gigapixel images via hierarchical self-supervised learning", "year": "2008" }, { "authors": "Richard J Chen; Ming Y Lu; Muhammad Shaban; Chengkuan Chen; Tiffany Y Chen; F K Drew; Faisal Williamson; Mahmood", "journal": "", "ref_id": "b9", "title": "Whole slide images are 2d point clouds: Context-aware survival prediction using patch-based graph convolutional networks", "year": "2021" }, { "authors": "Richard J Chen; Ming Y Lu; Wei-Hung Weng; Tiffany Y Chen; F K Drew; Trevor Williamson; Maha Manz; Faisal Shady; Mahmood", "journal": "", "ref_id": "b10", "title": "Multimodal co-attention transformer for survival prediction in gigapixel whole slide images", "year": "2021" }, { "authors": "Yukang Chen; Shengju Qian; Haotian Tang; Xin Lai; Zhijian Liu; Song Han; Jiaya Jia", "journal": "", "ref_id": "b11", "title": "Longlora: Efficient finetuning of long-context large language models", "year": "2023" }, { "authors": "Ta-Chung Chi; Peter J Ting-Han Fan; Alexander Ramadge; Rudnicky", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b12", "title": "Kerple: Kernelized relative positional embedding for length extrapolation", "year": "2022" }, { "authors": "Ta-Chung Chi; 
Alexander I Ting-Han Fan; Rudnicky", "journal": "", "ref_id": "b13", "title": "Receptive field alignment enables transformer length extrapolation", "year": "2022" }, { "authors": "Rewon Child; Scott Gray; Alec Radford; Ilya Sutskever", "journal": "", "ref_id": "b14", "title": "Generating long sequences with sparse transformers", "year": "2019" }, { "authors": "Krzysztof Choromanski; Valerii Likhosherstov; David Dohan; Xingyou Song; Jared Davis; Tamas Sarlos; David Belanger; Lucy Colwell; Adrian Weller", "journal": "", "ref_id": "b15", "title": "Masked language modeling for proteins via linearly scalable long-context transformers", "year": "2020" }, { "authors": "Kevin Clark; Minh-Thang Luong; Quoc V Le; Christopher D Manning", "journal": "", "ref_id": "b16", "title": "Electra: Pre-training text encoders as discriminators rather than generators", "year": "2020" }, { "authors": "Zihang Dai; Zhilin Yang; Yiming Yang; Jaime Carbonell; Ruslan Quoc V Le; Salakhutdinov", "journal": "ACL", "ref_id": "b17", "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "year": "2019" }, { "authors": "Tri Dao; Daniel Y Fu; Stefano Ermon; Atri Rudra; Christopher Ré", "journal": "", "ref_id": "b18", "title": "FlashAttention: Fast and memory-efficient exact attention with IO-awareness", "year": "2022" }, { "authors": "Tri Dao; Daniel Y Fu; Khaled K Saab; Armin W Thomas; Atri Rudra; Christopher Ré", "journal": "", "ref_id": "b19", "title": "Hungry hungry hippos: Towards language modeling with state space models", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b20", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Vijay Prakash; Dwivedi ; Xavier Bresson", "journal": "", "ref_id": "b21", "title": "A generalization of transformer networks to graphs", "year": "2020" }, { "authors": "Jonas Gehring; Michael Auli; David Grangier; Denis Yarats; Yann N Dauphin", "journal": "", "ref_id": "b22", "title": "Convolutional sequence to sequence learning", "year": "2017" }, { "authors": "Albert Gu; Karan Goel; Christopher Ré", "journal": "", "ref_id": "b23", "title": "Efficiently modeling long sequences with structured state spaces", "year": "2022" }, { "authors": "Yonghang Guan; Jun Zhang; Kuan Tian; Sen Yang; Pei Dong; Jinxi Xiang; Wei Yang; Junzhou Huang; Yuyao Zhang; Xiao Han", "journal": "", "ref_id": "b24", "title": "Node-aligned graph convolutional network for whole-slide image representation and classification", "year": "2022" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b25", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Kaiming He; Xinlei Chen; Saining Xie; Yanghao Li; Piotr Dollár; Ross B Girshick", "journal": "", "ref_id": "b26", "title": "Masked autoencoders are scalable vision learners", "year": "2021" }, { "authors": "Pengcheng He; Xiaodong Liu; Jianfeng Gao; Weizhu Chen", "journal": "", "ref_id": "b27", "title": "{DEBERTA}: {DECODING}-{enhanced} {bert} {with} {disentangled} {attention}", "year": "2021" }, { "authors": "Cheng-Zhi Anna Huang; Ashish Vaswani; Jakob Uszkoreit; Noam Shazeer; Ian Simon; Curtis Hawthorne; Andrew M Dai; Matthew D Hoffman; Monica Dinculescu; Douglas Eck", "journal": "", "ref_id": "b28", "title": "Music transformer", "year": "2019" }, { "authors": "Zhiheng Huang; Davis Liang; Peng Xu; Bing Xiang", "journal": "", "ref_id": 
"b29", "title": "Improve transformer models with better relative position embeddings", "year": "2020" }, { "authors": "Maximilian Ilse; Jakub Tomczak; Max Welling", "journal": "PMLR", "ref_id": "b30", "title": "Attention-based deep multiple instance learning", "year": "2018" }, { "authors": "Srinivasan Iyer; Xi Victoria Lin; Ramakanth Pasunuru; Todor Mihaylov; Dániel Simig; Ping Yu; Kurt Shuster; Tianlu Wang; Qing Liu; Punit Singh Koura", "journal": "", "ref_id": "b31", "title": "Opt-iml: Scaling language model instruction meta learning through the lens of generalization", "year": "2022" }, { "authors": "Andrew Jaegle; Felix Gimeno; Andrew Brock; Andrew Zisserman; Oriol Vinyals; Joao Carreira", "journal": "", "ref_id": "b32", "title": "Perceiver: General perception with iterative attention", "year": "2021" }, { "authors": "Hanhwi Jang; Joonsung Kim; Jae-Eon Jo; Jaewon Lee; Jangwoo Kim", "journal": "", "ref_id": "b33", "title": "Mnnfast: A fast and scalable system architecture for memory-augmented neural networks", "year": "2019" }, { "authors": "Guillaume Jaume; Anurag Vaidya; Richard Chen; Drew Williamson; Paul Liang; Faisal Mahmood", "journal": "", "ref_id": "b34", "title": "Modeling dense multimodal interactions between biological pathways and histology for survival prediction", "year": "2023" }, { "authors": "Mingu Kang; Heon Song; Seonwook Park; Donggeun Yoo; Sérgio Pereira", "journal": "", "ref_id": "b35", "title": "Benchmarking self-supervised learning on diverse pathology datasets", "year": "2023" }, { "authors": "Angelos Katharopoulos; Apoorv Vyas; Nikolaos Pappas; Franc ¸ois; Fleuret ", "journal": "", "ref_id": "b36", "title": "Transformers are rnns: Fast autoregressive transformers with linear attention", "year": "2020" }, { "authors": "Guolin Ke; Di He; Tie-Yan Liu", "journal": "", "ref_id": "b37", "title": "Rethinking positional encoding in language pre-training", "year": "2021" }, { "authors": "Nikita Kitaev; Łukasz Kaiser; Anselm Levskaya", "journal": "", "ref_id": "b38", "title": "Reformer: The efficient transformer", "year": "" }, { "authors": "Shun Kiyono; Sosuke Kobayashi; Jun Suzuki; Kentaro Inui", "journal": "", "ref_id": "b39", "title": "SHAPE: Shifted absolute position embedding for transformers", "year": "2021" }, { "authors": "Zhenzhong Lan; Mingda Chen; Sebastian Goodman; Kevin Gimpel; Piyush Sharma; Radu Soricut", "journal": "", "ref_id": "b40", "title": "Albert: A lite bert for self-supervised learning of language representations", "year": "2020" }, { "authors": "Kuang-Huei Lee; Xi Chen; Gang Hua; Houdong Hu; Xiaodong He", "journal": "", "ref_id": "b41", "title": "Stacked cross attention for image-text matching", "year": "2018" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Ves Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b42", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2019" }, { "authors": "Bin Li; Yin Li; Kevin W Eliceiri", "journal": "", "ref_id": "b43", "title": "Dual-stream multiple instance learning network for whole slide image classification with self-supervised contrastive learning", "year": "2021" }, { "authors": "Honglin Li; Chenglu Zhu; Yunlong Zhang; Yuxuan Sun; Zhongyi Shui; Wenwei Kuang; Sunyi Zheng; Lin Yang", "journal": "", "ref_id": "b44", "title": "Task-specific fine-tuning via variational information bottleneck for weakly-supervised pathology whole slide image classification", 
"year": "2023" }, { "authors": "Ruoyu Li; Jiawen Yao; Xinliang Zhu; Yeqing Li; Junzhou Huang", "journal": "Springer", "ref_id": "b45", "title": "Graph cnn for survival analysis on whole slide pathological images", "year": "2018" }, { "authors": "Tatiana Likhomanenko; Qiantong Xu; Gabriel Synnaeve; Ronan Collobert; Alex Rogozhnikov", "journal": "", "ref_id": "b46", "title": "Cape: Encoding relative positions with continuous augmented positional embeddings", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b47", "title": "", "year": "2021" }, { "authors": "Xiaoran Liu; Hang Yan; Shuo Zhang; Chenxin An; Xipeng Qiu; Dahua Lin", "journal": "", "ref_id": "b48", "title": "Scaling laws of rope-based extrapolation", "year": "2023" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b49", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": "b50", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Ming Y Lu; Tiffany Y Drew Fk Williamson; Richard J Chen; Matteo Chen; Faisal Barbieri; Mahmood", "journal": "Nature Biomedical Engineering", "ref_id": "b51", "title": "Data-efficient and weakly supervised computational pathology on wholeslide images", "year": "2007" }, { "authors": "Oded Maron; Tomás Lozano-Pérez", "journal": "Advances in neural information processing systems", "ref_id": "b52", "title": "A framework for multiple-instance learning", "year": "1997" }, { "authors": " Openai", "journal": "", "ref_id": "b53", "title": "", "year": "2023" }, { "authors": "Antonio Orvieto; L Samuel; Albert Smith; Anushan Gu; Caglar Fernando; Razvan Gulcehre; Soham Pascanu; De", "journal": "", "ref_id": "b54", "title": "Resurrecting recurrent neural networks for long sequences", "year": "2023" }, { "authors": "Niki Parmar; Ashish Vaswani; Jakob Uszkoreit; Lukasz Kaiser; Noam Shazeer; Alexander Ku; Dustin Tran", "journal": "PMLR", "ref_id": "b55", "title": "Image transformer", "year": "2018" }, { "authors": "Bo Peng; Eric Alcaide; Quentin Anthony; Alon Albalak; Samuel Arcadinho; Huanqi Cao; Xin Cheng; Michael Chung; Matteo Grella; Kranthi Kiran; G V ", "journal": "", "ref_id": "b56", "title": "Rwkv: Reinventing rnns for the transformer era", "year": "2023" }, { "authors": "Hao Peng; Nikolaos Pappas; Dani Yogatama; Roy Schwartz; Noah Smith; Lingpeng Kong", "journal": "", "ref_id": "b57", "title": "Random feature attention", "year": "2021" }, { "authors": "Shazia Nicholas A Petrick; Akbar; H H Kenny; Sharon Cha; Berkman Nofech-Mozes; Marios A Sahiner; Jayashree Gavrielides; Karen Kalpathy-Cramer; Anne Ll Drukker; Martel", "journal": "Journal of Medical Imaging", "ref_id": "b58", "title": "Spie-aapm-nci breastpathq challenge: an image analysis challenge for quantitative tumor cellularity assessment in breast cancer histology images following neoadjuvant treatment", "year": "2021" }, { "authors": "Michael Poli; Stefano Massaroli; Eric Nguyen; Daniel Y Fu; Tri Dao; Stephen Baccus; Yoshua Bengio; Stefano Ermon; Christopher Ré", "journal": "", "ref_id": "b59", "title": "Hyena hierarchy: Towards larger convolutional language models", "year": "2023" }, { "authors": "Ofir Press; Noah A Smith; Mike Lewis", "journal": "", "ref_id": "b60", "title": "Train 
short, test long: Attention with linear biases enables input length extrapolation", "year": "2021" }, { "authors": "Jiezhong Qiu; Hao Ma; Omer Levy; Scott Wen-Tau Yih; Sinong Wang; Jie Tang", "journal": "", "ref_id": "b61", "title": "Blockwise selfattention for long document understanding", "year": "2019" }, { "authors": "Markus N Rabe; Charles Staats", "journal": "", "ref_id": "b62", "title": "Self-attention does not need o(n 2 ) memory", "year": "2022" }, { "authors": "Alec Radford; Karthik Narasimhan; Tim Salimans; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b63", "title": "Improving language understanding by generative pre-training", "year": "2018" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b64", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b65", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b66", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Aurko Roy; Mohammad Saffar; Ashish Vaswani; David Grangier", "journal": "", "ref_id": "b67", "title": "Efficient content-based sparse attention with routing transformers", "year": "2020" }, { "authors": "Zhuchen Shao; Hao Bian; Yang Chen; Yifeng Wang; Jian Zhang; Xiangyang Ji; Zhang", "journal": "", "ref_id": "b68", "title": "Transmil: Transformer based correlated multiple instance learning for whole slide image classification", "year": "" }, { "authors": "Peter Shaw; Jakob Uszkoreit; Ashish Vaswani", "journal": "", "ref_id": "b69", "title": "Selfattention with relative position representations", "year": "2018" }, { "authors": "Zhongyi Shui; Sunyi Zheng; Xiaoxuan Yu; Shichuan Zhang; Honglin Li; Jingxiong Li; Lin Yang", "journal": "", "ref_id": "b70", "title": "Deformable proposal-aware p2pnet: A universal network for cell recognition under point supervision", "year": "2023" }, { "authors": "Jianlin Su; Yu Lu; Shengfeng Pan; Bo Wen; Yunfeng Liu", "journal": "", "ref_id": "b71", "title": "Roformer: Enhanced transformer with rotary position embedding", "year": "2021" }, { "authors": "Yi Tay; Dara Bahri; Liu Yang; Donald Metzler; Da-Cheng Juan", "journal": "PMLR", "ref_id": "b72", "title": "Sparse sinkhorn attention", "year": "2020" }, { "authors": "Katarzyna Tomczak; Patrycja Czerwińska; Maciej Wiznerowicz", "journal": "Contemporary Oncology/Współczesna Onkologia", "ref_id": "b73", "title": "Review the cancer genome atlas (tcga): an immeasurable source of knowledge", "year": "2015" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b74", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Yao-Hung Hubert Tsai; Shaojie Bai; Paul Pu Liang; J Zico Kolter; Louis-Philippe Morency; Ruslan Salakhutdinov", "journal": "NIH Public Access", "ref_id": "b75", "title": "Multimodal transformer for unaligned multimodal 
language sequences", "year": "2019" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "NeurIPS", "ref_id": "b76", "title": "Attention is all you need", "year": "2017" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b77", "title": "Attention is all you need", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b78", "title": "", "year": "2017" }, { "authors": "Apoorv Vyas; Angelos Katharopoulos; Franc ¸ois; Fleuret ", "journal": "", "ref_id": "b79", "title": "Fast transformers with clustered attention", "year": "2020" }, { "authors": "Sinong Wang; Belinda Z Li; Madian Khabsa; Han Fang; Hao Ma", "journal": "", "ref_id": "b80", "title": "Linformer: Self-attention with linear complexity", "year": "2020" }, { "authors": "Shuohang Wang; Luowei Zhou; Zhe Gan; Yen-Chun Chen; Yuwei Fang; Siqi Sun; Yu Cheng; Jingjing Liu", "journal": "", "ref_id": "b81", "title": "Clusterformer: Clustering-based sparse transformer for long-range dependency encoding", "year": "2020" }, { "authors": "Kan Wu; Houwen Peng; Minghao Chen; Jianlong Fu; Hongyang Chao", "journal": "", "ref_id": "b82", "title": "Rethinking and improving relative position encoding for vision transformer", "year": "2021" }, { "authors": "Guangxuan Xiao; Yuandong Tian; Beidi Chen; Song Han; Mike Lewis", "journal": "", "ref_id": "b83", "title": "Efficient streaming language models with attention sinks", "year": "2023" }, { "authors": "Yunyang Xiong; Zhanpeng Zeng; Rudrasis Chakraborty; Mingxing Tan; Glenn Fung; Yin Li; Vikas Singh", "journal": "", "ref_id": "b84", "title": "Nyströmformer: A nyström-based algorithm for approximating self-attention", "year": "2021" }, { "authors": "Zhilin Yang; Zihang Dai; Yiming Yang; Jaime Carbonell; Russ R Salakhutdinov; Quoc V Le", "journal": "Curran Associates, Inc", "ref_id": "b85", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "year": "2019" }, { "authors": "Jiawen Yao; Xinliang Zhu; Jitendra Jonnagaddala; Nicholas Hawkins; Junzhou Huang", "journal": "Medical Image Analysis", "ref_id": "b86", "title": "Whole slide images based cancer survival prediction using attention guided deep multiple instance learning networks", "year": "2020" }, { "authors": "Gorgi Shekoufeh; Matthias Zadeh; Schmid", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b87", "title": "Bias in crossentropy-based training of deep survival networks", "year": "2020" }, { "authors": "Shuangfei Zhai; Walter Talbott; Nitish Srivastava; Chen Huang; Hanlin Goh; Ruixiang Zhang; Josh Susskind", "journal": "", "ref_id": "b88", "title": "An attention free transformer", "year": "2021" }, { "authors": "Hongrun Zhang; Yanda Meng; Yitian Zhao; Yihong Qiao; Xiaoyun Yang; Sarah E Coupland; Yalin Zheng", "journal": "", "ref_id": "b89", "title": "Dtfdmil: Double-tier feature distillation multiple instance learning for histopathology whole slide image classification", "year": "2022" }, { "authors": "Jingwei Zhang; Saarthak Kapse; Ke Ma; Prateek Prasanna; Joel Saltz; Maria Vakalopoulou; Dimitris Samaras", "journal": "", "ref_id": "b90", "title": "Prompt-mil: Boosting multi-instance learning schemes via task-specific prompt tuning", "year": "2023" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; 
Christopher Dewan; Mona Diab; Xian Li; Xi Victoria Lin", "journal": "", "ref_id": "b91", "title": "Opt: Open pre-trained transformer language models", "year": "2022" }, { "authors": "Shichuan Zhang; Chenglu Zhu; Honglin Li; Jiatong Cai; Lin Yang", "journal": "IEEE", "ref_id": "b92", "title": "Weakly supervised learning for cell recognition in immunohistochemical cytoplasm staining images", "year": "2022" }, { "authors": "Yunlong Zhang; Yuxuan Sun; Honglin Li; Sunyi Zheng; Chenglu Zhu; Lin Yang", "journal": "Springer", "ref_id": "b93", "title": "Benchmarking the robustness of deep neural networks to common corruptions in digital pathology", "year": "2022" }, { "authors": "Yunlong Zhang; Honglin Li; Yuxuan Sun; Sunyi Zheng; Chenglu Zhu; Lin Yang", "journal": "", "ref_id": "b94", "title": "Attention-challenging multiple instance learning for whole slide image classification", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 122.48, 647.08, 163.88, 9.65 ], "formula_id": "formula_0", "formula_text": "Y = max{y 1 , ..., y N }.(1)" }, { "formula_coordinates": [ 4, 391.77, 178.55, 153.34, 30.32 ], "formula_id": "formula_1", "formula_text": "Y = σ( N i=1 a i z i ),(2)" }, { "formula_coordinates": [ 4, 379.06, 381.03, 166.05, 11.72 ], "formula_id": "formula_2", "formula_text": "o i = softmax(q i K ⊤ )v i ,(3)" }, { "formula_coordinates": [ 5, 208.84, 454.52, 61.62, 9.65 ], "formula_id": "formula_3", "formula_text": "z i = z i + p m,i ." }, { "formula_coordinates": [ 5, 94.79, 553.37, 191.58, 12.69 ], "formula_id": "formula_4", "formula_text": "f (q m , m)f (k n , n) ⊤ = q m R n-m k ⊤ n .(4)" }, { "formula_coordinates": [ 5, 388.99, 440.35, 156.12, 12.69 ], "formula_id": "formula_5", "formula_text": "q m k ⊤ n -τ |m -n|(5)" }, { "formula_coordinates": [ 5, 347.58, 623.32, 193.66, 13.59 ], "formula_id": "formula_6", "formula_text": "q m k ⊤ n -τ |m j -n j | 2 + |m k -n k | 2 , (6" }, { "formula_coordinates": [ 5, 541.24, 626.61, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": ")" } ]
2023-12-04
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b17", "b27", "b32", "b33", "b17", "b27", "b32", "b33", "b22", "b28", "b58", "b11", "b52", "b53", "b3", "b46", "b53", "b54", "b13", "b25", "b31", "b47", "b2", "b4", "b4", "b40", "b26", "b28" ], "table_ref": [], "text": "The field of image animation has recently gained attention, especially for creating short videos and animated GIFs for social media and photo-sharing platforms. However, current image animation methods [3,9,18,28,33,34] are limited in animating specific object types, such as fluid [18,28,33,34,38], natural scenes [7, 23,29,46,59], human hair [58], portraits [12,53,54], and bodies [3,4,25,47,54,55], limiting their practical application in open domain scenarios. Recent advancements in video diffusion models [11,14,17,26,32,48,52,65] have enabled the generation of diverse and realistic videos based on reference texts and images. In this paper, we aim to address the open domain image animation problem by leveraging the motion priors of video diffusion models. We propose a controllable diffusion-based image animation method capable of animating arbitrary objects within an image while pre-serving their details. Our practical experience has shown that creating animations solely through prompt text is laborious and challenging, and results in limited control over finer details. To enhance user control over the animation process, we introduce the motion area guidance and motion strength guidance, allowing for precise and interactive control of the motion speed of multiple objects, significantly improving controllable and fine-grained image animation.\nTo accurately identify movable objects and their corresponding movable regions within an image, we introduce motion area masks. Inspired by ControlNet [63], we append the mask along the channel dimension of the video latent representation and initialize the convolutional weights to zero, allowing them to adjust incrementally during the training process. This approach enables fine-grained and precise control over multiple movable areas in the input image, even when using multiple prompting texts. Training the model to follow the guidance of the motion area mask presents a significant challenge, as it is difficult to amass and annotate a substantial corpus of real videos in which only specific regions are in motion. To address this issue, we propose an unsupervised technique to generate synthetic videos with motion area masks derived from actual videos. The model is trained on both synthetic and real videos to ensure the generation of realistic videos directed by the motion area mask.\nTo effectively control the speed of moving objects in image animation, we introduce the metric of motion strength. Frame rate per second (FPS) represents the number of frames displayed in one second and previous methods [5,21,52] relied on FPS to control motion speed. However, it is important to note that different object types may exhibit varying motion speeds, and FPS primarily serves as a global scaling factor to indirectly adjust the motion speed of multiple objects. For instance, a video featuring a sculpture may have a high FPS but zero motion speed. To enable direct learning of motion speed by the video diffusion model, we propose a novel motion strength loss to supervise the model in learning inter-frame variances in the latent space.\nIn recent developments, image-to-video diffusion models [5,17,52] have emerged for generating realistic videos from input images. 
These models employ the CLIP vision encoder [41] to encode the reference image, capturing its semantic structure and style. However, they often struggle to preserve fine-grained visual details of the reference image. To address this limitation and apply video diffusion models to image animation tasks, we propose encoding the reference image into a latent space using a VAE [27]. This latent representation can then serve as the first frame of the generated video, effectively preserving image details without introducing additional parameters to existing video diffusion models, thereby enabling efficient training and inference for image animation tasks.\nThe combination of prompt text, motion area mask, and motion strength guidance enables the generation of complex image animations in diverse real-world scenarios. A sequential approach can be adopted, where a specific object is initially animated with a prompt text, followed by the animation of another object with a different prompt text. This iterative process allows users to progressively and interactively modify the image animation until a satisfactory outcome is achieved. An example of interactive image animation is illustrated in the last row of Figure 1. Initially, the actions of the boy are manipulated using a motion area mask marked in red together with the prompting text: \"the boy is running on the beach\". Subsequently, one of the two girls is animated through another motion area mask marked in green, accompanied by the prompting text: \"the girl runs to the sea\". In summary, our contributions in this work are threefold:\n• We introduce a highly flexible motion guidance mechanism that allows for fine-grained open domain image animation guided by motion area and motion strength. We propose a synthetic motion area mask generation method and a novel motion strength loss for effective training. adopts a two-stage process for human-centric video generation, while Generative Dynamics [29] focuses on modeling oscillatory motion in natural scenes. However, these methods are domain-specific. In contrast, our approach addresses the challenge of open-domain image animation." }, { "figure_ref": [], "heading": "Image generation with diffusion models", "publication_ref": [ "b12", "b26", "b5", "b14", "b41", "b40", "b6", "b44", "b41", "b39", "b34", "b2", "b21" ], "table_ref": [], "text": "The evolution of image generation research has transitioned from traditional frameworks such as Generative Adversarial Networks (GANs) [13], Variational Autoencoders (VAEs) [27], and autoregressive transformer models (ARMs) [6], to the more recent diffusion models (DMs) [15]. This shift is attributed to the stability, superior sample quality, and conditional generation capabilities of DMs. DALLE-2 [42] represents a significant advancement by integrating the CLIP model [41] for text-image feature alignment, enabling text-prompted image synthesis. GLIDE [37] introduces classifier-free guidance to refine image quality, while Imagen [45] utilizes a sequence of diffusion models for high-resolution image creation. Latent Diffusion Models (LDMs) [42] employ an autoencoder [10] to manage the diffusion process in latent space, enhancing efficiency. Subsequent models like DiTs [39] and SDXL [40] further concentrate on latent space manipulation. For conditional generation, T2I-Adapter [35] and ControlNet [63] have been developed to integrate spatial conditions such as depth maps and sketches into LDMs. 
Composer [22] extends this by training LDMs with multiple conditions for more precise control. Building on these developments, our work introduces motion area and motion strength guidance to provide fine-grained control of image animation." }, { "figure_ref": [], "heading": "Video generation with diffusion models", "publication_ref": [ "b6", "b39", "b44", "b43", "b15", "b47", "b25", "b63", "b4" ], "table_ref": [], "text": "Recent advancements in diffusion models (DMs) have shown great promise in video generation [37,39,40,43,45]. The Video Diffusion Model (VDM) [17] pioneers this domain by adapting the image diffusion U-Net architecture [44] into a 3D U-Net structure for joint image and video training. Imagen Video [16] employs a series of video diffusion models for high-resolution, temporally coherent video synthesis. Make-A-Video [48] innovatively learns motion patterns from unlabeled video data, while Tune-A-Video [57] explores one-shot video generation by fine-tuning LDMs with a single text-video pair. Text2Video-Zero [26] tackles zero-shot video generation using pretrained LDMs without further training. ControlVideo [64] introduces a hierarchical sampler and memory-efficient framework to craft extended videos swiftly. Concurrently, VideoCrafter1 [5] integrates image guidance from CLIP into diffusion models via cross-attention. Despite these innovations, capturing complex motion and camera dynamics remains challenging. VideoComposer [52] and DragNUWA [62] propose motion trajectory-based control for video generation, yet they fall short in interactive animation with multiple objects. Our approach addresses this by incorporating motion masks and motion strength parameters to precisely manipulate individual objects within an image, thus facilitating more user-centric interactive animation generation." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "We provide an overview of the video diffusion model in Section 3.1, followed by its adaptation for the image animation task in Section 3.2. Sections 3.3 and 3.4 offer detailed information on the integration of motion guidance into the generation process. Lastly, Sections 3.5 and 3.6 present the inference process under different guidance." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b43" ], "table_ref": [], "text": "In this section, we introduce the preliminary knowledge of the latent diffusion-based model (LDM) [43]. Given an image sample x_0 ∈ R^{3×H×W}, the LDM initially utilizes a pre-trained VAE to encode x_0 into a down-scaled latent representation z_0 ∈ R^{c×h×w}. The forward process of the LDM can be described as a Markov chain that incrementally introduces Gaussian noise into the latent representation:
q(z_t | z_{t-1}) = N(z_t; √(1 - β_t) z_{t-1}, β_t I), (1)
where t = 1, ..., T and T denotes the total number of timesteps. β_t is a coefficient that controls the noise strength at step t. The iterative noise adding can be simplified as:
z_t = √ᾱ_t z_0 + √(1 - ᾱ_t) ϵ, ϵ ∼ N(0, I), (2)
where ᾱ_t = ∏_{i=1}^{t} (1 - β_i). During training, the LDM learns the latent-space distribution of the real data by predicting the noise ϵ added to z_t; operating in this latent space reduces the computational complexity of diffusion models. 
The objective function can be written as:
l_ϵ = ||ϵ - ϵ_θ(z_t, t, c)||_2^2, (3)
where ϵ_θ(·) denotes the noise prediction function of the diffusion model, which is implemented with a U-Net [44] architecture. To control the generation process flexibly, the LDM employs a domain-specific encoder to map the user-input condition c into an intermediate representation, which is then injected into the U-Net via a cross-attention layer.
Video diffusion models [11, 17, 48, 52] expand upon the image LDM by incorporating a 3D U-Net, enabling them to effectively handle video data. The 3D U-Net adds an extra temporal convolution after each spatial convolution and a temporal attention block after each spatial attention block. To inherit the generation capacity learned from image data, the 3D U-Net is trained concurrently on both image and video data." }, { "figure_ref": [ "fig_1" ], "heading": "Image Animation with Video Diffusion Model", "publication_ref": [ "b40", "b23", "b29" ], "table_ref": [], "text": "We employ the LDM VAE [43] to encode the reference image into a latent representation, denoted as z_ref, in order to retain more appearance details. The VAE is trained for image reconstruction, so z_ref contains rich low-level image features. Although it may contain less semantic information than CLIP [41] vision tokens, the diffusion model itself has demonstrated powerful semantic understanding capabilities in tasks such as semantic segmentation [24,30]. As illustrated in Figure 2, our training pipeline uses the reference image as the initial frame and adopts an auto-regressive strategy to forecast subsequent frames, facilitating image animation without extra model parameters. The first frame's content is propagated to later frames via the temporal convolution and attention mechanisms. Consequently, only the temporal layers are fine-tuned, while the spatial layers remain frozen. At each time step t, we concatenate the clean z_ref with the noisy latent z_t, which contains N frames, resulting in a latent representation with (N + 1) frames as input. We then select only the last N frames from the denoised z_t. " }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Motion Area Guidance", "publication_ref": [ "b2" ], "table_ref": [], "text": "We introduce motion area guidance to provide users with precise control over the movable area of the input image. As shown in Figure 2, we concatenate the motion area mask with the video latent along the channel dimension. Drawing inspiration from ControlNet [63], we initialize the convolution kernel of the mask channel with zeros to maintain the original video generation capability.
We construct training pairs of videos and corresponding motion area masks from real videos as follows. First, we convert the given video sample with N frames to gray-scale. Then, we compute the frame differences that exceed a threshold value T_m. These differences are combined to create the binary difference mask d:
d = ⋃_{i=1}^{N-1} (|x^i_gray - x^{i-1}_gray| > T_m), (4)
where x^i_gray is the gray-scale version of the i-th frame. The threshold T_m determines how much motion is tolerated in both movable and non-movable areas. If T_m is set too high, objects in non-movable areas may still appear to move. Conversely, if T_m is set too low, objects in non-movable areas might be completely frozen, potentially causing image artifacts near the boundary of the motion area mask. 
Subsequently, we identify the contours of these difference regions in d and construct the motion area mask m by assigning label 1 to the pixels contained within these contours, indicating the movable area. Finally, given the motion area mask m, we post-process the video latent z_0 such that pixels in the non-movable area are reset to the values of the first frame z^0_0:
z'_0 = (1 - m) · z^0_0 + m · z_0. (5)
We use z^i_t to denote the i-th frame of the video latent at time step t. As illustrated in Section 4.4, this post-processing step significantly enhances the effectiveness of the motion area guidance. To address subtle movements imperceptible to the human eye, which should not be marked as part of the movable area, we explicitly instruct the model to keep these pixels unchanged. The motion threshold T_m is adjusted to ensure that the visual differences between the reconstructed video z'_0 and z_0 remain reasonably small. The impact of motion area guidance on the animation results is demonstrated in Figure 3." }, { "figure_ref": [ "fig_3" ], "heading": "Motion Strength Guidance", "publication_ref": [], "table_ref": [], "text": "During training, we observed that the sampling FPS affects the motion speed of the movable objects in the generated videos. However, using FPS alone as the motion speed guidance for video generation is inadequate, as videos with the same FPS may exhibit varying motion speeds depending on their content. Therefore, FPS alone cannot effectively regulate the speed of object motion. Consequently, we propose a metric called motion strength s to quantitatively measure the motion speed of the target motion area:
s(z) = 1/(N-1) ∑_{i=1}^{N-1} |z^i - z^{i-1}|. (6)
Here, the motion strength quantifies the differences between frames in the latent space. Similar to the time step, we project the motion strength into a positional embedding and add it to each frame in the residual blocks to ensure uniform application of the motion strength to every frame. The impact of motion strength guidance on the animation results is illustrated in Figure 4.
Training our motion-strength-guided pipeline directly with the noise prediction loss defined in Equation 3 converges poorly. This is likely because the noise prediction loss is primarily influenced by the frame-level image difference and does not directly supervise the inter-frame difference. Therefore, we introduce a motion strength loss to directly supervise the inter-frame difference:
l_s = ||s(z_0) - s(ẑ_0)||_2^2, (7)
where ẑ_0 represents the model's estimate of the clean video latent z_0, which can be obtained by rearranging Equation 2 as:
ẑ_0 = (z_t - √(1 - ᾱ_t) ϵ_θ(z_t, t, c)) / √ᾱ_t. (8)
Finally, we combine the noise prediction loss and the motion strength loss with a scaling factor λ:
l = l_ϵ + λ · l_s. (9)" }, { "figure_ref": [], "heading": "Guidance Composition", "publication_ref": [], "table_ref": [], "text": "Our image animation model integrates guidance from reference images, text, motion areas, and motion strength. During training, we vary the textual prompt and motion area to allow the model to accept different combinations of guidance during inference. However, conflicting guidance inputs can diminish their individual effects. For instance, if the text prompt does not align with the content of the reference image, the model prioritizes fidelity to the image. Fortunately, with motion area guidance, objects outside the motion area mask are completely frozen, allowing for interactive editing of the generated animation. 
As demonstrated in the last row of Figure 1, different objects can be animated with different texts. " }, { "figure_ref": [], "heading": "Shared Noise Inference", "publication_ref": [ "b14" ], "table_ref": [], "text": "During training, we construct the input latent by adding noise to the clean video latent. This noise schedule leaves some residual signal even at the terminal diffusion timestep T. As a consequence, the diffusion model fails to produce faithful image animation at test time when we sample from pure random Gaussian noise without any real-data signal.
To resolve this train-test discrepancy, during testing we obtain the base noise latent by adding noise to z_ref using the forward process of DDPM [15]. The DDPM independently samples random noise ϵ_i for each frame, allowing for frame diversity. The noise latent for frame i can be expressed as:
z^i_T = √ᾱ_T z_ref + √(1 - ᾱ_T) ϵ_i, (10)
where ᾱ_T denotes the diffusion factor. This approach combines the base signal from z_ref with frame-specific diversity introduced through the random noise ϵ_i, achieving a balance between preserving the reference image information and allowing variation across frames. This design decision is critical for high-fidelity image animation." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b21", "b1" ], "table_ref": [], "text": "Datasets. Our model is initialized from VideoComposer [22], which is pretrained on WebVid10M [2]. Then we finetune it on 20K videos randomly sampled from HD-VILA-100M [61] to remove the watermark. " }, { "figure_ref": [], "heading": "Quantitative Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_4" ], "heading": "Qualitative Results", "publication_ref": [ "b0", "b4" ], "table_ref": [], "text": "We present several visual examples comparing our method with three baselines, VideoComposer [52], Gen-2 [1] and VideoCrafter1 [5], in Figure 5. All of these methods can generate videos from a reference image, showcasing the significant progress in image-conditioned video generation. VideoComposer and VideoCrafter1 achieve satisfactory levels of fluency but lose details of the reference image. In contrast, our proposed method preserves the latent representation of the first frame and denoises the subsequent frames, which plays a fundamental role in ensuring faithful video generation. Although Gen-2 preserves temporal consistency relatively well, it falls short in effectively controlling the motion amplitude of the generated videos. In contrast, our method offers the flexibility of adjusting motion strength, where larger strengths correspond to greater motion amplitudes." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b4" ], "table_ref": [ "tab_3", "tab_4" ], "text": "Image condition. We analyze various design strategies for injecting reference image information in Table 2. The \"CLIP Vision Global Token\" method used by VideoComposer [52] encodes the input image into vision tokens with the CLIP image encoder and injects only the global [CLS] token into the U-Net through cross-attention. The \"CLIP Vision Full Tokens\" method, used by VideoCrafter1 [5], projects and concatenates all vision tokens with the text embedding. 
As shown in Figure 5, CLIP vision tokens contain rich semantic information thanks to pretraining on text-image pairs, but image details such as background texture and facial details may be lost. Using all vision tokens also incurs a higher computation cost for the cross-attention between the latent and the additional vision tokens. To preserve more image details, we first try the \"Concat Latent Spatial\" method, which concatenates the reference image latent from the VAE with every frame along the channel dimension. However, we notice that this method limits the diversity of video motion when generating long videos. Finally, we concatenate the input image latent and the noise latent along the temporal dimension. As shown in \"Concat Latent Temporal\", this method produces videos with better image fidelity and achieves better inference efficiency without using the additional ViT vision encoder. Motion area guidance. A straightforward approach to freezing the area outside the motion area would be to apply the motion mask directly to the noisy video latent, so that the latent values outside the motion area are kept identical. However, this method produces videos that barely move, since the frozen noise latent is no longer Gaussian noise, which is inconsistent with the training setting. In Table 3, the \"Mask Guidance\" method concatenates the motion area mask and the noise latent as the input to the U-Net. \"Mask Guidance + Freeze\" is our proposed method, in which the latent values in non-movable areas are frozen and set to be the same as the first frame. We observe that the strategy of freezing non-movable areas helps the model learn the correspondence between the motion area mask and the target video more easily." }, { "figure_ref": [], "heading": "Motion area guidance", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "Motion strength guidance. We compare different design choices for motion strength guidance in Table 4. Since the initial noisy video latent is generated by adding noise to z_ref, we can exert limited control over the motion strength through the amount of added noise, as indicated by \"Adding Noise\". The performance of FPS guidance depends largely on the video content, so its variance is large. Compared to FPS, our motion strength guidance performs more stably and achieves a lower motion strength error." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [], "table_ref": [], "text": "In this paper, we have presented an open-domain image animation pipeline built on a video diffusion model, incorporating motion area and motion strength guidance. While our method has shown promising results in achieving fine-grained and interactive image animation, it is important to acknowledge that our model has not been trained on high-resolution videos due to limited training resources. This limitation constrains the applicability and performance of our method in generating high-resolution animations." } ]
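To make the motion-area training data preparation concrete (the binary difference mask of Eq. 4, the contour filling, and the latent freezing of Eq. 5 used by the "Mask Guidance + Freeze" variant ablated above), here is a minimal sketch. The threshold value, the OpenCV-based contour filling, and the tensor shapes are illustrative assumptions rather than the authors' exact pipeline.

```python
import cv2
import numpy as np
import torch

def build_motion_area_mask(frames: np.ndarray, t_m: float = 10.0) -> np.ndarray:
    """frames: (N, H, W, 3) uint8 RGB video clip.

    Returns an (H, W) binary mask: 1 inside the contours of regions whose inter-frame
    gray-scale difference exceeds the threshold t_m (Eq. 4), 0 elsewhere.
    """
    gray = np.stack([cv2.cvtColor(f, cv2.COLOR_RGB2GRAY) for f in frames]).astype(np.int16)
    diff = (np.abs(np.diff(gray, axis=0)) > t_m).any(axis=0).astype(np.uint8)  # union over frame pairs
    contours, _ = cv2.findContours(diff, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    mask = np.zeros(diff.shape, dtype=np.uint8)
    cv2.drawContours(mask, contours, -1, color=1, thickness=cv2.FILLED)        # fill contour interiors
    return mask

def freeze_non_movable(z0: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """z0: (N, C, h, w) clean video latent; mask: (1, 1, h, w) float tensor in {0, 1},
    resized to the latent resolution.

    Implements Eq. 5: latent values outside the motion area are reset to the first frame.
    """
    return (1.0 - mask) * z0[:1] + mask * z0
```

In practice the mask would also be downsampled to the latent resolution before being concatenated with the video latent along the channel dimension, as described in Section 3.3.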
Image animation is a key task in computer vision that aims to generate dynamic visual content from a static image. Recent image animation methods employ neural-based rendering techniques to generate realistic animations. Despite these advancements, achieving fine-grained and controllable image animation guided by text remains challenging, particularly for open-domain images captured in diverse real environments. In this paper, we introduce an open domain image animation method that leverages the motion prior of a video diffusion model. Our approach introduces targeted motion area guidance and motion strength guidance, enabling precise control of the movable area and its motion speed. This results in enhanced alignment between the animated visual elements and the prompting text, thereby facilitating a fine-grained and interactive animation generation process for intricate motion sequences. We validate the effectiveness of our method through rigorous experiments on an open-domain dataset, with the results showcasing its superior performance.
AnimateAnything: Fine-Grained Open Domain Image Animation with Motion Guidance
[ { "figure_caption": "Figure 1 .1Figure 1. Examples of our method for image animation. The first three rows illustrate the use of prompt text to animate the reference image, with the 6th, 11th, and 16th frames of the generated animation visualized. The last three rows demonstrate the precise control of movable objects using motion mask guidance in images containing multiple objects. The last two rows show the iterative generation of animation using red and green prompts to animate objects in the corresponding masks. Additional examples are provided in the supplemental material.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure2. Overview of our pipeline. We adopt the widely used 3D U-Net based video diffusion model[11, 52] for image animation. Given a noisy video latent with shape (frames, height, width, channel), we concatenate the clean latent of the reference image and the noisy frames in the temporal dimension. Additionally, we concatenate the motion area mask with the video latent in the channel dimension. This results in the input latent with shape (frames+1, height, width, channel+1) for the 3D U-Net. To control the motion strength of the generated video, we project the motion strength as positional embedding and concatenate it with the time step embedding.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Motion mask guidance examples. The first column and second column are the input mask and motion mask respectively. The user can specify one or multiple movable areas in the motion mask to fine grained control the video generation.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Motion strength guidance examples. Augmenting the motion strength accelerates the alteration of Mona Lisa's expression, but excessive motion strength may lead to the loss of finegrained facial details.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Qualitative Results. Comparing to open sourced methods Video Composer and VideoCrafter1, our method achieves higher image fidelity. Comparing to the commercial product Gen-2, our method achieves higher frame consistency.finetune it on 20K videos randomly sampled from HD-VILA-100M [61] to remove watermark. Following[48,51,56], we conduct evaluation on MSR-VTT dataset[60] in a zero-shot setting. MSR-VTT is an open domain video retrieval dataset, where each video clip is accompanied by 20 natural sentences for description. Typically, the textual descriptions corresponding to the 2,990 video clips in its test set are utilized as prompts to generate videos. Evaluation Metrics. Following previous methods[11,48], we use Frechet Video Distance (FVD)[50] to measure the video generation quality. We also use the Frame Consistency measured via CLIP cosine similarity of consecutive frames to show the temporal consistency of video. We propose the metric Motion Mask Precision to measure the effect of motion mask guidance, which calculates the percentage of the moving area of generated video within the given motion area mask. To measure the effect of motion strength guidance, we propose Motion Strength Error, which is the mean squared error of the generated video motion strength and the given video motion strength. 
In this section, all statistics are collected by generating 16-frame videos in 256 × 256 resolution with DDIM[49] algorithm in 50 steps. Implements Details. We employ the AdamW[31] optimizer with a learning rate of 5 × 10 -5 for training our model. All experiments are conducted on a single NVIDIA A10 GPU, requiring approximately 20 GB of vRAM for training and 6 GB of vRAM for inference. To enhance the model performance, we conduct multiple frame rate sampling during training, utilizing various frame rates (e.g.,4,8,12) to obtain an 8-frame training clip with a resolution of 384 × 384. We train the model for 10,000 iterations with a", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "presents the quantitative results of the zero-shotvideo generation ability on MSR-VTT compared to ex-isting methods. To ensure a fair comparison, all modelsare evaluated on a resolution of 256 × 256. With thesame text and first frame condition, our method achieveslower FVD comparing to VideoComposer and concurrentwork VideoCrafter1, which is evident that integrating fine-grained spatial information into the conditions leads to asignificant improvement. This improvement suggests thatour method is capable of generating more coherent videosthan previous approaches.MethodConditions Params(B) FVD(↓)CogVideo [19]text15.51294LVDM [14]text1.16742MagicVideo [65]text-998VideoComposer [52]text1.85580VideoFusion [32]text1.83581ModelScope [51]text1.70550VideoComposer [52] text&image2.49551VideoCrafter1 [5]text&image3.24465Ourstext&image1.81443", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Video generation performance on the test set of MSR-VTT. \"Conditions\" denotes the type of condition for generation.", "figure_data": "MethodFVD Consistency Time(s) Mem(G)CLIP Vision Global Token 5510.87922.1311.1CLIP Vision Full Tokens 4570.91124.1211.7Concat Latent Spatial4500.91720.385.7Concat Latent Temporal4430.91621.545.9", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The performance comparison of image condition designs.", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Table 3 illustrates different design choices for motion area guidance. A simple and training-Ablation study of the motion mask guidance. Our proposed strategy demonstrates effective capability in following designated motion areas to generate corresponding animation videos.", "figure_data": "MethodMotion Mask PrecisionNo Control0.21Mask Guidance0.52Mask Guidance + Freeze0.82MethodMotion Strength ErrorNo Control14.19Adding Noise12.74FPS Guidance8.37Motion Strength Guidance4.82Motion Strength Guidance + Loss2.36", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablate the design of the motion strength guidance. Comparing to FPS guidance, our method offers greater flexibility in incorporating motion into animation videos.", "figure_data": "", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" } ]
Zuozhuo Dai; Zhenghao Zhang; Yao Yao; Bingxue Qiu; Siyu Zhu; Long Qin; Weizhi Wang; Alibaba Group
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Gen-2", "year": "2023-10-22" }, { "authors": "Max Bain; Arsha Nagrani; Gul Varol; Andrew Zisserman", "journal": "", "ref_id": "b1", "title": "Frozen in time: A joint video and image encoder for end-to-end retrieval", "year": "2021" }, { "authors": "Hugo Bertiche; J Niloy; Kuldeep Mitra; Chun-Hao P Kulkarni; Tuanfeng Y Huang; Meysam Wang; Sergio Madadi; Duygu Escalera; Ceylan", "journal": "", "ref_id": "b2", "title": "Blowing in the wind: Cyclenet for human cinemagraphs from still images", "year": "2023" }, { "authors": "Andreas Blattmann; Timo Milbich; Michael Dorkenwald; Bjorn Ommer", "journal": "", "ref_id": "b3", "title": "Understanding object dynamics for interactive image-to-video synthesis", "year": "2021" }, { "authors": "Haoxin Chen; Menghan Xia; Yin-Yin He; Yong Zhang; Xiaodong Cun; Shaoshu Yang; Jinbo Xing; Yaofang Liu; Qifeng Chen; Xintao Wang; Chao-Liang Weng; Ying Shan", "journal": "", "ref_id": "b4", "title": "Videocrafter1: Open diffusion models for high-quality video generation", "year": "2023" }, { "authors": "Mark Chen; Alec Radford; Rewon Child; Jeffrey Wu; Heewoo Jun; David Luan; Ilya Sutskever", "journal": "PMLR", "ref_id": "b5", "title": "Generative pretraining from pixels", "year": "2020" }, { "authors": "Chia-Chi Cheng; Hung-Yu Chen; Wei-Chen Chiu", "journal": "", "ref_id": "b6", "title": "Time flies: Animating a still image with time-lapse video as reference", "year": "2020" }, { "authors": "Yung-Yu Chuang; Dan B Goldman; Ke ; Colin Zheng; Brian Curless; Richard David H Salesin; Szeliski", "journal": "", "ref_id": "b7", "title": "Animating pictures with stochastic motion textures", "year": "2005" }, { "authors": "Yuki Endo; Yoshihiro Kanamori; Shigeru Kuriyama", "journal": "ACM TOG", "ref_id": "b8", "title": "Animating landscape: self-supervised learning of decoupled motion and appearance for single-image video synthesis", "year": "2019" }, { "authors": "Patrick Esser; Robin Rombach; Bjorn Ommer", "journal": "", "ref_id": "b9", "title": "Taming transformers for high-resolution image synthesis", "year": "2021" }, { "authors": "Patrick Esser; Johnathan Chiu; Parmida Atighehchian; Jonathan Granskog; Anastasis Germanidis", "journal": "", "ref_id": "b10", "title": "Structure and content-guided video synthesis with diffusion models", "year": "2023" }, { "authors": "Jiahao Geng; Tianjia Shao; Youyi Zheng; Yanlin Weng; Kun Zhou", "journal": "ACM TOG", "ref_id": "b11", "title": "Warp-guided gans for single-photo facial animation", "year": "2018" }, { "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "Communications of the ACM", "ref_id": "b12", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "Yingqing He; Tianyu Yang; Yong Zhang; Ying Shan; Qifeng Chen", "journal": "", "ref_id": "b13", "title": "Latent video diffusion models for high-fidelity video generation with arbitrary lengths", "year": "2022" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "Advances in neural information processing systems", "ref_id": "b14", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Jonathan Ho; William Chan; Chitwan Saharia; Jay Whang; Ruiqi Gao; Alexey Gritsenko; P Diederik; Ben Kingma; Mohammad Poole; David J Norouzi; Fleet", "journal": "", "ref_id": "b15", "title": "Imagen video: High definition video generation with diffusion models", "year": "2022" }, 
{ "authors": "Jonathan Ho; Tim Salimans; Alexey Gritsenko; William Chan; Mohammad Norouzi; David J Fleet", "journal": "NeurIPS", "ref_id": "b16", "title": "Video diffusion models", "year": "2022" }, { "authors": "Aleksander Holynski; Brian L Curless; Steven M Seitz; Richard Szeliski", "journal": "", "ref_id": "b17", "title": "Animating pictures with eulerian motion fields", "year": "2021" }, { "authors": "Wenyi Hong; Ming Ding; Wendi Zheng; Xinghan Liu; Jie Tang", "journal": "", "ref_id": "b18", "title": "Cogvideo: Large-scale pretraining for text-to-video generation via transformers", "year": "2022" }, { "authors": "Yaosi Hu; Chong Luo; Zhenzhong Chen", "journal": "", "ref_id": "b19", "title": "Make it move: Controllable image-to-video generation with text descriptions", "year": "2021" }, { "authors": "Yaosi Hu; Chong Luo; Zhenzhong Chen", "journal": "", "ref_id": "b20", "title": "Make it move: controllable image-to-video generation with text descriptions", "year": "2022" }, { "authors": "Lianghua Huang; Di Chen; Yu Liu; Yujun Shen; Deli Zhao; Jingren Zhou", "journal": "", "ref_id": "b21", "title": "Composer: Creative and controllable image synthesis with composable conditions", "year": "2023" }, { "authors": "Wei-Cih Jhou; Wen-Huang Cheng", "journal": "IEEE TMM", "ref_id": "b22", "title": "Animating still landscape photographs through cloud motion creation", "year": "2015" }, { "authors": "Laurynas Karazija; Iro Laina; Andrea Vedaldi; C Rupprecht", "journal": "", "ref_id": "b23", "title": "Diffusion models for zero-shot open-vocabulary segmentation", "year": "2023" }, { "authors": "Johanna Karras; Aleksander Holynski; Ting-Chun; Ira Wang; Kemelmacher-Shlizerman", "journal": "", "ref_id": "b24", "title": "Dreampose: Fashion image-to-video synthesis via stable diffusion", "year": "2023" }, { "authors": "Levon Khachatryan; Andranik Movsisyan; Vahram Tadevosyan; Roberto Henschel; Zhangyang Wang; Shant Navasardyan; Humphrey Shi", "journal": "", "ref_id": "b25", "title": "Text2video-zero: Text-toimage diffusion models are zero-shot video generators", "year": "2023" }, { "authors": "P Diederik; Max Kingma; Welling", "journal": "", "ref_id": "b26", "title": "Auto-encoding variational bayes", "year": "2013" }, { "authors": "Xingyi Li; Zhiguo Cao; Huiqiang Sun; Jianming Zhang; Ke Xian; Guo-Shing Lin", "journal": "", "ref_id": "b27", "title": "3d cinemagraphy from a single image", "year": "2023" }, { "authors": "Zhengqi Li; Richard Tucker; Noah Snavely; Aleksander Holynski", "journal": "", "ref_id": "b28", "title": "Generative image dynamics", "year": "2023" }, { "authors": "Ziyi Li; Qinye Zhou; Xiaoyun Zhang; Ya Zhang; Yanfeng Wang; Weidi Xie", "journal": "", "ref_id": "b29", "title": "Open-vocabulary object segmentation with diffusion models", "year": "2023" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b30", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "Zhengxiong Luo; Dayou Chen; Yingya Zhang; Yan Huang; Liang Wang; Yujun Shen; Deli Zhao; Jingren Zhou; Tieniu Tan", "journal": "", "ref_id": "b31", "title": "Videofusion: Decomposed diffusion models for high-quality video generation", "year": "2023" }, { "authors": "Aniruddha Mahapatra; Kuldeep Kulkarni", "journal": "", "ref_id": "b32", "title": "Controllable animation of fluid elements in still images", "year": "2022" }, { "authors": "Aniruddha Mahapatra; Aliaksandr Siarohin; Hsin-Ying Lee; S Tulyakov; Jun-Yan Zhu", "journal": "", "ref_id": "b33", "title": "Synthesizing artistic 
cinemagraphs from text", "year": "2023" }, { "authors": "Chong Mou; Xintao Wang; Liangbin Xie; Jian Zhang; Zhongang Qi; Ying Shan; Xiaohu Qie", "journal": "", "ref_id": "b34", "title": "T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models", "year": "2023" }, { "authors": "Haomiao Ni; Changhao Shi; Kai Li; Sharon X Huang; Martin Renqiang; Min ", "journal": "", "ref_id": "b35", "title": "Conditional image-to-video generation with latent flow diffusion models", "year": "2023" }, { "authors": "Alex Nichol; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b36", "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2021" }, { "authors": "Makoto Okabe; Ken Anjyo; Takeo Igarashi; Hans-Peter Seidel", "journal": "Comput. Graph. Forum", "ref_id": "b37", "title": "Animating pictures of fluid using video examples", "year": "2009" }, { "authors": "William Peebles; Saining Xie", "journal": "", "ref_id": "b38", "title": "Scalable diffusion models with transformers", "year": "2023" }, { "authors": "Dustin Podell; Zion English; Kyle Lacey; Andreas Blattmann; Tim Dockhorn; Jonas Müller; Joe Penna; Robin Rombach", "journal": "", "ref_id": "b39", "title": "Sdxl: improving latent diffusion models for high-resolution image synthesis", "year": "" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b40", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b41", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022-07" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Bjorn Ommer", "journal": "", "ref_id": "b42", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "Springer", "ref_id": "b43", "title": "Unet: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b44", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Tamar Rott Shaham; Tali Dekel; Tomer Michaeli", "journal": "", "ref_id": "b45", "title": "Singan: Learning a generative model from a single natural image", "year": "2019" }, { "authors": "Aliaksandr Siarohin; J Oliver; Jian Woodford; Menglei Ren; Sergey Chai; Tulyakov", "journal": "", "ref_id": "b46", "title": "Motion representations for articulated animation", "year": "2021" }, { "authors": "Uriel Singer; Adam Polyak; Thomas Hayes; Xi Yin; Jie An; Songyang Zhang; Qiyuan Hu; Harry Yang; Oron Ashual; Oran Gafni", "journal": "", "ref_id": "b47", "title": "Make-a-video: Text-to-video generation without text-video data", "year": "2022" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "ICLR", "ref_id": "b48", "title": "Denoising diffusion implicit models", "year": "2021" }, { 
"authors": "Thomas Unterthiner; Karol Sjoerd Van Steenkiste; Raphaël Kurach; Marcin Marinier; Sylvain Michalski; Gelly", "journal": "", "ref_id": "b49", "title": "Fvd: A new metric for video generation", "year": "2019" }, { "authors": "Jiuniu Wang; Hangjie Yuan; Dayou Chen; Yingya Zhang; Xiang Wang; Shiwei Zhang", "journal": "", "ref_id": "b50", "title": "Modelscope text-to-video technical report", "year": "2023" }, { "authors": "Xiang Wang; Hangjie Yuan; Shiwei Zhang; Dayou Chen; Jiuniu Wang; Yingya Zhang; Yujun Shen; Deli Zhao; Jingren Zhou", "journal": "", "ref_id": "b51", "title": "Videocomposer: Compositional video synthesis with motion controllability", "year": "2008" }, { "authors": "Yaohui Wang; Piotr Bilinski; Francois Bremond; Antitza Dantcheva", "journal": "", "ref_id": "b52", "title": "Imaginator: Conditional spatio-temporal gan for video generation", "year": "2020" }, { "authors": "Yaohui Wang; Di Yang; Francois Bremond; Antitza Dantcheva", "journal": "ICLR", "ref_id": "b53", "title": "Latent image animator: Learning to animate images via latent space navigation", "year": "2021" }, { "authors": "Chung-Yi Weng; Brian Curless; Ira Kemelmacher-Shlizerman", "journal": "", "ref_id": "b54", "title": "Photo wake-up: 3d character animation from a single photo", "year": "2019" }, { "authors": "Chenfei Wu; Jian Liang; Lei Ji; Fan Yang; Yuejian Fang; Daxin Jiang; Nan Duan", "journal": "Springer", "ref_id": "b55", "title": "Nüwa: Visual synthesis pretraining for neural visual world creation", "year": "2022" }, { "authors": "Jay Zhangjie Wu; Yixiao Ge; Xintao Wang; Stan Weixian Lei; Yuchao Gu; Yufei Shi; Wynne Hsu; Ying Shan; Xiaohu Qie; Mike Zheng Shou", "journal": "", "ref_id": "b56", "title": "Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation", "year": "2023" }, { "authors": "Wenpeng Xiao; Wentao Liu; Yitong Wang; Bernard Ghanem; Bing Li", "journal": "", "ref_id": "b57", "title": "Automatic animation of hair blowing in still portrait photos", "year": "2023" }, { "authors": "Wei Xiong; Wenhan Luo; Lin Ma; Wei Liu; Jiebo Luo", "journal": "", "ref_id": "b58", "title": "Learning to generate time-lapse videos using multi-stage dynamic generative adversarial networks", "year": "2018" }, { "authors": "Jun Xu; Tao Mei; Ting Yao; Yong Rui", "journal": "", "ref_id": "b59", "title": "Msr-vtt: A large video description dataset for bridging video and language", "year": "2016" }, { "authors": "Tiankai Hongwei Xue; Yanhong Hang; Yuchong Zeng; Bei Sun; Huan Liu; Jianlong Yang; Baining Fu; Guo", "journal": "", "ref_id": "b60", "title": "Advancing high-resolution video-language representation with large-scale video transcriptions", "year": "2022" }, { "authors": "Shengming Yin; Chenfei Wu; Jian Liang; Jie Shi; Houqiang Li; Gong Ming; Nan Duan", "journal": "", "ref_id": "b61", "title": "Dragnuwa: Fine-grained control in video generation by integrating text, image, and trajectory", "year": "2023" }, { "authors": "Lvmin Zhang; Anyi Rao; Maneesh Agrawala", "journal": "", "ref_id": "b62", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Yabo Zhang; Yuxiang Wei; Dongsheng Jiang; Xiaopeng Zhang; Wangmeng Zuo; Qi Tian", "journal": "", "ref_id": "b63", "title": "Controlvideo: Training-free controllable text-to-video generation", "year": "2023" }, { "authors": "Daquan Zhou; Weimin Wang; Hanshu Yan; Weiwei Lv; Yizhe Zhu; Jiashi Feng", "journal": "", "ref_id": "b64", "title": "Magicvideo: Efficient video generation with latent 
diffusion models", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 87.67, 462.5, 198.7, 9.65 ], "formula_id": "formula_0", "formula_text": "q(z t |z t-1 ) = N (z t ; 1 -β t z t-1 , β t I),(1)" }, { "formula_coordinates": [ 4, 84.88, 525.45, 201.48, 17.63 ], "formula_id": "formula_1", "formula_text": "z t = √ ᾱt z 0 + √ 1 -ᾱt ϵ, ϵ ∼ N (0, I),(2)" }, { "formula_coordinates": [ 4, 109.04, 549, 46.43, 14.11 ], "formula_id": "formula_2", "formula_text": "t i=1 (1-β t )." }, { "formula_coordinates": [ 4, 120.33, 620.03, 166.03, 12.69 ], "formula_id": "formula_3", "formula_text": "l ϵ = ||ϵ -ϵ θ (z t , t, c)|| 2 2 ,(3)" }, { "formula_coordinates": [ 5, 100.07, 602.61, 186.29, 30.32 ], "formula_id": "formula_4", "formula_text": "d = N -1 i=1 (|x i gray -x i-1 gray | > T m ),(4)" }, { "formula_coordinates": [ 5, 370.61, 511.76, 174.51, 12.69 ], "formula_id": "formula_5", "formula_text": "z ′ 0 = (1 -m) • z 0 0 + m • z 0 .(5)" }, { "formula_coordinates": [ 6, 105.95, 166.33, 176.54, 30.32 ], "formula_id": "formula_6", "formula_text": "s(z) = 1 N -1 N i=1 |z i -z i-1 |. (6" }, { "formula_coordinates": [ 6, 282.49, 177.06, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 6, 121.22, 383.93, 165.15, 12.69 ], "formula_id": "formula_8", "formula_text": "l s = ||s(z 0 ) -s(ẑ 0 )|| 2 2 ,(7)" }, { "formula_coordinates": [ 6, 104.91, 434.93, 177.59, 29.81 ], "formula_id": "formula_9", "formula_text": "ẑ0 = z 0 - √ 1 -ᾱt ϵ θ (z t , t, c) √ ᾱt . (8" }, { "formula_coordinates": [ 6, 282.49, 449.48, 3.87, 8.64 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 6, 139.02, 511.21, 147.34, 9.65 ], "formula_id": "formula_11", "formula_text": "l = l ϵ + λ • l s .(9)" }, { "formula_coordinates": [ 6, 365.46, 552.64, 179.65, 18.6 ], "formula_id": "formula_12", "formula_text": "z i T = √ ᾱT z ref + √ 1 -ᾱT ϵ i ,(10)" } ]
2023-11-25
[ { "figure_ref": [ "fig_0", "fig_2" ], "heading": "Introduction", "publication_ref": [ "b1", "b20", "b25", "b35", "b19", "b38", "b34", "b31", "b37", "b31", "b8", "b23", "b34", "b0", "b11" ], "table_ref": [], "text": "Large visual models [2,6,20,25,35] based on transformers, excel in various visual tasks, such as object detection [19], visual question answering [4], scene graph generation [38], and visual reasoning [34], by using large-scale unsupervised pre-training and supervised multi-task training. However, these end-to-end models do not essentially reveal internal logic and need fine-tuning for new tasks [26], which might be costly and challenging for complex and long-tailed tasks. In pursuit of task inference without additional training, interpretable programming-based approaches have been developed, allowing the expression of logic and reasoning for visual tasks through the assembled code modules. Previous works like Visual Programming [7] and ViperGPT [31], use code-generation models to compose vision-language models (VLMs) [37,40] for instance, uses a API to access modules and generates executable code. It requires no further training and leverages the expressive power of programming languages, making it effective for solving complex visual tasks.\nDespite their advantages, current programming-based methods tend to generate lines of atomic code sequentially in a single pass, without properly decomposing the task into smaller manageable subtasks and generating corresponding code blocks separately. Such a manner leads to two main issues: 1) Insufficient hierarchical task decomposition: Failing to plan the program structure in a hierarchical manner, previous works [7,31] largely struggle to handle complex tasks efficiently, especially for handling logically intricate tasks. This could potentially lead to sub-optimal or hard-to-maintain code and contradict the original design intention for the compositional task. Further, without a clear decomposition of the problem, identifying and fixing bugs becomes daunting as the error could be deeply embedded within intertwined program logic. Modular code, in con-trast, enables easier isolation and resolution of issues. 2) Ineffective intermediate feedback utilization: Since the program is generated in a single pass, programming-based methods fail to take advantage of intermediate results and system-returned values during program execution that can enhance code quality and facilitate debugging. This indicates a lack of adeptness in leveraging real-time feedback and adjusting code logic dynamically, which could otherwise lead to more refined and debuggable code outputs.\nDrawing an analogy from the process of human programmers [8,23], we consider four essential steps typically followed in program development: 1) Reference: search for the most relevant algorithm that aligns with the given task's logical structure to serve as a high-level logical reference. 2) Decomposition: based on the reference's structure, decompose and frame the programming task into several subtask modules, later constructing an executable draft program. 3) Feedback: obtain systematical feedback for program revision by comparing the expectation with program output, intermediate variables, and compiler return values. 4) Iteration: progressively refine the program based on the feedback until get the desired correct outcome. We exemplify the idea in Figure 1. For the query \"How many muffins on the table?\", one first retrieves a similar query (e.g. 
\"How many toys on the desk?\") and parses its logic structure as a reference. Then decompose the task into two subtasks: \"find the muffins on the table\" and \"count the number of muffins\". The programmer will further evaluate the output, intermediate results, and the program against the expectation, and finally refine the program iteratively until satisfied.\nInspired by this process in software engineering, we propose De-fine, a novel framework that decomposes the intricate tasks into executable program blocks by modeling the logic structure of relevant tasks and automatically refines the program based on multifaceted feedbacks from the execution. Our framework advances in relieving humans from the tedious process of converting ideas into programs. Specifically, as shown in Figure 2, De-fine generates a sketch-based logical prompt that reveals the internal task logic without redundancy. This prompt is selected based on semantic similarity to the task query and can imitate the logic of the program after sketching. Then, prompted by sketch-based examples, a large language model (LLM) can generate executable programs that decompose the tasks for the queries. After execution, De-fine automatically refines the program blocks by multifaceted feedback derived from the program results, intermediate variables, and compiler messages. These systematic feedbacks are summarized by categories through multiple targeted specific models.\nImportantly, the two core modules of De-fine collaborate well with each other. Mutually, the refinement part can extract applicable codes based on feedback and expand the codebase to a logically well-structured one which will be used as prompts for future tasks. This logical codebase even does not require manual annotation or any groundtruth programs. Reversely, the decomposition part also contributes to the refinement part by instructing the program to generate more detailed code that provides richer reference information for feedback. These two components are interdependent and synergistically reinforce each other.\nOur empirical evaluation on various benchmarks reveals that the proposed method is able to effectively decompose intricate multi-step problems and be adapted to various visual tasks such as image grounding [13], visual reasoning [34], and image question answering [1,11,22]. By capitalizing on multi-faceted feedback and the capabilities of other multi-modal language models, we achieve state-ofthe-art (SOTA) zero-shot results on five tasks without model fine-tuning or supervised training data.\nOverall, our contributions are as follows: " }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b32", "b3", "b12", "b14", "b21" ], "table_ref": [], "text": "Program generation and self-optimization have seen renewed momentum owing to the incredible understanding and generation capabilities of LLMs. We now discuss previous program generation, recent work in using LLMs for vision, and advances in self-refinement.\nProgram Generation. Visual program generation is an active research domain that aims to synthesize programs performing vision tasks using neural symbols [27] or Python modules [18,32]. This approach is based on the assumption [3,12] that vision tasks are inherently compositional and can be decomposed into atomic perceptual units like lines of code. Yet, complex tasks pose a challenge for this approach, as the generated code is often sub-optimal due to the insufficient semantic understanding of LLMs [14]. 
In contrast, De-fine can generate well-performing code with a hierarchical structure.\nVisual Programming with LLM. Programming-based methods [28,36,39] are scalable and interpretable for vision tasks, as they can incorporate any vision or language module with a predefined interface. Additionally, they enable fine-grained image processing and editing through code-level operations. The progress of program generation models [18] enhances the synthesis of programs for visual tasks without task-specific training. Nevertheless, they still require multiple manually labeled codes as context-learning examples, while De-fine can automatically retrieve relevant examples to assist in program generation.\nRefinement with Auto-feedback. Even for human programmers, there is no guarantee that the code written on the first try is always accurate. Therefore, we hope the model can provide multifaceted feedback and refine its previously generated program. Previous work like Self-debug [5] uses the error message of the program as feedback to modify the code generated. Self-refine [21] optimizes the output through feedback and refinement iteration. However, this feedback comes from a single modality of text and is only generated from the final result. In contrast, De-fine can provide feedback on variables during code execution and generate feedback for different variable types, enabling it to handle visual, textual, and error messages." }, { "figure_ref": [ "fig_2" ], "heading": "Methodology", "publication_ref": [ "b33", "b37" ], "table_ref": [], "text": "To address the limitations of current programming-based approaches in insufficiently decomposing tasks and utilizing feedback, we propose De-fine, a novel decomposing and refining framework. Our method is shown in Figure 2. Given a task query and a visual input, De-fine first retrieves the Top-K most semantically similar programs from the codebase to provide examples of logical structure. It next uses a sketcher [16] to transform these relevant programs into a sketch-based logical prompt that guides the decomposition and well-structured program synthesis (Section 3.1). During execution, the specified feedback models [33,37] take the program results, intermediate variables, and compiler messages as inputs, generating multifaceted feedback on substeps (Section 3.2). De-fine then automatically refines the program according to the systematic feedback to produce well-performing code (Section 3.3). Moreover, these self-improved high-quality programs enrich the codebase for future use (Section 3.4). The whole process does not require any additional input or annotation and relies solely on the self-optimization of the model." }, { "figure_ref": [], "heading": "Sketch-based Logical Prompt Generation", "publication_ref": [ "b10" ], "table_ref": [], "text": "To better decompose the task and reveal its logical structure, we retrieve programs similar to the task query from the codebase as examples for post-processing. Nevertheless, the programs may contain redundant information (e.g. variable names) that impairs model reasoning. In addition, the programs may be too long to fit within the token limit, preventing us from providing enough examples. Therefore, we designed a sketch-based logical prompt, which replaces the irrelevant parts of the retrieved program with placeholders to simplify the prompt and retain the logical information, including useful comments for reasoning.\nRetrieve Query-relevant Codes. Given a textual query q about the visual input, we first use a retriever [16] R to select the Top-K similar code snippets Z = R(B, q), where Z = {z_1, z_2, . . . , z_K} and z_i represents the i-th code snippet corresponding to a description similar to q from a codebase B.\nThe objective of the retriever is to select code that shares a similar logical structure with the given task as a high-level logic reference. 
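As a concrete illustration of this retrieval step and the subsequent sketching, below is a minimal sketch of how R(B, q) and S(Z) could be realized with an off-the-shelf sentence encoder; the encoder choice, the codebase schema, and the placeholder rule are illustrative assumptions and not De-fine's actual retriever or sketcher.

```python
import re
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed off-the-shelf encoder

def retrieve_top_k(codebase, query, k=2):
    """codebase: list of dicts {'description': str, 'program': str}.
    Rank stored programs by the similarity of their descriptions to the query."""
    descs = [entry["description"] for entry in codebase]
    scores = util.cos_sim(encoder.encode(query, convert_to_tensor=True),
                          encoder.encode(descs, convert_to_tensor=True))[0]
    top = scores.topk(min(k, len(codebase))).indices.tolist()
    return [codebase[i]["program"] for i in top]

def sketch(program):
    """Crude sketcher: replace task-specific string literals with <pad> placeholders
    while keeping comments and control structure as the logical skeleton."""
    return re.sub(r'"[^"]*"', "<pad>", program)

def build_prompt(codebase, query, k=2):
    sketches = [sketch(p) for p in retrieve_top_k(codebase, query, k)]
    return "\n\n".join(sketches) + f"\n\n# Query: {query}\n# Write the program:\n"
```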
Following the intuition from previous studies [9,10], we assume that code snippets with similar natural language descriptions are likely to have similar functionality and structure. Therefore, we use the task description as a query to perform a semantic search over the natural language descriptions in the retrieval codebase. Then, we retrieve the code snippets that correspond to the most similar descriptions as candidates of similar code.\nExtract Sketch-based Prompt. After getting candidates, a sketcher [16] S extracts sketched codes Ẑ = S(Z), where Ẑ = {ẑ_1, ẑ_2, . . . , ẑ_K} and ẑ_i denotes each sketched code after padding. This sketch-based prompt, due to its ability to reveal internal logic and its well-organized structure, can guide the model to decompose tasks hierarchically.\nThe sketching process retains the relevant logic from similar code snippets while discarding the irrelevant parts (e.g. variable names) that are not pertinent to the current task. The relevant parts can provide a canonical structure of the code to guide the code generation model in producing draft code. The tokens in the sketch-based prompt are either preserved if they are indispensable and useful code templates that convey valuable knowledge or substituted with placeholders (<pad>) if they are not.\nWith the sketch-based prompt, we can generate a program z = π(q, Ẑ) by a program generator (e.g. ChatGPT [24]) π given the task query q. When generating, we instruct the model to produce a coherent program and insert comments before each code block, which will facilitate the extraction and feedback of relevant information." }, { "figure_ref": [], "heading": "Multifaceted Feedback Generation", "publication_ref": [ "b37", "b33" ], "table_ref": [], "text": "During program execution, De-fine takes advantage of intermediate results and system return values to enhance code quality and facilitate debugging. To achieve this, we systematically define several types of feedback: 1) Visual Feedback, 2) Textual Feedback, 3) Error Feedback, and 4) Human Feedback (optional), which can dynamically adjust code logic based on the execution outcomes. Each type of feedback is generated by a corresponding feedback generator, leading to more refined and debuggable code outputs.\nSpecifically, after getting the program z, we apply an execution engine ϕ(z, x) and a feedback generator G(ϕ) to execute the program z on the input image x. The generator G extracts intermediate variables V = {v_1, v_2, . . . , v_n} (e.g. image patch, string, comment) and generates feedback F = {F_visual, F_textual, F_error, F_human}. These intermediate variables and feedback help De-fine assess the correctness of the program z concerning the substeps and the complex query. We categorize them into four types according to the return value of each execution step, and design corresponding feedback for them respectively:\nVisual Feedback: The execution of the grounding functions and the finding functions in the program z returns image patches as intermediate variables. We use a VLM (e.g. mPLUG-Owl [37]) that takes the intermediate image variables as input to generate Visual Feedback for two purposes: 1) Image caption extraction: The VLM generates captions for the input image x and the image patches in V. This allows us to extract the information from the images in textual form, which can reduce the ambiguity problem that may arise from relying solely on the query in the code generation process. 
2) Substep verification: The VLM verifies whether the image patches in V match the expected results of each substep in z. For example, if a substep is supposed to crop a face, the VLM checks whether the cropped image contains a face or not and generates Visual Feedback accordingly.\nTextual Feedback: We use the reasoning power of language models (e.g. LLaMA [33]) to provide Textual Feedback for two purposes: 1) Logical question answering: We ask a language model to answer logical questions about the text output, such as how the intermediate variables V are related to the final answer, and whether they match the substep reasoning process. 2) Text summarization: At the atomic code level, the program may produce verbose and repetitive output strings if there is a loop in the program. At the overall program level, in some multi-step tasks, we also need to verify whether the reasoning between steps in the comments is correct. We summarize these string outputs with a language model and generate corresponding Textual Feedback at a higher level.\nError Feedback: When the compiler encounters syntax or semantic errors in the source code, such as missing semicolons, undeclared variables, or incompatible data types, it generates an error message. This message provides Error Feedback that can be used to modify or correct the erroneous code. Besides, if the source code lacks a function definition, we have to add the corresponding function to the API instructions manually.\nHuman Feedback: De-fine can also optionally incorporate feedback directly from human programmers. In programming, users can iteratively modify the program according to their intentions until it produces the desired outcome. This intention, which we term Human Feedback, is often more explicit and facilitates human-in-the-loop inference. De-fine can draw on human knowledge and experience, as well as human creativity and heuristic thinking, to offer high-quality feedback.\nThe above feedback generated by De-fine can consolidate the correct steps, clarify the ambiguous parts, and correct the wrong parts simultaneously. This enables the exploitation of the program itself and its intermediate variables, providing multifaceted feedback for the automatic refinement process of the model." }, { "figure_ref": [], "heading": "Automatic Code Refinement", "publication_ref": [], "table_ref": [], "text": "In the previous step, we obtained feedback that integrates information from various sources, such as patches and strings from intermediate variables, the result output of the program, and compiler returns, into the next version of the program. This feedback will be used to optimize the draft program with the help of De-fine's refiner, which is especially useful for code optimization, as it can enhance the performance and logic of the program.\nThe automatic refiner reuses our previous code generation model π. Given a query q, an initial program z, and feedback f, the refiner π produces a refined program z* = π(q, z, f) that improves on z in terms of accuracy, conciseness, and logic. The execution engine ϕ then takes the input image x and the refined program z* and generates a result r = ϕ(x, z*) as the final output.\nOur method of program optimization has several distinctive features compared to existing methods. Unlike rule-based or heuristic-based methods, our method is data-driven and feedback-based, which enables it to adapt to different queries and programs and learn from the execution feedback. 
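The following is a minimal sketch of how the feedback types above can be collected and fed back into the generator for refinement (z* = π(q, z, f)); generate_program, execute, vlm_caption, and llm_judge are assumed interfaces standing in for the program generator, execution engine, captioning VLM, and feedback LLM, so their names and signatures are illustrative rather than De-fine's actual API.

```python
import traceback
from PIL import Image

def collect_feedback(intermediates, output, vlm_caption, llm_judge):
    """Turn intermediate variables and the final output into textual feedback:
    Visual Feedback for image patches, Textual Feedback for string values."""
    notes = []
    for name, value in intermediates.items():
        if isinstance(value, Image.Image):
            notes.append(f"[visual] {name}: {vlm_caption(value)}")
        elif isinstance(value, str):
            notes.append(f"[textual] {name}: {llm_judge('Does this intermediate result match its substep?', value)}")
    notes.append(f"[textual] final output: {output}")
    return "\n".join(notes)

def run_with_refinement(query, image, generate_program, execute, vlm_caption, llm_judge, iters=3):
    """Draft -> execute -> feedback -> refine loop; refinement applies z* = pi(q, z, f)."""
    program = generate_program(query)
    for _ in range(iters):
        try:
            output, intermediates = execute(program, image)
            feedback = collect_feedback(intermediates, output, vlm_caption, llm_judge)
        except Exception:
            feedback = "[error] " + traceback.format_exc(limit=1)   # Error Feedback
        program = generate_program(query, draft=program, feedback=feedback)
    output, _ = execute(program, image)     # final result r = phi(x, z*)
    return output, program
```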
Moreover, our method is holistic, not modular or local, meaning that it optimizes the entire program as a whole, rather than each substep or statement individually. Furthermore, our method is interactive and iterative, not one-shot or static, performing well in enhancing the program through multiple feedback from the execution engine or the user." }, { "figure_ref": [], "heading": "Codebase Evolution", "publication_ref": [], "table_ref": [], "text": "De-fine takes advantage of the optimized program that has a consistent and hierarchical structure refined by feedback. This type of program can produce optimal results and facilitate code reuse. By adding the optimized code back to the codebase, we achieve quality enhancement and reliability of the programs in the codebase over iteration.\nAs described in Section 3.1, De-fine retrieves programs from a codebase and constructs a sketch-based logical prompt. To generate this initial codebase, we use feedback to evaluate the generated program. We employ the same language model as in Section 3.2 to provide textual feedback. The model assesses whether the optimized code is more well-performing for solving the current task than the draft one and whether the comments clearly explain the solving logic in the current program structure. We also use an execution engine to run the program and verify its executability. In this way, the codebase can be expanded as samples are continuously inferred, providing more accurate retrieval and more relevant results in subsequent iterations.\nIt is worth noting that the codebase does not require manual annotation or a ground-truth program. The model completely generates, evaluates, filters, and updates the program, sparing humans from tedious processes. Experiments show that it is easy to obtain more high-quality programs. " }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "De-fine can perform various visual tasks without any task-specific training or ground-truth data. In this section, we present the experimental setup (Section 4.1) and evaluate our framework on three different tasks: (1) visual grounding (Section 4.2), (2) compositional visual question answering (Section 4.3), and (3) zero-shot reasoning on image pairs (Section 4.4)." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b37", "b33", "b31", "b5", "b1", "b9", "b11" ], "table_ref": [ "tab_3" ], "text": "Model Setup. For both the program generator and the refiner, we adopt the ChatGPT [24] language model. For visual feedback, we use mPLUG-Owl [37] as an image caption extraction tool and a logical judgment tool. For textual feedback, we utilize LLaMA [33] to extract the text output and intermediate variable returns in the program and generate corresponding feedback. See Appendix A.1 for more details about the parameters of experimented models.\nBaselines. We use the following baselines: Visual Programming [7], ViperGPT [31], BLIP-2 [15], Flamingo [2], GLIP [17], and ReCLIP [29]. See Appendix A.2 for more details about the baselines.\nDraft Codebase Setup. To construct the initial retrieval codebase, we evaluate the program by feedback from the GQA [11] task and take the best 5000. Each program has to meet two criteria: better performance on the current task than the former and clear comments explaining the logic behind each code block. The language model performs these filtering and commenting tasks automatically. We also compile them to guarantee their executability. 
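A minimal sketch of the codebase-evolution filter described in Section 3.4 and used for the draft codebase setup above: a refined program is added back to the retrieval codebase only if it executes and an LLM judge prefers it over the draft and finds its comments clear. The execute and llm_judge callables are assumed interfaces, not De-fine's released components.

```python
def maybe_add_to_codebase(codebase, query, draft, refined, image, execute, llm_judge):
    """Return True and store the refined program if it is executable and judged better
    than the draft (with clear block-level comments); otherwise leave the codebase as is."""
    try:
        execute(refined, image)                          # must run without errors
    except Exception:
        return False
    verdict = llm_judge(
        "Does the refined program solve the task better than the draft, with comments "
        "that clearly explain the logic of each block? Answer yes or no.",
        f"Task: {query}\n### Draft program:\n{draft}\n### Refined program:\n{refined}",
    )
    if verdict.strip().lower().startswith("yes"):
        codebase.append({"description": query, "program": refined})
        return True
    return False
```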
As shown in Table 1, De-fine achieves a significant improvement over existing models in visual grounding under zero-shot settings. A potential reason for the inferior performance of end-to-end models (e.g. GLIP, ReCLIP) is their lack of an explicit representation of the internal reasoning structure and their inability to leverage modular tools. In contrast, ViperGPT can access the available modules via a predefined API, but De-fine surpasses it by automatically optimizing the program and refining answers. This suggests that De-fine can validate and modify the generated program for better performance based on the feedback from intermediate variables at the refining stage. Furthermore, this result verifies the flexibility of De-fine, which can adapt to different queries and tasks by adjusting the program structure and parameters." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Compositional Visual Question Answering", "publication_ref": [ "b11", "b0" ], "table_ref": [ "tab_4", "tab_5", "tab_6" ], "text": "De-fine is a novel method specifically designed for complex visual question answering tasks. It can intuitively show the process in which a complex problem is decomposed step by step and the program is continuously improved by its own feedback. In this section, we demonstrate the effectiveness of our model on three datasets: GQA [11], OK-VQA [22], and TallyQA [1].\nAs shown in Tables 2 and 3, programming-based methods, despite employing large foundation models, are constrained by their single-pass approach and fail to outperform De-fine. Different from Visual Programming which takes a majority voting strategy for maximum consensus predictions for each query, De-fine enhances its performance by decomposing the task into finer-grained subtasks without using any ground-truth data and incorporating feedback from multiple variables in iteration. Despite using the same sketch-based prompt as a logical example for ViperGPT, De-fine consistently outperforms existing models by a large margin in all VQA tasks. De-fine can also perform error correction on the generated programs. By using the Error Feedback that we designed, the model can rapidly identify and fix the flaws in the program in the subsequent generation. Figure 4d shows a detailed illustration of the example.\nEffectiveness of Individual Components. We perform an ablation study on six configurations of our model on the GQA and OK-VQA tasks (Table 4): 0) backbone: only one-pass program generation and execution, 1) backbone + in-context prompt: with decomposition module and the in-context prompt that uses the retrieved sample directly, 2) backbone + sketch-based prompt: with decomposition module and the sketch-based prompt we defined, 3) backbone + decompose + feedback: with feedback generation and refinement of program, and 4) backbone + decompose + feedback + code evolution: with codebase updating for better search results. 5) We compare our approach with ViperGPT given the in-context prompt. The result provides strong evidence that the performance improvement is not attributable to the use of in-context examples as prompts, but rather to the structured format of the sketch-based prompt.\nBy analyzing the data in the table, we conclude that 1) the decompose module enables rapid extraction of the internal logic and effective decomposition of the task into multiple subtasks. 2) A non-redundant prompt truly guides the model to focus on logical imitation, which results in a significant improvement in solving complex problems. 3) Refinement by feedback provides the most enhancement, as feedback conveys high-level information and allows the model to revise and update the code, which can correct the potential errors or integrate information to obtain the correct answer. 4) By code evolution, the model can accumulate more experience stored in the codebase and utilize it for future problem-solving.\nAnalysis on the Number of Iterative Refinement. 
We show how the number of iterative refinements affects the performance in Figure 5a. The plot indicates that three iterations are sufficient to optimize the program, as the performance plateaus after that. Considering the token cost of accessing a large model, we adopt the outcomes of three iterations as our results. Analysis on the Number of Sketch-based Logical Prompts. To analyze how the number of prompts affects performance, we varied the number of sketch-based logical programs (0-3) as prompts for the code generation model. Figure 5b shows that two program examples are sufficient to showcase the capability of the model, so we adopt the Top-2 setting for our demonstrations. Human Evaluation. To conduct an error analysis for the GQA dataset, we follow the settings of Visual Programming and manually select 100 samples to identify the sources of errors. The results in Figure 6 indicate that our method outperforms Visual Programming in reducing the \"Incorrect Program\" errors, which demonstrates that De-fine can self-correct programs automatically and significantly." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper proposes De-fine, a method that decomposes complex tasks by constructing a sketch-based logical prompt to guide the code generation model to generate an executable program. After execution, De-fine automatically refines the program based on the four types of systematic feedback we design. Through extensive experimentation and analysis, we demonstrate that De-fine is a self-optimization method that is model-agnostic, scalable, and interpretable, with superior zero-shot capability." }, { "figure_ref": [], "heading": "Zero-shot Reasoning on Image Pairs", "publication_ref": [ "b30" ], "table_ref": [], "text": "We extend De-fine to accomplish reasoning on multiple images, not just one. Our model performs well on the NLVRv2 [30] benchmark, which involves verifying statements about image pairs; the results are shown in Table 3.\nCurrent visual models can process multiple images as input, yet they treat each image in isolation. The interrelation of different images relies on network fusion, which is affected by the sequence and quantity of images. De-fine synthesizes information from diverse modalities via feedback and offers a comprehensive correction proposal. This enables the model to improve its performance on multi-image tasks substantially." }, { "figure_ref": [], "heading": "In-Depth Analysis", "publication_ref": [], "table_ref": [], "text": "Qualitative Analysis. Figure 4 presents several qualitative examples of De-fine, which can automatically update the program by exploiting feedback from multiple modalities. The illustration demonstrates how our model can leverage the feedback to modify the logic and reasoning sequence of the program accordingly. A notable advantage of our approach is that it can incorporate human feedback to inject reasoning intent and knowledge more directly, enabling human-in-the-loop programming. Human Feedback en-" } ]
Visual programming, a modular and generalizable paradigm, integrates different modules and Python operators to solve various vision-language tasks. Unlike end-to-end models that need task-specific data, it performs visual processing and reasoning in an unsupervised manner. Current visual programming methods generate programs in a single pass for each task, lacking the ability to evaluate and optimize them based on feedback, which consequently limits their effectiveness for complex, multi-step problems. Drawing inspiration from Benders decomposition, we introduce De-fine, a general framework that automatically decomposes complex tasks into simpler subtasks and refines programs through auto-feedback. This model-agnostic approach can improve logical reasoning performance by integrating the strengths of multiple models. Our experiments across various visual tasks show that De-fine creates more accurate and robust programs, setting new benchmarks in the field.
De-fine: Decomposing and Refining Visual Programs with Auto-Feedback
[ { "figure_caption": "Figure 1 .1Figure 1. De-fine decomposes the tasks into executable program blocks and automatically refines the program based on multifaceted feedback from the execution.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. De-fine is a programming-based framework that can decompose intricate tasks and automatically refine the program. We summarize the process into four steps: (1) De-fine first retrieves the most relevant code and constructs a sketched-based logical prompt. (2) Then we generate the program and execute it. (3) During execution, De-fine automatically generates multifaceted feedback for optimizing. (4) De-fine selects the well-optimized code based on feedback and uses it to expand the codebase for future use. module with a predefined interface. Additionally, they enable fine-grained image processing and editing through code-level operations. The progress of the program generation model [18] enhances the synthesis of programs for visual tasks without task-specific training. Nevertheless, they still require multiple manually labeled codes as contextlearning examples. While De-fine can automatically retrieve relevant examples to assist in program generation.Refinement with Auto-feedback. Even for human programmers, there is no guarantee that the code written on the first try is always accurate. Therefore, we hope the model can provide multifaceted feedback and refine its previously generated program. Previous work like Self-debug [5] uses the error message of the program as feedback to modify the code generated. Self-refine[21] optimizes the output through feedback and refinement iteration. However, this feedback comes from a single modality of text and is only generated from the final result. In contrast, De-fine can provide feedback on variables during code execution and generate feedback for different variable types, enabling it to handle visual, textual, and error messages.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. A pipeline of sketch-based logical prompt generation.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Analysis on (a) the number of iterative refinement and (b) sketch-based logical prompts.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure6. Sources of error in GQA task.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "into subroutines and assemble a program for visual tasks. ViperGPT, † Corresponding Authors.", "figure_data": "Program", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Visual grounding task results. We report the accuracy of the REC task and testA split on the RefCOCO and RefCOCO+.", "figure_data": "IoU(%)RefCOCO RefCOCO+GLIP55.052.2ReCLIP58.660.5ViperGPT72.067.0De-fine74.168.3", "figure_id": "tab_3", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "GQA Results. We report accuracy on the GQA test-dev set. 
GT-data=ground-truth data, SP=sketch-based prompt.", "figure_data": "Accuracy(%) GT-data VotingVISPROG50.5BLIP-244.7ViperGPT48.1ViperGPT+SP49.8De-fine53.4", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Visual question answering and reasoning tasks results. We measure the accuracy (%) of the models on the val set of OK-VQA, the test set of TallyQA, and the test set of NLVRv2.We first compare several different models on the visual grounding task on RefCOCO and RefCOCO+ [13] datasets. This task requires the model to locate the object in an image that corresponds to a given natural language description, as well as to demonstrate the ability of the model to reason about spatial relations and visual features.As shown in Table", "figure_data": "Accuracy(%)OK-VQA TallyQA NLVRv2VISPROG52.666.062.4BLIP-245.948.4-Flamingo50.6--ViperGPT51.967.261.8ViperGPT+SP52.567.762.3De-fine55.470.763.54.2. Visual Grounding", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation results (%) on GQA and OK-VQA task.", "figure_data": "GQA OK-VQA0 Backbone48.251.81+ decompose (in-context example) 48.552.12+ sketch-based prompt49.553.23+ feedback52.655.04+ code evolution53.455.45 ViperGPT + in-context example48.752.4ables people to collaborate with De-fine. It allows usersto communicate their heuristic ideas and thinking processesto the model. Mutually, the model returns answers for ver-ification through program reasoning and output values fa-cilitating human-computer interaction. Figure", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" } ]
Minghe Gao; Juncheng Li; Hao Fei; Liang Pang; Wei Ji; Guoming Wang; Wenqiao Zhang; Siliang Tang; Yueting Zhuang
[ { "authors": "Manoj Acharya; Kushal Kafle; Christopher Kanan", "journal": "", "ref_id": "b0", "title": "Tallyqa: Answering complex counting questions", "year": "2018" }, { "authors": "Jean-Baptiste Alayrac; Jeff Donahue; Pauline Luc; Antoine Miech; Iain Barr; Yana Hasson; Karel Lenc; Arthur Mensch; Katherine Millican; Malcolm Reynolds; Roman Ring; Eliza Rutherford; Serkan Cabi; Tengda Han; Zhitao Gong; Sina Samangooei; Marianne Monteiro; Jacob L Menick; Sebastian Borgeaud; Andy Brock; Aida Nematzadeh; Sahand Sharifzadeh; Ricardo Mikoł Aj Bińkowski; Oriol Barreira; Andrew Vinyals; Karén Zisserman; Simonyan", "journal": "", "ref_id": "b1", "title": "Flamingo: a visual language model for few-shot learning", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b2", "title": "", "year": "2022" }, { "authors": "Jacob Andreas; Marcus Rohrbach; Trevor Darrell; Dan Klein", "journal": "", "ref_id": "b3", "title": "Neural module networks", "year": "2016" }, { "authors": "Stanislaw Antol; Aishwarya Agrawal; Jiasen Lu; Margaret Mitchell; Dhruv Batra; C Lawrence Zitnick; Devi Parikh", "journal": "", "ref_id": "b4", "title": "Vqa: Visual question answering", "year": "2015" }, { "authors": "Xinyun Chen; Maxwell Lin; Nathanael Schärli; Denny Zhou", "journal": "", "ref_id": "b5", "title": "Teaching large language models to self-debug", "year": "2023" }, { "authors": "Danny Driess; Fei Xia; S M Mehdi; Corey Sajjadi; Aakanksha Lynch; Brian Chowdhery; Ayzaan Ichter; Jonathan Wahid; Quan Tompson; Tianhe Vuong; Yu", "journal": "", "ref_id": "b6", "title": "Palme: An embodied multimodal language model", "year": "2023" }, { "authors": "Tanmay Gupta; Aniruddha Kembhavi", "journal": "", "ref_id": "b7", "title": "Visual programming: Compositional visual reasoning without training", "year": "2023" }, { "authors": "Stefan Haefliger; Georg ; Von Krogh; Sebastian Spaeth", "journal": "Management science", "ref_id": "b8", "title": "Code reuse in open source software", "year": "2008" }, { "authors": "B Tatsunori; Kelvin Hashimoto; Yonatan Guu; Percy S Oren; Liang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b9", "title": "A retrieve-and-edit framework for predicting structured outputs", "year": "2018" }, { "authors": "Anugrah Shirley; Raphael Hayati; Pravalika Olivier; Pengcheng Avvaru; Anthony Yin; Graham Tomasic; Neubig", "journal": "", "ref_id": "b10", "title": "Retrieval-based neural code generation", "year": "2018" }, { "authors": "A Drew; Christopher D Hudson; Manning", "journal": "", "ref_id": "b11", "title": "Gqa: A new dataset for real-world visual reasoning and compositional question answering", "year": "2019" }, { "authors": "Justin Johnson; Bharath Hariharan; Laurens Van Der Maaten; Judy Hoffman; Li Fei-Fei; C Lawrence Zitnick; Ross Girshick", "journal": "", "ref_id": "b12", "title": "Inferring and executing programs for visual reasoning", "year": "2017" }, { "authors": "Sahar Kazemzadeh; Vicente Ordonez; Mark Matten; Tamara Berg", "journal": "", "ref_id": "b13", "title": "Referitgame: Referring to objects in photographs of natural scenes", "year": "2014" }, { "authors": "Tushar Khot; Harsh Trivedi; Matthew Finlayson; Yao Fu; Kyle Richardson; Peter Clark; Ashish Sabharwal", "journal": "", "ref_id": "b14", "title": "Decomposed prompting: A modular approach for solving complex tasks", "year": "2022" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b15", "title": "Blip-2: Bootstrapping language-image 
pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "Jia Li; Yongmin Li; Ge Li; Zhi Jin; Yiyang Hao; Xing Hu", "journal": "", "ref_id": "b16", "title": "Skcoder: A sketch-based approach for automatic code generation", "year": "2023" }, { "authors": "Liunian Harold; Li ; Pengchuan Zhang; Haotian Zhang; Jianwei Yang; Chunyuan Li; Yiwu Zhong; Lijuan Wang; Lu Yuan; Lei Zhang; Jenq-Neng Hwang", "journal": "", "ref_id": "b17", "title": "Grounded language-image pre-training", "year": "2022" }, { "authors": "Yujia Li; David Choi; Junyoung Chung; Nate Kushman; Julian Schrittwieser; Rémi Leblond; Tom Eccles; James Keeling; Felix Gimeno; Agustin Dal Lago; Thomas Hubert; Peter Choy; Cyprien De Masson D'autume; Igor Babuschkin; Xinyun Chen; Po-Sen Huang; Johannes Welbl; Sven Gowal; Alexey Cherepanov; James Molloy; Daniel J Mankowitz; Esme Sutherland Robson; Pushmeet Kohli; Koray Nando De Freitas; Oriol Kavukcuoglu; Vinyals", "journal": "Science", "ref_id": "b18", "title": "Competitionlevel code generation with alphacode", "year": "2022" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b19", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Haotian Liu; Chunyuan Li; Qingyang Wu; Yong Jae Lee", "journal": "", "ref_id": "b20", "title": "Visual instruction tuning", "year": "2023" }, { "authors": "Aman Madaan; Niket Tandon; Prakhar Gupta; Skyler Hallinan; Luyu Gao; Sarah Wiegreffe; Uri Alon; Nouha Dziri; Shrimai Prabhumoye; Yiming Yang", "journal": "", "ref_id": "b21", "title": "Self-refine: Iterative refinement with self-feedback", "year": "2023" }, { "authors": "Kenneth Marino; Mohammad Rastegari; Ali Farhadi; Roozbeh Mottaghi", "journal": "", "ref_id": "b22", "title": "Ok-vqa: A visual question answering benchmark requiring external knowledge", "year": "2019" }, { "authors": "Audris Mockus", "journal": "IEEE", "ref_id": "b23", "title": "Large-scale code reuse in open source software", "year": "2007" }, { "authors": " Openai", "journal": "", "ref_id": "b24", "title": "Ghatgpt", "year": "" }, { "authors": " Openai", "journal": "", "ref_id": "b25", "title": "", "year": "2023" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b26", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Emilio Parisotto; Abdel-Rahman Mohamed; Rishabh Singh; Lihong Li; Dengyong Zhou; Pushmeet Kohli", "journal": "", "ref_id": "b27", "title": "Neuro-symbolic program synthesis", "year": "2016" }, { "authors": "Ishika Singh; Valts Blukis; Arsalan Mousavian; Ankit Goyal; Danfei Xu; Jonathan Tremblay; Dieter Fox; Jesse Thomason; Animesh Garg", "journal": "IEEE", "ref_id": "b28", "title": "Progprompt: Generating situated robot task plans using large language models", "year": "2023" }, { "authors": "Sanjay Subramanian; William Merrill; Trevor Darrell; Matt Gardner; Sameer Singh; Anna Rohrbach", "journal": "", "ref_id": "b29", "title": "Reclip: A strong zero-shot baseline for referring expression comprehension", "year": "2022" }, { "authors": "Alane Suhr; Stephanie Zhou; Ally Zhang; Iris Zhang; Huajun Bai; Yoav Artzi", "journal": "", "ref_id": "b30", "title": "A corpus for reasoning about 
natural language grounded in photographs", "year": "" }, { "authors": "Dídac Surís; Sachit Menon; Carl Vondrick", "journal": "", "ref_id": "b31", "title": "Vipergpt: Visual inference via python execution for reasoning", "year": "2023" }, { "authors": "Alexey Svyatkovskiy; Shengyu Shao Kun Deng; Neel Fu; Sundaresan", "journal": "Association for Computing Machinery", "ref_id": "b32", "title": "Intellicode compose: Code generation using transformer", "year": "2020" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b33", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Wenhui Wang; Hangbo Bao; Li Dong; Johan Bjorck; Zhiliang Peng; Qiang Liu; Kriti Aggarwal; Owais Khan Mohammed; Saksham Singhal; Subhojit Som", "journal": "", "ref_id": "b34", "title": "Image as a foreign language: Beit pretraining for all vision and visionlanguage tasks", "year": "2022" }, { "authors": "Teng Xi; Yifan Sun; Deli Yu; Bi Li; Nan Peng; Gang Zhang; Xinyu Zhang; Zhigang Wang; Jinwen Chen; Jian Wang", "journal": "Springer", "ref_id": "b35", "title": "Ufo: unified feature optimization", "year": "2022" }, { "authors": "Zhengyuan Yang; Zhe Gan; Jianfeng Wang; Xiaowei Hu; Yumao Lu; Zicheng Liu; Lijuan Wang", "journal": "", "ref_id": "b36", "title": "An empirical study of gpt-3 for few-shot knowledge-based vqa", "year": "2022" }, { "authors": "Qinghao Ye; Haiyang Xu; Guohai Xu; Jiabo Ye; Ming Yan; Yiyang Zhou; Junyang Wang; Anwen Hu; Pengcheng Shi; Yaya Shi", "journal": "", "ref_id": "b37", "title": "mplug-owl: Modularization empowers large language models with multimodality", "year": "2023" }, { "authors": "Qifan Yu; Juncheng Li; Yu Wu; Siliang Tang; Wei Ji; Yueting Zhuang", "journal": "", "ref_id": "b38", "title": "Visually-prompted language model for fine-grained scene graph generation in an open world", "year": "2023" }, { "authors": "Andy Zeng; Maria Attarian; Brian Ichter; Krzysztof Choromanski; Adrian Wong; Stefan Welker; Federico Tombari; Aveek Purohit; Michael Ryoo; Vikas Sindhwani", "journal": "", "ref_id": "b39", "title": "Socratic models: Composing zero-shot multimodal reasoning with language", "year": "2022" }, { "authors": "Deyao Zhu; Jun Chen; Xiaoqian Shen; Xiang Li; Mohamed Elhoseiny", "journal": "", "ref_id": "b40", "title": "Minigpt-4: Enhancing vision-language understanding with advanced large language models", "year": "2023" } ]
[]
2023-11-21
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b5", "b3", "b29" ], "table_ref": [], "text": "Existing rendering systems predominantly utilize polygonal geometric primitives, like triangles, and apply texture mapping to enhance visual appeal. These textures are sourced from photographs, manual paintings, or procedural computations [6]. However, most of these texture acquisition methods require domain specific expertise and manual efforts, which is of course, not as convenient as generating textures from a text prompt. In this paper, we introduce a method for generating textures for a 3D object based on text descriptions.\nThere exists a few works [4,30] trying to achieve the above goal. Typically, they generate a view using depthconditioned text-to-image (T2I) models, project this view onto the object's surface, and render the partially textured object from a rotated camera position. Next, the missing texture regions are filled using inpainting from the current viewpoint with the T2I model. These project-and-inpaint methods often result in texture inconsistencies, as shown in Fig. 5. Each view, diffused separately, lacks adequate consistency constraints, failing to ensure a seamless appearance from various viewpoints.\nThe primary issue, we believe, lies on the asynchronous diffusion among views. Note that even with depth map as guidance, it cannot fully align content nor avoid the error accumulation in sequential inpainting due to the limited context from previous textures. Our solution is a synchronized multi-view diffusion approach. This method achieves early-stage content consensus, essential for consistent structure and error correction across views.\nIn order to synchronize the diffusion among different views, it is necessary to allow the denoised content being shared among different views in each denoising step. The overlapped region among different views on the textured object (Fig. 2, left) serves as the information exchange sites. During each denoising step, we share (blend in our case) the latent from different views in the UV texture domain, if they have an overlap. The texture consensus can be obtained during the early stage of denoising as demonstrated in Fig 1a . Note that, all views share the same importance during this consensus process, i.e. no view is overriding the others during the denoising.\nWith the proposed approach, we obtain plausible and highly detailed textures for arbitrarily given 3D objects. Please refer to the numerous results presented in this paper and the supplement. We have evaluated the results generated by our synchronous multi-view diffusion, via quantitative and qualitative evaluations. Superior performance is achieved comparing to state-of-the-art methods. In summary, our contributions can be summarized as follow.\n• We identify the problem of existing methods stemmed from the asynchronous diffusion, and propose a novel synchronous multiview diffusion approach to generate text-guided texture on target 3D objects. • We proposed a practical solution to generate consistent, seamless, plausible, highly detailed textures given a text prompt. • We conduct extensive experiments on a variety of 3D meshes, which shows superior editing performance compared with state-of-the-art methods." 
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b0", "b6", "b14", "b15", "b4", "b7", "b37", "b13", "b34", "b39", "b16", "b18", "b31", "b22", "b26", "b34", "b29", "b3", "b30" ], "table_ref": [], "text": "Texture Synthesis Methods Early works on texture related topic mainly focused on generating 2D and 3D tileable texture maps from exemplars [1,7,15,16], and applying them to a given object in a seamless way [5,28]. Existing works also explored the correlation between surface structures and texture details, enabling geometry-aware synthesis and transfer of texture details [20,21,38]. However, both rule-based methods and early machine learning models are of limited capacity, synthesizing textures with rich semantics on complicated geometry is not feasible at that time.\nIn the recent few years, deep learning methods, especially convolutional neural networks exhibit their strong capabilities on image-related tasks. Texture synthesis methods in the recent years are based on popular generative models such as GANs [8,12], VAEs [14,37] and state-of-the-art diffusion model [10,35], to sample from a prior trained on voxels [34,42], point clouds [26,39,40] and implicit representations [9,17,19,25] of a textured 3D object. However, generating highly-detailed texture for a given mesh is seldom discussed, even it is demanded. Zero-shot Texturing Using Text-Image Prior Recently, priors trained on large scale text-image data empowered researches on various image generation and modification task. Due to the lack of large-scale 3D dataset with high quality annotation for training a prior natively in 3D, many previous 3D content generation works alternatively choose to use 2D priors, and achieved strong zero-shot ability when paired with carefully designed sampling pipelines. Following this idea, several works in texture generation [13,32,33] distill gradients from CLIP model [29], which trained with correlate text and image using contrastive learning. These gradients are used to update rendered views of a 3D object iteratively, to make it conforms to the given text.\nScore distillation methods [11,18,23,27] on the other hand, distill gradients from state-of-the-art image generation models, diffusion models [10,35]. For texture generation, SDS-based methods add noise to a rendered view of the current 3D object, and use the diffusion model to reconstruct the noiseless input. By this means, the result clean view is incorporated with prior knowledge from the diffusion model, and can be used to update the object texture. However, diversity and quality of their output, as well as the optimization time are unsatisfactory compared to normal diffusion inference on a 2D image.\nTEXTure [30] and Text2Tex [4] on the other hand, approached this problem by progressive inpainting with a depth-conditioned diffusion model [31]. Their methods start by generating a view of the object using the diffusion model with the depth map rendered from the mesh as control, and then projecting the screen-space textures onto the object surface. In every iteration, they rotate the object by an angle, and inpaint the partially visible texture from this new view. These methods achieve better texture sharpness and faster running time, but may suffer from obvious seams and over-fragmentation, due to the asynchronous nature of their diffusion processes. 
A recent work [3] attempts to address the inconsistency problem by performing a complete round of project-and-inpaint in each denoising time-step, in an auto-regressive manner. In contrast, our method treats each view equally, latent information from all views are fused in a synchronized fashion, in order to reach a consensus in terms of content structure and color distribution. We show that consensus can be quickly reached in the very early stage of denoising process (Fig. 1a)." }, { "figure_ref": [], "heading": "Diffusion Model Preliminaries", "publication_ref": [ "b30" ], "table_ref": [], "text": "Denoising Diffusion Models are generative neural networks trained to reverse a Markov process which diffuses its input data to pure Gaussian noise. During training, given input data x, a fixed forward process gradually destroys the information in each intermediate time step t and eventually approaches to a pure Gaussian noise x T at t = T . Then a neural noise predictor ϵ θ (x t , t) (e.g., a U-Net) is trained to estimate the true noise ϵ mixed into the input, given arbitrary time step t and corresponding noisy data x t . Inference can be performed by sampling from the Gaussian noise at time T , and iteratively removing part of the predicted noise to reach a fully denoised x 0 .\nIn our work, we utilize Stable Diffusion model [31], which is trained to denoise in low-resolution latent space z = E(x) encoded by a pre-trained VAE encoder E, as it can significantly reduce the computational cost. Then, an image can be generated through the following steps:\n1. Sample a noise image z T from the standard normal distribution." }, { "figure_ref": [], "heading": "For each intermediate diffusion time step t:", "publication_ref": [], "table_ref": [], "text": "(a) Given noisy latent image z t , the model predicts the noise ϵ θ (z t , t) in the current latent image. We can obtain a clean intermediate state z 0|t by removing the noise from z t (modifications on z 0|t can be applied to affect the subsequent denoising process). (b) Compute latent image z t-1 at the next time step t -1, which is a linear combination of z t and z 0|t using time-step-related coefficients. 3. Decode the fully denoised z 0 with the VAE decoder D to obtain the output image x = D(z 0 ).\nIn addition to text conditioning using built-in attention mechanisms, several external encoders and adapters [24, 41] have been designed to enable diffusion models to condition on other modalities. ControlNet as one of these methods, allows diffusion models to generate images conditioned on screen-space depth or normal maps." }, { "figure_ref": [ "fig_0" ], "heading": "Synchronized Multi-View Diffusion", "publication_ref": [], "table_ref": [], "text": "With the given object geometry and a known camera, the ground truth depth map or normal map can be rendered to condition the above 2D image generation, making it possible to generate 2D views of the desired textured 3D object. Therefore, we design an object texturing framework as follow, in order to perform zero-shot texture generation using a pretrained T2I diffusion model without texture domain knowledge. We first surround the target 3D object m with multiple cameras {v 1 , v 2 , • • • v n }, each covers part of the object. Then a T2I diffusion process is assigned to synthesize each of these 2D views {z\n(v1) t , z (v2) t , • • • , z (vn) t\n}, with text prompt y as guidance and conditional images (depth or normal map rendered from the corresponding viewpoints) as the condition. 
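Before synchronization is introduced, each of these views would follow the standard conditioned sampling loop summarized above. A minimal sketch of one such step for a single view is given below; `eps_model` is a stand-in for the depth- or normal-conditioned noise predictor (Stable Diffusion with ControlNet in this paper), and the DDIM-style coefficients are an assumption of the sketch rather than the paper's exact scheduler.

```python
import torch

def denoise_view_step(z_t, t, t_prev, alphas_cumprod, eps_model, text_emb, cond_map):
    """One deterministic (DDIM-style) denoising step for a single view.
    eps_model(z_t, t, text_emb, cond_map) is a placeholder for the conditioned U-Net."""
    a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]
    eps = eps_model(z_t, t, text_emb, cond_map)                  # predicted noise (step 2a)
    z0_t = (z_t - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()         # clean estimate z_{0|t}
    # z_{0|t} is the point where MVD later injects cross-view synchronization.
    z_prev = a_prev.sqrt() * z0_t + (1.0 - a_prev).sqrt() * eps  # linear combination (step 2b)
    return z_prev, z0_t
```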
With sufficient views, we can obtain the complete texture map covering the whole object by projecting the generated pixels from each view onto the object surface, which in turn can be mapped to the texture domain (UV space), as illustrated in Fig. 2.\nThe key challenge here is how to generate views that are consistent with each other. A texture map with severe fragmentation and seams may be produced when directly combining views generated through separate diffusion processes (Fig. 1b). One can use post-processing smoothing to reduce the obvious seams, but at the cost of over-blurriness. Our major contribution lies in how to ensure consistency among the generated views, and hence prevent obvious fragmentation and seams, so that sharp and highly detailed textures can be obtained.\nThe root cause of the inconsistency among generated views is that the corresponding diffusion processes of the views are performed in a more-or-less independent fashion. Note that the rendered conditional images are insufficient to ensure the consistency of textured views at medium or fine scales. In order to foster a consistent texture at different scales, we need to share latent information among views from the beginning of the denoising process (time step T). To do so, the latent values of the views are shared with each other at every denoising step. We call this synchronized multi-view diffusion (MVD). Such latent value sharing allows the diffusion processes of all views to reach a consensus on the generated texture, in terms of overall structure and color distribution, in the early stage of the denoising process. Fig. 1a visualizes how fast the consensus is reached in generating the texture, compared to the case without information sharing in Fig. 1b." }, { "figure_ref": [ "fig_1" ], "heading": "Multi-View Diffusion in Texture Domain", "publication_ref": [], "table_ref": [], "text": "Our Multi-View Diffusion module utilizes the overlapping regions in UV texture space for synchronization. Instead of pairwise screen-space warping, we use the UV mapping as a pre-established connection among all views. This mapping allows us to align predictions from multiple views to this canonical texture layout, and to distribute the synchronized results to each individual view through rendering. The detailed procedure is as follows.\nAt the initial time step T, we first initialize a latent texture W_T with the standard normal distribution. Then we render the source mesh from views V = {v_i}_{i=1}^{N} using this map as the texture, to obtain 3D-consistent initial views Z_T = {z_T^{(v_i)}}_{i=1}^{N} of the object. Background pixels are randomly sampled from the same noise distribution, and then composited with the rendered foreground object using the object mask.\nAt each time step t, we can compute the 3D-consistent Z_{t-1} of the next time step based on the noiseless views Z_{0|t} = {z_{0|t}^{(v_i)}}_{i=1}^{N} estimated by the model. But this computation is done separately for each view. To guarantee consistency among views, we project Z_{0|t} to UV texture space for this computation, as in the initialization step. Projecting Z_{0|t} to UV space yields the partially covered textures W_{0|t} = UV(Z_{0|t}). Here UV denotes the mapping from screen space to texture space, and UV^{-1} denotes the mapping from texture space back to screen space.
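The initialization step described above can be sketched as follows; `render_latent(mesh, texture, view)` is a hypothetical stand-in for the differentiable renderer (Pytorch3D in the implementation), and the channel and resolution values merely echo the implementation details reported later.

```python
import torch

def init_consistent_views(mesh, views, render_latent,
                          latent_ch=4, tex_res=1536, view_res=96):
    """Sample one latent noise texture W_T and render 3D-consistent initial views from it.
    `render_latent(mesh, texture, view)` is a placeholder returning (foreground latents, object mask)."""
    W_T = torch.randn(latent_ch, tex_res, tex_res)        # shared latent texture noise
    z_T = []
    for v in views:
        fg, mask = render_latent(mesh, W_T, v)            # [C, H, W] latents and [1, H, W] mask
        bg = torch.randn(latent_ch, view_res, view_res)   # background noise, same distribution
        z_T.append(mask * fg + (1.0 - mask) * bg)         # composite foreground over noise background
    return W_T, torch.stack(z_T)                          # W_T and Z_T = {z_T^(v_i)}
```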
The current-step clean texture can be obtained by averaging these partial textures at each overlapping texel:\n\hat{W}_{0|t} = \frac{\sum_{i=1}^{N} w_{0|t}^{(v_i)}}{\sum_{i=1}^{N} M(v_i) + \gamma} \quad (1)\nwhere w_{0|t}^{(v_i)} denotes the partial texture projected from view v_i, M(v_i) denotes a triangle visibility mask which marks the regions of w_{0|t}^{(v_i)} visible in view v_i, such that only visible texels are taken into account in the average; and γ is a small constant to avoid division-by-zero in the masked-out regions.\nBased on the diffusion sampling method introduced in Sec. 3, the texture at the next time step, W_{t-1}, can be sampled using \hat{W}_{0|t} and W_t (the additional noise of the variance term in diffusion sampling is added directly in UV space). Finally, this texture is mapped back to the corresponding views Z_{t-1} = UV^{-1}(W_{t-1}). Fig. 3 illustrates this MVD process. These generated views should have consistent textures on corresponding regions. To assign the background valid latent noise, we encode a random solid-color image and add the appropriate amount of noise for the current time step.\nOur method implements the screen-to-texture projection UV using back-propagation supported by a differentiable renderer. This implementation has the drawback that only texels visible on screen receive non-zero gradients, so texels with valid latent values appear as disconnected dots instead of complete patches on the partial latent textures, as pixels are forward-projected onto UV space. Information exchange would be compromised if multiple such sparse textures were aggregated. To tackle this issue, we apply a Voronoi-based filling [2] to propagate the latent texels and fill up all empty regions in the UV space. " }, { "figure_ref": [], "heading": "Self-attention Reuse", "publication_ref": [ "b0" ], "table_ref": [], "text": "Since multiple views of the same object tend to develop similar looks when they are concatenated into a single image during denoising [36], we modify the behavior of the batched self-attention computation in the pre-trained denoising U-Net to enable attention to other views in the same batch. Benefiting from the pre-trained self-attention layers, spatially distant views can develop highly related appearances even when the overlap in UV space is small or unavailable. Nevertheless, we observed a certain degradation of generated details when pairwise view attentions are used. Based on this observation, we propose two attention schemes to achieve appearance harmonization at different stages of denoising:\n\text{Attention}(v_i) = \begin{cases} \beta \cdot \text{SA}(v_i, v_{\{i-1,i,i+1\}}) + (1-\beta) \cdot \text{SA}(v_i, v_{\text{ref}}) & \text{for } t > t_{\text{ref}} \\ \text{SA}(v_i, v_{\{i-1,i,i+1\}}) & \text{otherwise} \end{cases} \quad (2)\nwhere SA(v_m, v_n) denotes applying the pre-trained self-attention layer so that view v_m attends to view v_n when calculating the attention results of view v_m in the denoising U-Net. We use t_{ref} to denote the time step at which we switch between the two attention schemes. To enforce global harmonization in the early stage of denoising, we apply two attention components before time step t_{ref}: (1) attention to the view itself, its left, and its right neighbors; and (2) attention to a reference view v_{ref}, which can be the default front view or a manually selected view. These two components are balanced by a weight factor β. In the remaining denoising steps, we disable the reference-view attention to avoid content invisible from the reference view v_{ref} being impaired by enforced attention to the reference view."
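The two per-step operations introduced above, the masked averaging of Eq. (1) and the neighbour/reference attention of Eq. (2), can be sketched with plain tensor code as follows. Both functions are illustrative: the hole-filled partial textures, the visibility masks, and the `attn` callable (the pre-trained self-attention layer applied to externally supplied keys and values) are assumed inputs, not the paper's actual interfaces.

```python
import torch

def aggregate_latent_texture(partial_textures, visibility_masks, gamma=1e-8):
    """Masked average of per-view partial latent textures in UV space (Eq. 1).
    partial_textures: [N, C, H, W] hole-filled projections w_{0|t}^{(v_i)};
    visibility_masks: [N, 1, H, W] triangle-visibility masks M(v_i)."""
    num = (partial_textures * visibility_masks).sum(dim=0)
    den = visibility_masks.sum(dim=0) + gamma          # gamma avoids division by zero
    return num / den                                   # \hat{W}_{0|t}

def extended_self_attention(h, i, ref_idx, beta, t, t_ref, attn):
    """Self-attention reuse for view i (Eq. 2). h: [N_views, tokens, dim] hidden states;
    attn(q_tokens, kv_tokens) stands in for the pre-trained self-attention layer."""
    n = h.shape[0]
    neighbours = torch.cat([h[(i - 1) % n], h[i], h[(i + 1) % n]], dim=0)  # v_{i-1}, v_i, v_{i+1}
    out = attn(h[i], neighbours)
    if t > t_ref:                                      # early denoising: blend in the reference view
        out = beta * out + (1.0 - beta) * attn(h[i], h[ref_idx])
    return out
```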
}, { "figure_ref": [], "heading": "Finalizing the Texture", "publication_ref": [ "b3", "b29" ], "table_ref": [], "text": "Despite that we can denoise a latent texture to a noiseless state to obtain the final texture, we choose not to do so, due to the following two reasons. Firstly, a latent texture generated through projecting screen-space latent to a texture is not viable for directly decoding to the final RGB texture, as stretching and rotation in the latent texture are likely to cause mismatching with the trained distribution of the decoder. Secondly, we observed that sharpness of the generated results could drop during the last few denoising steps (in which high-frequency details are forming) when Multi-View Diffusion is enforced. Therefore, we choose to conduct the final phase of denoising in screen space, with self-attention reuse enabled. The final RGB texture can then be extracted from fully denoised views Z 0 = {z (vi) 0 } N i=1 through decoding, projecting to texture space, and aggregation. Note that although latent texture guarantees the 3D consistency of latents, the high frequency details in the decoded images are not consistent at pixel-level. In order to retain the high-frequency details in the unified result, we propose to perform per-texel weighted combination based on their geometric contribution. Inspired by [4,30], we utilize the cosine similarities between per-pixel normal vectors and the viewing direction to determine the weight of each contributing pixel, as geometries facing away from the camera are less reliable. Hence, we compute the combined color value as follows.\nTexture(Z 0 ) = N i=1 D(z (vi) 0 ) ⊙ UV(θ (vi) ) α N i=1 UV(θ (vi) ) α + γ (3)\nwhere ⊙ denotes element-wise (texel-wise) multiplication, α is an exponent to adjust the smoothness of blending and γ is a small constant to avoid division-by-zero in the maskedout regions; θ (vi) is the weight map for view v i and it is calculated as the cosine similarity between the viewing direction v i and the normal vector, with each element calculated as,\nθ (vi) (p) = ⃗ v i (p) • ⃗ n m (p) ∥⃗ v i (p)∥∥⃗ n m (p)∥ ,(4)\nwhere p donates the 3D surface point on object m, corresponding to the screen pixel of interest, ⃗ v i (p) is the viewing direction from p to view v i and ⃗ n m (p) is the normal vector at p. The formulation of cosine similarity intuitively means the view facing directly to the surface point is more important. In all our experiments, we set α = 6 and disable the Voronoi-based filling during the texture project for sharpness preservation." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "In all our experiments, we have 10 cameras (views) to cover the object. Eight orthographic cameras are placed along the equator surrounding the object with a 45 • interval, and two additional cameras at elevated locations pointing towards the top of the object. Each view has a latent resolution of 96×96 to reduce the aliasing. The UV texture has a high latent resolution of 1,536×1,536 to prevent color blocks in rendered latent views in case the mesh has low texeldensity regions. To encourage the model to generate views with expected orientation, we append directional keyword Figure 5. Gallery of objects textured by our method. 
Text prompts from top to bottom: \"photo of Batman, sitting on a rock\", \"teddy bear wearing superman costume\", \"photo of a beautiful magpie\", \"A cute shiba inu dog\" and \"Blue and white pottery style lucky cat with intricate patterns\".\n(e.g., \"front view\") to the text prompt of each view automatically, based on its camera position. Our method takes around 60 to 150 seconds to denoise the above mentioned views, depending on the number of denoising steps. We developed our diffusion pipeline based on Stable Diffusion v1-5 and ControlNet module v1-1 (normal and depth) of Huggingface Diffusers library, and our projection functions are implemented using Pytorch3D." }, { "figure_ref": [ "fig_6", "fig_8", "fig_6", "fig_10" ], "heading": "Comparison with State-of-the-Art Methods", "publication_ref": [ "b29", "b3", "b1", "b29", "b3", "b1", "b29", "b3", "b1" ], "table_ref": [], "text": "In this section, we compare our results with four state-ofthe-art methods, including TEXTure [30], Text2Tex [4], Meshy [22], and TexFusion [3]. For the former three meth-ods with public released code or service, we textured a given mesh with the same prompt, and rendered the mesh with texture produced by each method. Fig. 5 visually compares our results with these three methods. TexFusion, on the other hand, have not released their code by the time this paper is prepared, thus we run our method with the same text prompt on a similar objects and render with a similar configuration (Fig. 6). More results of our method are shown in Fig. 7 to illustrate the effectiveness and generalizability of our method.\nComparison with TEXTure and Text2Tex. TEXTure [30] and Text2Tex [4] are two progressive inpainting methods that performs texturing by iteratively warping painted tex- ture to new views for inpainting. Despite that their inpainting reworks part of the existing textured region to reduce seams, their methods are still suffer from overfragmentation and obvious seams, especially at the back of the object, as in Fig. 5. On the other hand, by reusing self attention to attend to other views, our methods significantly improves the visual quality and style consistency of the texture as viewed from different angles.\nComparison with Meshy. A commercial software named Meshy [22] has shown better consistency in many test cases compared to TEXTure and Text2Tex. Their method produces highly contrasted content than ours. However, it also tends to generate over-saturated colors and misinterpretation of prompts, possibly related to aggressive hyperparameters and prompt engineering in the background Fig. 5. Their method also shows significant inconsistency or blank regions probably due to self-occlusion.\nComparison with TexFusion. Although TexFusion [3] adopts a similar latent texture approach, we observed better view consistency and finer details in our results, compared to the limited sample output currently available (Fig. 6). We hypothesize that our method benefited from our real synchronized denoising (i.e., non auto-regressive), reinforced by self attention reuse. However, more in-depth comparison and quantitative evaluation can only be conducted when their code is released. Quantitative Evaluation.\nIn view of the availability of source codes, we can only select TEXTure [30], Text2Tex [4] and Meshy [22] for quantitative comparison. Here we compute their Fréchet Inception Distance (FID) in our evaluation. 
FID measures the difference between the output distribution of original Stable Diffusion with ControlNet, and the rendering of textured objects with textures generated by each method, similar to the metric used in [3]. Ideally, such difference should be as small as possible, because textures are derived from the pretrained T2I model. The quantitative evaluation shows that our method achieves the best FID score compared to other methods. Self-attention Reuse We also examined the effect of selfattention reuse. When MVD is performed without attention reuse, consensus on the object appearance some times could be difficult to reach in the early diffusion process, leaving conflict appearance in the end result. In certain extreme cases, this also leads to redundant or incomplete content on the object surface. For instance, the suit of the Jackie Chan figure is colored differently for its front and back sides. (Fig. 9). Therefore self-attention reuse can compensate the MVD in reaching consensus for certain cases, especially when the overlap among views is insufficient. This is the case of Jackie Chan figure in which the overlap between front and back views is limited." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Despite our method performs well in generating consistent and high-quality textures, we observe the following limitations. Firstly, the pre-trained model has a bias towards generating common views (e.g., front view) of the object specified in the text prompt, making it unlikely to generate proper bottom view of the object (e.g., it may paint a front view of a shoe on the bottom face of a shoe mesh). We believe this limitation is inherited from the pre-trained model, and is hard to fully circumvent using techniques like directional prompt and depth-guided generation adopted in our method. Secondly, our method does not guarantee perfect boundaries at depth discontinuities in the denoised views (e.g., boundaries of self-occluding geometry or between foreground and background), this may lead to problems during RGB texture extraction and cause colors bleeding to unconnected regions. A potential solution is to introduce optimization based extraction method with perceptual losses or mask out the unreliable boundaries before projecting to the texture." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we present Synchronized Multi-View Diffusion, which synthesizes consistent and high-quality tex- tures in a zero-shot setting, by utilizing the pre-trained T2I latent diffusion models. With sharing information among views in each intermediate denoising steps through overlapping regions in texture space, our method solves the over-fragmentation and seam problems of existing progressive inpainting methods. Furthermore, our method utilizes the pretrained self attention layers as an additional assurance for consistency, further eliminating inconsistent results. Both qualitative and quantitative results demonstrate the effectiveness of our method. Our method meets the need of visual consistency and quality for practical use, with comparable or even better inference time compared to that of existing methods. We hope our work can draw interest in exploring different designs in achieving synchronized diffusion, further addressing the limitations of current methods." } ]
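Referring back to the texture-finalization step (Eqs. 3 and 4 above), the cosine-weighted blending of the decoded views can be sketched as follows. The inputs are assumed to have been resampled into UV space by the projection step, and clamping back-facing contributions to zero is an assumption of this sketch rather than something stated in the text.

```python
import torch

def blend_decoded_views(rgb_uv, normals_uv, viewdirs_uv, alpha=6, gamma=1e-8):
    """Cosine-weighted blending of decoded views (sketch of Eqs. 3-4).
    rgb_uv:      [N, 3, H, W] decoded colours D(z_0^{(v_i)}) projected to UV space;
    normals_uv:  [N, 3, H, W] surface normals in UV space;
    viewdirs_uv: [N, 3, H, W] unit directions from each surface point towards view v_i."""
    cos = (normals_uv * viewdirs_uv).sum(dim=1, keepdim=True)  # theta^{(v_i)} per texel
    w = cos.clamp(min=0.0) ** alpha                            # back-facing texels get zero weight (assumption)
    return (rgb_uv * w).sum(dim=0) / (w.sum(dim=0) + gamma)
```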
Figure 1. Visualization of RGB textures at different intermediate denoising steps. (a) With the proposed MVD, consensus on the surface color layout is reached from the early stage of the diffusion process. (b) Without MVD there is no consensus, and obvious seams, ghosting, and fragmentation artifacts are observed at the end because the views are inconsistent with each other.
Text-Guided Texturing by Synchronized Multi-View Diffusion
[ { "figure_caption": "Figure 2 .2Figure 2. Left: An illustration of information exchange in overlapped region (pink region) at intermediate steps of diffusion. Without information exchange, denoising result in different views of the same object could diverge into different directions, leading to seams when projecting to an output texture. Right: To tackle this issue, we propose a Multi-View Diffusion module that fuses intermediate steps of the denoising process, basing the next denoising step on a consensus of the current step. Here, we illustrate how MVD synchronizes and fuses view information from timesteps T to 0.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. A zoom-in diagram of MVD module. Here, denoised views are first projected to partial textures in the UV texture domain and aggregated into a complete clean latent texture. Then we can sample the latent texture of the next time step based on this clean texture, and project to screen space to get consistent views.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 2(right) illustrates how information among views are shared and synchronized with MVD from the denoising time-steps T to 0. This sharing can be done via the overlapping among views (Fig. 2(left) ) in the texture domain. The latent values from different views on the overlapping area can be blended with appropriate weights, and combined values are then used for next round of denoising.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. An illustration of how the forwardly projected pixels are disjoint in UV space, and how filling and masking are applied to obtain a partial textures with large patches of valid texels.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig 4 shows the latent texture which filled with Voronoi diagram. The fully-filled latent texture are then masked according to a triangle visibility mask M (v i ), such that the propagation will not exceed the boundary of regions visible to view v i , as shown in Fig 4. These filled textures are now ready for aggregation since they are free of the sparsity problem mentioned above.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Comparison with TexFusion. Same prompts with similar meshes are used as input to our method. A similar lighting setup is used to render the images. Please refer to Fig. 1 & 6 in [3] for comparison. Results of [3] are not included as we have not yet received the permission to use.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "\"Publicity photo of a 60s movie, full color.\" \"A photo of a Chinese dragon sculpture, glazed facing, vivid colors.\" \"Blue and white pottery style lucky cat with intricate patterns.\" \"A photo of a robot hand with mechanical joints.\" \"A Japanese demon mask.\" \"A Jackie Chan figure.\" \"A photo of Thomas the tank engine.\" \"A photo of a gray and black Nike Airforce high top sneakers.\"", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Gallery of objects textured by our method. 
Corresponding text prompt is underneath each textured object.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8. Ablation study of Multi-View Diffusion module (MVD).", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. Ablation study of Self-attention reuse (SAR).", "figure_data": "", "figure_id": "fig_10", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure 10. Failure cases. The tail of the dog does not have a clear silhouette in the end diffusion result, which cause incorrect projection to other part of the geometry.", "figure_data": "", "figure_id": "fig_11", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Quantitative evaluation. View Diffusion Module Fig.8compares the results with and without our Multi-View Diffusion module. In the experiment without MVD module, we initialize 3D consistent initial latent noise and denoise each view individually using default Stable Diffusion and ControlNet pipeline. The final texture of the case without MVD exhibits severe ghosting artifact, because the generated appearances from different views are significantly different. Without MVD, there is no consensus in content among different views. This proves that MVD module is essential for alignment and localization of surface details on the textured object.", "figure_data": "Methods FID score ↓Meshy92.1418Text2Tex74.9667TEXTure78.6683Ours61.65505.3. Ablation StudyMulti-", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Yuxin Liu; Minshan Xie; Hanyuan Liu; Tien-Tsin Wong
[ { "authors": " Michael Ashikhmin", "journal": "", "ref_id": "b0", "title": "Synthesizing natural textures", "year": "2001" }, { "authors": "Franz Aurenhammer", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b1", "title": "Voronoi diagrams-a survey of a fundamental geometric data structure", "year": "1991" }, { "authors": "Tianshi Cao; Karsten Kreis; Sanja Fidler; Nicholas Sharp; Kangxue Yin", "journal": "", "ref_id": "b2", "title": "Texfusion: Synthesizing 3d textures with text-guided image diffusion models", "year": "2023" }, { "authors": "Dave Zhenyu; Chen ; Yawar Siddiqui; Hsin-Ying Lee; Sergey Tulyakov; Matthias Nießner", "journal": "", "ref_id": "b3", "title": "Text2tex: Text-driven texture synthesis via diffusion models", "year": "2007" }, { "authors": "S David; Ebert", "journal": "Morgan Kaufmann", "ref_id": "b4", "title": "Texturing & modeling: a procedural approach", "year": "2003" }, { "authors": "David S Ebert; F Kenton Musgrave; Darwyn Peachey; Ken Perlin; Steve Worley", "journal": "Morgan Kaufmann Publishers", "ref_id": "b5", "title": "Texturing and Modeling: A Procedural Approach", "year": "2003" }, { "authors": "Alexei A Efros; Thomas K Leung", "journal": "IEEE", "ref_id": "b6", "title": "Texture synthesis by non-parametric sampling", "year": "1999" }, { "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "Advances in neural information processing systems", "ref_id": "b7", "title": "Generative adversarial nets", "year": "2014" }, { "authors": "Anchit Gupta; Wenhan Xiong; Yixin Nie; Ian Jones; Barlas Oguz", "journal": "", "ref_id": "b8", "title": "3dgen: Triplane latent diffusion for textured mesh generation", "year": "2023" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "Advances in neural information processing systems", "ref_id": "b9", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Ajay Jain; Ben Mildenhall; Jonathan T Barron; Pieter Abbeel; Ben Poole", "journal": "", "ref_id": "b10", "title": "Zero-shot text-guided object generation with dream fields", "year": "2022" }, { "authors": "Tero Karras; Samuli Laine; Timo Aila", "journal": "", "ref_id": "b11", "title": "A style-based generator architecture for generative adversarial networks", "year": "2019" }, { "authors": "Mohammad Nasir; Tianhao Khalid; Eugene Xie; Tiberiu Belilovsky; Popa", "journal": "", "ref_id": "b12", "title": "Clip-mesh: Generating textured meshes from text using pretrained image-text models", "year": "2022" }, { "authors": "P Diederik; Max Kingma; Welling", "journal": "", "ref_id": "b13", "title": "Auto-encoding variational bayes", "year": "" }, { "authors": "Johannes Kopf; Chi-Wing Fu; Daniel Cohen-Or; Oliver Deussen; Dani Lischinski; Tien-Tsin Wong", "journal": "ACM Trans. 
Graph", "ref_id": "b14", "title": "Solid texture synthesis from 2d exemplars", "year": "2007" }, { "authors": "Vivek Kwatra; Irfan Essa; Aaron Bobick; Nipun Kwatra", "journal": "", "ref_id": "b15", "title": "Texture optimization for example-based synthesis", "year": "2005" }, { "authors": "Muheng Li; Yueqi Duan; Jie Zhou; Jiwen Lu", "journal": "", "ref_id": "b16", "title": "Diffusionsdf: Text-to-shape via voxelized diffusion", "year": "2022" }, { "authors": "Chen-Hsuan Lin; Jun Gao; Luming Tang; Towaki Takikawa; Xiaohui Zeng; Xun Huang; Karsten Kreis; Sanja Fidler; Ming-Yu Liu; Tsung-Yi Lin", "journal": "", "ref_id": "b17", "title": "Magic3d: High-resolution text-to-3d content creation", "year": "2023" }, { "authors": "Jonathan Lorraine; Kevin Xie; Xiaohui Zeng; Chen-Hsuan Lin; Towaki Takikawa; Nicholas Sharp; Tsung-Yi Lin; Ming-Yu Liu; Sanja Fidler; James Lucas", "journal": "", "ref_id": "b18", "title": "Att3d: Amortized text-to-3d object synthesis", "year": "2023" }, { "authors": "Jianye Lu; Athinodoros S Georghiades; Andreas Glaser; Hongzhi Wu; Li-Yi Wei; Baining Guo; Julie Dorsey; Holly Rushmeier", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b19", "title": "Context-aware textures", "year": "2007" }, { "authors": "Tom Mertens; Jan Kautz; Jiawen Chen; Philippe Bekaert; Frédo Durand", "journal": "Rendering Techniques", "ref_id": "b20", "title": "Texture transfer using geometry correlation", "year": "2006" }, { "authors": " Meshy", "journal": "", "ref_id": "b21", "title": "Meshy -3d ai generator", "year": "2023" }, { "authors": "Gal Metzer; Elad Richardson; Or Patashnik; Raja Giryes; Daniel Cohen-Or", "journal": "", "ref_id": "b22", "title": "Latent-nerf for shape-guided generation of 3d shapes and textures", "year": "2023" }, { "authors": "Chong Mou; Xintao Wang; Liangbin Xie; Jian Zhang; Zhongang Qi; Ying Shan; Xiaohu Qie", "journal": "", "ref_id": "b23", "title": "T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models", "year": "2023" }, { "authors": "Gimin Nam; Mariem Khlifi; Andrew Rodriguez; Alberto Tono; Linqi Zhou; Paul Guerrero", "journal": "", "ref_id": "b24", "title": "3d-ldm: Neural implicit 3d shape generation with latent diffusion models", "year": "2022" }, { "authors": "Alex Nichol; Heewoo Jun; Prafulla Dhariwal; Pamela Mishkin; Mark Chen", "journal": "", "ref_id": "b25", "title": "Point-e: A system for generating 3d point clouds from complex prompts", "year": "2022" }, { "authors": "Ben Poole; Ajay Jain; Jonathan T Barron; Ben Mildenhall", "journal": "", "ref_id": "b26", "title": "Dreamfusion: Text-to-3d using 2d diffusion", "year": "2022" }, { "authors": "Emil Praun; Adam Finkelstein; Hugues Hoppe", "journal": "", "ref_id": "b27", "title": "Lapped textures", "year": "2000" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b28", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Elad Richardson; Gal Metzer; Yuval Alaluf; Raja Giryes; Daniel Cohen-Or", "journal": "", "ref_id": "b29", "title": "Texture: Text-guided texturing of 3d shapes", "year": "2007" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b30", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Aditya Sanghi; Hang Chu; 
Joseph G Lambourne; Ye Wang; Chin-Yi Cheng; Marco Fumero; Kamal Rahimi Malekshan", "journal": "", "ref_id": "b31", "title": "Clip-forge: Towards zero-shot text-to-shape generation", "year": "2022" }, { "authors": "Aditya Sanghi; Hang Chu; Ye Joseph G Lambourne; Chin-Yi Wang; Marco Cheng; Kamal Fumero; Rahimi Malekshan", "journal": "", "ref_id": "b32", "title": "Clip-forge: Towards zero-shot text-to-shape generation", "year": "2022" }, { "authors": "J Edward; David Smith; Meger", "journal": "PMLR", "ref_id": "b33", "title": "Improved adversarial systems for 3d object generation and reconstruction", "year": "2017" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "PMLR", "ref_id": "b34", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Christina Tsalicoglou; Fabian Manhardt; Alessio Tonioni; Michael Niemeyer; Federico Tombari", "journal": "", "ref_id": "b35", "title": "Textmesh: Generation of realistic 3d meshes from text prompts", "year": "2023" }, { "authors": "Aaron Van Den; Oriol Oord; Vinyals", "journal": "Advances in neural information processing systems", "ref_id": "b36", "title": "Neural discrete representation learning", "year": "2017" }, { "authors": "Tien-Tsin Wong; Wai-Yin Ng; Pheng-Ann Heng", "journal": "Springer", "ref_id": "b37", "title": "A geometry dependent texture generation framework for simulating surface imperfections", "year": "1997" }, { "authors": "Chaohui Yu; Qiang Zhou; Jingliang Li; Zhe Zhang; Zhibin Wang; Fan Wang", "journal": "", "ref_id": "b38", "title": "Points-to-3d: Bridging the gap between sparse points and shape-controllable text-to-3d generation", "year": "2023" }, { "authors": "Xiaohui Zeng; Arash Vahdat; Francis Williams; Zan Gojcic; Or Litany; Sanja Fidler; Karsten Kreis", "journal": "", "ref_id": "b39", "title": "Lion: Latent point diffusion models for 3d shape generation", "year": "2022" }, { "authors": "Lvmin Zhang; Anyi Rao; Maneesh Agrawala", "journal": "", "ref_id": "b40", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Linqi Zhou; Yilun Du; Jiajun Wu", "journal": "", "ref_id": "b41", "title": "3d shape generation and completion through point-voxel diffusion", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 433.37, 433.78, 83.1, 13.74 ], "formula_id": "formula_0", "formula_text": "(v1) t , z (v2) t , • • • , z (vn) t" }, { "formula_coordinates": [ 4, 50.11, 701.06, 50.27, 12.79 ], "formula_id": "formula_1", "formula_text": "Z T = {z (vi)" }, { "formula_coordinates": [ 4, 308.86, 220.41, 236.25, 26.49 ], "formula_id": "formula_2", "formula_text": "Z 0|t = {z (vi) 0|t } N" }, { "formula_coordinates": [ 4, 377.37, 365.75, 167.74, 31.98 ], "formula_id": "formula_3", "formula_text": "Ŵ0|t = N i=1 w (vi) 0|t N i=1 M (v i ) + γ(1)" }, { "formula_coordinates": [ 4, 308.86, 409.45, 20.77, 11.87 ], "formula_id": "formula_4", "formula_text": "w (vi)" }, { "formula_coordinates": [ 5, 65.56, 367.3, 220.8, 46.19 ], "formula_id": "formula_5", "formula_text": "Attention(v i ) =          β • SA(v i , v {i-1,i,i+1} ) + (1 -β) • SA(v i , v ref ) for t > t ref SA(v i , v {i-1,i,i+1} ) otherwise(2)" }, { "formula_coordinates": [ 5, 325.79, 298.91, 219.32, 29.83 ], "formula_id": "formula_6", "formula_text": "Texture(Z 0 ) = N i=1 D(z (vi) 0 ) ⊙ UV(θ (vi) ) α N i=1 UV(θ (vi) ) α + γ (3)" }, { "formula_coordinates": [ 5, 368.41, 420.95, 176.71, 23.22 ], "formula_id": "formula_7", "formula_text": "θ (vi) (p) = ⃗ v i (p) • ⃗ n m (p) ∥⃗ v i (p)∥∥⃗ n m (p)∥ ,(4)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b23", "b4", "b13", "b24", "b21", "b26", "b20", "b25", "b11", "b38", "b27" ], "table_ref": [], "text": "Heavily pre-trained and fine-tuned Large Language Models (LLMs) have demonstrated exceptional performance on zero-shot (Kojima et al. 2022) and few-shot tasks (Brown et al. 2020). The ability of these models to generalize, combined with their costly pretraining, has shifted the focus from training ad-hoc models to perform specific tasks to utilizing these general-purpose foundational models for a wide variety of use-cases (Eloundou et al. 2023;OpenAI 2023). These pre-trained models lack knowledge of private contexts or recent events.\nTo provide these LLMs with up-to-date or relevant information, methods such as Retrieval Augmented Generation (RAG) (Lewis et al. 2020;Karpukhin et al. 2020;Mao et al. 2020) are used to include external information into a generation process without needing fine-tuning on new data. This process allows LLMs to first query an external data source, retrieve relevant information (with respect to a given prompt), and then use both the prompt and the retrieved data as input to the inference phase of the LLM.\nSimilar to the problem of federated learning (Kairouz et al. 2019), it is valuable to aggregate sensitive data from multiple (perhaps many) data owners. To do that, each party should be able to guarantee that their own private data remains private even when it is utilized. On the other hand, model users should be able to query these data from many data owners without needing to share what questions they are asking.\nIn this work we argue that LLMs require a new model for sharing data for AI tasks. Compared to federated learning, which focuses on the training phase, LLMs should focus on the (i) retrieval phase; (ii) inference phase. Guaranteeing privacy of both the query and any private documents residing in the retrieval database require that both phases utilize privacy-preserving techniques and are chained together.\nAlas, to the best of our knowledge all existing works only tackle the LLM inference problem (Li et al. 2022;Dong et al. 2023;South et al. 2023;Mo et al. 2020), but provide no secure solution when retrieval is involved. In this work, we close this gap by introducing Private Retrieval Augmented Generation (PRAG). PRAG allows users to privately search a database, which in itself is private, then send the augmented query privately to any secure (or otherwise trusted) LLM, creating an end-to-end secure solution.\nOur approach and contributions. In this paper, we propose Private Retrieval Augmented Generation (PRAG), a secure approach to augment neural information retrieval that hides both query vectors and the retrieval database. We use a retrieval database split across a set of servers, and we ensure data remains private by using secure multi-party computation (MPC) techniques. To the best of our knowledge, we are the first to consider the problem of secure distributed retrieval in the context of LLMs, and more broadly, are the first to propose a solution for private similarity search that can protect both the query and a secret-shared (or encrypted) database. 
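For contrast with the private protocol developed below, the standard (non-private) retrieve-then-generate flow described above looks roughly like this; `embed`, `vector_store`, and `llm` are placeholder interfaces, and PRAG's goal is to perform the search step without revealing the query or the database.

```python
def rag_answer(question, embed, vector_store, llm, k=4):
    """Plain (non-private) retrieval-augmented generation, for reference.
    `embed`, `vector_store.search`, and `llm` are placeholder interfaces."""
    q_vec = embed(question)                    # embed the query locally
    docs = vector_store.search(q_vec, k=k)     # retrieval: this step leaks the query today
    context = "\n\n".join(d.text for d in docs)
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return llm(prompt)                         # inference over the augmented prompt
```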
This approach can be deployed with any standard neural information retrieval (IR) embedding model to augment distance calculations (e.g., cosine, dot, euclidean) and top-k retrieval over federated vector stores, scaling to medium-size databases with very little accuracy loss (99% accuracy on real data).\nWe further scale the approach to much larger databases using an approximate k-nearest-neighbors approach inside MPC, replicating the accuracy of the state of the art in ap-proximate retrieval using a first-of-its kind inverted files index inside MPC, providing significant speed improvements for retrieval. Our approach provides both theoretical and empirical improvements of value. We achieve constant communication on the client's side and sublinear communication on the servers' side --the bottleneck in MPC approaches. This work is the first IR approach to work across more than two servers with minimal additional costs. We further present a 'leaky' version of the protocol that allows for partial privacy of queries under a privacy budget with significant improvements to speed.\nWe evaluate PRAG across a range of data distributions, both real and synthetic, to show it broadly maintains the performance characteristics of non-secure IR approaches. We provide a pytorch-native implementation of our system using the Crypten MPC engine and retrieval hooks for langchain and BEIR." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "In this section, we present the Private Retrieval Augment Generation (PRAG) framework. The method builds from secret sharing and MPC friendly exact top-k calculations to a new MPC design of an inverted file index for efficient approximate top-k calculation." }, { "figure_ref": [], "heading": "Overview and Trust Model", "publication_ref": [ "b25", "b11", "b1", "b8", "b15", "b3" ], "table_ref": [], "text": "Although a wide array of approaches exist for training document embedding models and augmenting generation with retrieved models, most neural information retrieval methods are underpinned by a step where a querier sends a query embedding to a server to calculate the distance / similarity between the query vector and the database, in order to return a document either as an embedding vector for concatenation or with the document tokens for use in LLM inference. This setup offloads the storage of large databases and their associated calculations to a more powerful server.\nRecently, a significant body of research has been focusing on the problem of secure inference, which ensures that a query remains private at all times. Whether secure inference is achieved through cryptographic techniques (e.g., (Li et al. 2022;Dong et al. 2023;Akimoto et al. 2023;Chen et al. 2022;Gupta et al. 2023)), or by running the model locally (Arora and Ré 2022), if the inference pipeline includes an external retrieval phase (as is often the case), then security does not hold as the query itself is leaked to the database operator.\nSimilarly, the database may itself hold private information, collected by many different data owners. The only way to protect their data is by making sure both the client and the vector database server(s) remain oblivious to its content.\nTo formalize this, we assume our system has n clients clients sending queries and n owners data owners. Both clients and data owners interact with a set of n servers vector database operators. 
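To make these roles concrete, the client-side data flow can be organised as sketched below. This is only an illustration of who holds what; the sharing, transport, and reconstruction helpers are placeholders for the MPC machinery described in the rest of this section.

```python
def prag_client_query(question, embed, share, send_and_await, reconstruct, k=4):
    """Client-side data flow under the PRAG trust model (all helpers are placeholders).
    The query embedding is secret-shared, so no single database server ever sees it."""
    q = embed(question)                            # local query embedding
    q_shares = share(q)                            # one share per vector-database server
    result_shares = send_and_await(q_shares, k=k)  # servers interactively run the MPC retrieval
    return reconstruct(result_shares)              # client recombines shares of the top-k documents
```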
We assume that all parties in the system are semi-honest (i.e., they follow the protocol) and that at most t < n_servers/2 of the servers are corrupt (the honest-majority setting). In this work, we do not focus on how the n_owners data owners privately build the server, and we assume that at some point in the past these data owners have secret-shared their data to the servers. Instead, we are focused on the inference stage, a much more frequent and real-time operation." }, { "figure_ref": [], "heading": "Exact MPC Tools", "publication_ref": [ "b34", "b16", "b31", "b22", "b28", "b14", "b10", "b6", "b30", "b22", "b9", "b0", "b7", "b22" ], "table_ref": [], "text": "We assume all values are shared using Shamir secret sharing (Shamir 1979) over a prime field F p , where p is a 32-bit or 64-bit prime. We note that our protocols could work using other secret sharing schemes suitable for the honest-majority setting (e.g., replicated secret sharing (Ito, Saito, and Nishizeki 1989) over the ring Z_{2^32} or Z_{2^64}), but Shamir is the ideal choice in our setting, as it requires the least amount of space and scales well to a large number of servers.\nWe further assume, as is common in the secure machine learning literature (Riazi et al. 2018;Knott et al. 2021), that there is a trusted dealer that generates shared random values. However, other techniques could distribute this (Damgård et al. 2013;Orsini, Smart, and Vercauteren 2020;Escudero et al. 2020). As in other works, since these protocols happen offline in a preprocessing phase and do not impact the online performance of serving a query, we do not benchmark their performance.\nWe denote arithmetic secret-shared values by [x]. A share for a specific server i is denoted as [x] i . When sharings may appear once as a t-degree sharing and another time as a 2t-degree sharing, we occasionally distinguish these sharings with a superscript (e.g., [x] (2t) ). We use [x] := SS.Share(x) and x := SS.Reveal([x]) for sharing and revealing secret-shared items.\nAs is well known, all linear operations over secret-shared values require no interaction between the servers. For multiplication, a single round of interaction is required. Given our setting, we find the multiplication protocol by Damgård and Nielsen (Damgård and Nielsen 2007) to be the most suitable.\nSince in this work we operate in the semi-honest, honest-majority setting and encode real numbers into a field, we use the common technique of representing all underlying values as fixed-point integers (Catrina and Saxena 2010). In practice, this means that for any real value x ∈ R, we encode it as a fixed-point integer ⌊x·2^f⌋ ∈ Z with precision f. Note that multiplying two encoded values results in a value with 2f precision. Therefore, truncation is needed after every multiplication to avoid causing an overflow inside the field, which would distort results.\nDistance calculations While there is some heterogeneity in the distance measures used in neural information retrieval, the majority use dot products, cosine similarity, or L2 norms (euclidean distance). We provide MPC-friendly implementations of all three.\nA naive implementation of a dot product between a vector and a matrix can be provided by running the secure multiplication protocol in parallel. Both the communication and the computation complexity scale linearly with the size of the database N and the embedding dimension size d e , the latter of which is fixed in almost all cases. Round complexity remains the same (constant) regardless. 
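All of the distance computations below operate on such fixed-point encodings. As a concrete, non-cryptographic illustration of the encoding and of why a truncation by 2^f must follow every multiplication, consider the sketch below; the modulus and precision are illustrative choices, and in the actual MPC setting the truncation is a dedicated sub-protocol rather than a local shift. This is not the Crypten-based implementation.

```python
# Minimal plaintext sketch of fixed-point encoding over a prime field.
# P and F are example parameters, not the values used in the paper.
P = 2**61 - 1      # example prime field modulus
F = 16             # number of fractional bits

def encode(x: float) -> int:
    """Encode a real number as floor(x * 2^F) as an element of F_P."""
    return int(round(x * (1 << F))) % P

def decode(a: int) -> float:
    """Decode, mapping the upper half of the field to negative values."""
    if a > P // 2:
        a -= P
    return a / (1 << F)

def fp_mul(a: int, b: int) -> int:
    """Multiply two encoded values; the product carries 2F fractional bits,
    so we truncate by 2^F to return to F-bit precision (a local shift here;
    in MPC this truncation is its own sub-protocol)."""
    prod = (a * b) % P
    if prod > P // 2:                       # negative encoding
        return P - ((P - prod) >> F)
    return prod >> F

x, y = 1.5, -2.25
assert abs(decode(fp_mul(encode(x), encode(y))) - x * y) < 1e-3
```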
Extending the dot product gives us cosine similarity, the predominant distance measure in sentence-transformer-style models (Reimers and Gurevych 2019). To save on expensive MPC computations, we pre-normalize the input vectors and matrices prior to secret sharing into MPC, allowing cosine similarity to reduce to a simple dot product. Computing Euclidean distance can also be achieved directly through MPC, but we observe that this is a much more expensive operation, as it requires computing square roots inside the MPC circuit. For example, Crypten (Knott et al. 2021), which we use in our implementation, uses a slow Newton-Raphson approach for computing square roots, requiring multiple rounds of communication.\nHowever, we make the observation that, given that top-k calculations are the end goal of distance calculations, the monotonic square root step in L2 can be ignored completely before looking for the top-k elements in the distance vector, removing the need to compute the square root securely.\nFast secure dot product Computing the dot product of two vectors x, y requires computing the sum of their point-wise products z := Σ_{j=1}^{d} x_j y_j. This can be achieved in MPC naively by using a secure multiplication protocol in parallel. However, for vectors of size N, this requires pre-processing and communicating O(N) elements per dot product. This further compounds as we try to securely multiply matrices together, as in our case.\nHowever, as was observed previously (Chida et al. 2018) and leveraged in works such as Blinder (Abraham, Pinkas, and Yanai 2020), we can reduce the communication complexity of computing a dot product from N elements to a single element, by having each party first locally compute the sum of point-wise products (instead of each product independently) and then masking only the final result, as is shown in Protocol 2 in the appendix. Repeating this across a dimension of a matrix, we can use this for efficient matrix multiplication.\nRelation to private information retrieval A well-known method of privately reading a specific entry in a database is to compute the dot product of the database with a one-hot vector whose only non-zero element is at the index of interest. Assuming i is the index of interest in some arbitrary vector or matrix x, one can privately retrieve the data at row i, without leaking any information, as [0, ..., 1, ..., 0] • [x_1, ..., x_i, ..., x_N]^T = [x_i]. To read several rows at once, we can first sum across several one-hot vectors to obtain a single vector. This simple oblivious private retrieval from a database allows us to extract any top-k elements from a database matrix that has been secret-shared. This allows us to extract either database embedding vectors or token arrays from inside the distributed database for return. In essence, rather than securely returning top-k indices and asking the user to separately extract them, we can return the original tokens from a secret-shared database directly in MPC. This oblivious retrieval is used extensively throughout our protocols below, such as in extracting candidate vectors from clusters.\nExact top-k for retrieval Retrieving the most similar documents to a query requires first ranking all documents by some similarity metric (as above) and then picking the top k documents that are closest to the query.\nOur solution is conceptually similar to secure top-k circuits designed in other works (Chen et al. 2020), where O(kN) comparisons are needed. 
These circuits operate by successively keeping an ordered list of k items, and then comparing each value in the array with the minimum value in the (much smaller) sorted list. Unfortunately, this solution also requires O(N) rounds for MPC based on secret sharing. Instead, our protocol iterates k times over a secret-shared vector [x]. In each iteration, we run argmax([x]) to extract the current maximum's index in the vector. We then obliviously scale down the selected value enough so it will be ignored in future iterations.\nThere are many ways to implement an MPC protocol for argmax([x]). Our description above assumes a recursive tree-reduction-based protocol as in Crypten (Knott et al. 2021), having O(log 2 (N)) rounds and O(N log 2 (N)) total communication. This leads to an exact top-k round complexity of O(k log 2 (N)) and O(kN log 2 (N)) overall communication.\nBy combining this with distance calculations and oblivious private retrieval from a database, we can provide an end-to-end exhaustive exact algorithm to return the top-k nearest documents to a query from a database of embeddings (and a database of tokens for exact document return).\n(Pipeline figure: secret share query embedding → MPC distance calculation → secret-shared distance vector → MPC top-k → oblivious database retrieval → document return.)" }, { "figure_ref": [], "heading": "Nearest Neighbors and Inverted Files (IVF)", "publication_ref": [ "b18" ], "table_ref": [], "text": "At its core, the information retrieval task of top-k closest points is exactly the task of solving the k-nearest-neighbors (kNN) problem, which requires finding the k points in a database that are nearest to the given data point (the query).\nWhile the above exact approach achieves this, it does so at a significant speed cost (both with and without MPC), motivating the creation of approximate nearest neighbors algorithms, which only require a sublinear amount of work. These algorithms operate by first computing a compact representation of the dataset called the index, and then executing queries on the index. Many approximate nearest neighbors techniques exist, and one that is particularly amenable to MPC is the inverted files index (IVF) (Johnson, Douze, and Jégou 2017; Jégou, Douze, and Schmid 2011). This technique works by first using a clustering algorithm (e.g., k-means) over the data set to find its n c centroids. Then, each centroid represents a cluster holding all points associated with that cluster. In other words, this process splits the database into n c buckets.\nAfter this one-time step, querying the data starts by computing the nearest neighbors of the query with respect to all centroids. Then, the n probe nearest inverted files are searched, looking for the k nearest neighbors among them.\nDuring IVF generation, parameter choices in how the index is built affect the downstream performance of the queries. We choose the number of clusters to be n c = α √ N to get sublinear complexity, where α is a free parameter that can be tuned. During query time, we find the distance to all n c centroids, and select the top n probe clusters to inspect further. As we will see during experiments, increasing n probe increases the recall performance of the model, and indeed at n probe = n c , all clusters are inspected and the search becomes exact. 
However, the nature of the IVF clustering allows a smaller n probe to be chosen while still achieving high accuracy." }, { "figure_ref": [], "heading": "Efficient approximate vector nearest neighbor search in MPC", "publication_ref": [], "table_ref": [], "text": "Bringing this into MPC, the protocol Π IVFQuery securely computes the approximate nearest neighbors using an inverted file index. The protocol assumes the servers have precomputed the secret-shared inverted index [IVF], which consists of n c lists of size m, both of which are of size O( √ N ), ensuring the overall communication complexity is sublinear. We use the MPC distance measures established above to calculate the distance between the query vector and each of the n c cluster means.\nThe parties then run a secure exact top-k protocol as described earlier to identify the n probe most similar clusters. Unlike non-MPC protocols, it is critical that the servers remain oblivious as to which are the top clusters for this query. Otherwise, information about both the query and the database would leak. For this reason, we require the top-k protocol to return each index as a one-hot vector of size n c ; these are collectively stored in [closest buckets].\nThen, the parties perform an exact-match private information retrieval to get all the vectors in the closest buckets. These [candidates] can be obliviously found through a product of [closest buckets], a mapping of centroid indices to cluster indices in the database, [IVF indices], and the entire [IVF] vector database. By obliviously reducing the entire vector database into a much smaller search space that only includes vectors from the n probe nearest clusters, we are able to achieve sublinear overall communication.\nAt this stage, [candidates] holds a reduced (n probe × m) × d vector matrix (where d is the embedding dimension). [candidates indices] will similarly store the mapping from each candidate to the original database index. We proceed by running an exact nearest neighbor search again, which computes the distances between the query and all candidates and then securely gets the top-k entries. Using [candidates indices], these top-k entries are mapped back to the original database records, where documents can be obliviously retrieved." }, { "figure_ref": [], "heading": "Sublinear Communication Complexity", "publication_ref": [], "table_ref": [], "text": "The client maintains an optimal communication complexity of O(1), as it only needs to communicate a share of the query vector to each server; in fact, she communicates exactly d elements, as is the size of the input vector.\nAs to the servers, in lines 5-7 a total of n c := O( √ N ) elements are communicated. Computing the exact top-k over these n c distances requires O(k • log 2 (n c )) communication, and the subsequent oblivious reduction and exact search over the candidate set touch only O( √ N ) vectors, so the servers' end-to-end communication remains sublinear in N. This holds true so long as n probe remains small enough to be considered a constant. As the number of candidate clusters to be probed becomes n c , the overall complexity of the approach becomes O( √ N • √ N ) = O(N), which is no better than exact search but with additional overhead operations. Hence, n probe should be kept low, as we will see in the experimental settings."
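For readers less familiar with inverted file indexes, the following plaintext (non-MPC) sketch mirrors the build-and-query logic that Π IVFQuery performs obliviously. The choice of k-means implementation, the value of α, and the defaults for k and n probe are illustrative and are not the paper's code.

```python
# Plaintext reference for the IVF build/query flow that the MPC protocol
# follows obliviously. Parameter choices (alpha, n_probe, k) are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def build_ivf(db: np.ndarray, alpha: float = 4.0):
    n_c = max(1, int(alpha * np.sqrt(len(db))))        # n_c = alpha * sqrt(N)
    km = KMeans(n_clusters=n_c, n_init=10).fit(db)
    buckets = [np.where(km.labels_ == c)[0] for c in range(n_c)]
    return km.cluster_centers_, buckets

def ivf_query(query, centroids, buckets, db, k=5, n_probe=8):
    # 1) distance from the query to every centroid
    d_cent = np.linalg.norm(centroids - query, axis=1)
    # 2) pick the n_probe closest clusters
    probe = np.argsort(d_cent)[:n_probe]
    # 3) exact search restricted to the candidate vectors in those clusters
    cand = np.concatenate([buckets[c] for c in probe])
    d_cand = np.linalg.norm(db[cand] - query, axis=1)
    return cand[np.argsort(d_cand)[:k]]                # approximate top-k indices

# At n_probe = n_c the search becomes exact, matching the discussion above.
rng = np.random.default_rng(0)
db = rng.normal(0, 0.05, size=(10_000, 256)).astype(np.float32)
centroids, buckets = build_ivf(db)
print(ivf_query(db[17] + 0.01 * rng.normal(size=256), centroids, buckets, db))
```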
}, { "figure_ref": [ "fig_2" ], "heading": "Sacrificing Privacy for Speed in MPC IVF", "publication_ref": [], "table_ref": [], "text": "The fast secure dot product trick above helps significantly improve the speed of the step wherein we reduce the full database to only the n probe clusters vectors relevant to the query. However, this step is still extremely costly, requiring the manipulation of a large database of vectors for lookup when the clusters are stored in a large matrix.\nInstead, we can take an alternate approach, where each cluster is stored in its own secret shared database, with an exposed lookup table. The centroids of the database still remain secret shared and private, but during query time, the n probe closest clusters (shuffled to avoid exposing order) are decrypted by each server to retrieve the relevant secret shared cluster matrices, which can then be concatenated before passing into the second distance-top-k calculation. This has large speed implications, dramatically decreasing the data access time and allowing for speed more competitive with non-MPC IVF.\nHowever, this does come at the cost of privacy. Each server will now know the n probe closest clusters to the query, which leaks the area in the embedding space where the query is coming from. Indeed, while the centroids are secret shared, knowing the lookup table and what a user accesses would allow an actor to determine an average point across those centroids with more queries.\nTo mitigate this, a query could be noised according to a privacy budget similar to differential privacy, as for sufficiently large n probe , even a high noised query would likely contain the relevant closest clusters nearby. One slight advantage here is that larger choices of n probe provide more privacy (and more capacity for noising), while also increasing the overall accuracy of the search (as we see in Figure 3).\nIn general, this final methodological change differs from above by no longer being fully private, but is presented as part of the spectrum from slow but exact private search to fast approximate search, and finally to fastest but leaky approximate search." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b39" ], "table_ref": [], "text": "To demonstrate the performance of these models we run a series of experiments on both synthetic and real data to determine performance properties of the implementations of these methods above.\nWe benchmark the retrieval accuracy and speed across a range of embedding sizes (256 to 8192), synthetic embedding distributions (N (0, 0.05), N (0, 1), U (-1, 1), Binary), distance functions (cosine, dot product, euclidean), top-k values, IVF parameters, and database sizes. We perform MPC experiments on a single 2.2GHz Intel Xeon Silver CPU using Crypten's built-in communication code to spawn processes for each server.\nFurther to this, we test the approaches on retrieval of real neural embedding datasets from BEIR (Thakur et al. 2021) using the same environment, this collection of datasets uses a range of textual document types and sizes, all of which we use a standard off-the-shelf embedding on. While there are several parallelization improvements that can be made locally within each server for MPC, our implementations of each algorithm above remain unoptimized." 
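As a reference for how such a benchmark can be scored, the sketch below generates the synthetic embedding distributions listed above and computes recall against the exact top-k. The stand-in retriever and the recall@k definition are our assumptions, not the paper's evaluation harness.

```python
# Sketch of a recall@k benchmark over synthetic embedding distributions.
# The stand-in retriever (brute force over a random half of the database)
# is a placeholder for whichever secure or approximate method is evaluated.
import numpy as np

def make_db(dist, n, d, rng):
    return {
        "N(0,0.05)": lambda: rng.normal(0.0, 0.05, (n, d)),
        "N(0,1)":    lambda: rng.normal(0.0, 1.0, (n, d)),
        "U(-1,1)":   lambda: rng.uniform(-1.0, 1.0, (n, d)),
        "binary":    lambda: rng.integers(0, 2, (n, d)).astype(float),
    }[dist]()

def recall_at_k(approx_ids, exact_ids):
    return len(set(approx_ids) & set(exact_ids)) / len(exact_ids)

def stand_in_retriever(db, q, k, rng):
    subset = rng.choice(len(db), size=len(db) // 2, replace=False)
    d = np.linalg.norm(db[subset] - q, axis=1)
    return subset[np.argsort(d)[:k]]

rng = np.random.default_rng(0)
for dist in ["N(0,0.05)", "N(0,1)", "U(-1,1)", "binary"]:
    db = make_db(dist, 20_000, 256, rng)
    q = db[42] + rng.normal(0, 0.01, 256)
    exact = np.argsort(np.linalg.norm(db - q, axis=1))[:10]
    approx = stand_in_retriever(db, q, 10, rng)
    print(dist, recall_at_k(approx, exact))
```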
}, { "figure_ref": [ "fig_1" ], "heading": "Exact Search", "publication_ref": [], "table_ref": [], "text": "Each step of the exact search approach is extremely accurate, with small numerical errors introduced during MPC. For distance measures, MPC vectors have a mean squared error difference from pytorch calculated distances of less than 10 -5 for euclidean and 10 -8 for cosine, going as low as 10 -11 for euclidean distance on N (0, 0.05). These errors do not change with database size, and are introduced at the numerical level of the elements.\nThe exact top-k approach using tree reduction applied interactive k times suffers from similar small numerical errors. For distance vectors drawn N (0, 0.05), where outliers are often standalone, top-k elements are picked out with 0.99 or above recall and precision. For uniform distributions (unrealistic for embedding distance vectors) the f1 accuracy is lower for top-1 (0.842) and top-k (0.96) with recall and precision climbing for higher k. This is explained by the small distances present between the max and its nearest value when drawn from a uniform distribution, leading numerical errors to induce a loss of accuracy. Fortunately, the nature of real distance distributions means performance is high in real contexts. For small values of k, this approach can be relatively fast but increasing the choice of k dramatically increases the time cost due to communication complexity in the interactive argmax looping.\nPutting distance calculations, top-k, and oblivious retrieval together, the exact search approach in MPC can identify the top-1 (argmax) most similar vector to a query with 97.5% accuracy and top-50 with 98.6% F1 score, with accuracy independent of database sizes tested up to 5 × 10 5 . The constraint on the use of this MPC exact approach is the speed, taking up to 10 seconds for top-1 and top-5 for a 10 5 size database, and increasing dramatically for larger k as in Figure 2. " }, { "figure_ref": [ "fig_1" ], "heading": "Approximate Search", "publication_ref": [ "b30" ], "table_ref": [], "text": "Our MPC IVF implementation, using both fully secure and partially leaky clustering, returns the elements as the standard IVF implementation with an average of over 99% recall on both synthetic and real embedding data, with errors explained by numerical errors at runtime. For real data, we use embeddings from msmarco-distilbert-base-v3 from SBERT (Reimers and Gurevych 2019). These numerical errors partly flow through from the exact search above, which is used at various points in the IVF MPC algorithm. This accuracy of the MPC IVF to non-IVF is stable across choices of n probe and n c . While the MPC IVF matches the recall performance of the standard IVF, the underlying approximate nature of the IVF provides tradeoffs between accuracy and speed. As shown in Figure 2, increasing the value of n probe increases the proportion of the full database that is inspected at query time, in turn increasing the overall runtime. The benefit of IVF is that we can achieve high accuracy for even a low value of n probe , dramatically reducing query time at the cost of accuracy. " }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b3", "b19", "b18", "b7", "b43", "b33", "b32", "b37", "b7", "b25", "b11", "b40", "b17", "b41", "b42", "b29", "b12" ], "table_ref": [], "text": "Drawing on the ideas in private federated learning, we can maintain privacy when doing public queries (Arora et al. 
2022) and move beyond in-context learning (Arora and Ré 2022).\nWe bring privacy to this idea through augmenting existing non-private retrieval methods, ranging from exact search on small datasets to large-scale approximate retrieval (Johnson, Douze, and Jégou 2017;Jégou, Douze, and Schmid 2011). While several other works have examined the problem of secure similarity search (Chen et al. 2020;Zuber and Sirdey 2021;Servan-Schreiber, Langowski, and Devadas 2022;Asharov et al. 2017;Schoppmann, Gascón, and Balle 2018;Shaul, Feldman, and Rus 2018a,b;Songhori et al. 2015), to the best of our knowledge we are the first to examine a model where the database is secret-shared as well, and where an arbitrary number of servers and database owners can be supported. A comparison to the state-of-the-art protocols (Servan-Schreiber, Langowski, and Devadas 2022; Chen et al. 2020) is available in Table 1.\nThese approaches can augment other pieces of privacy-first ML infrastructure, from fully secure LLM inference (Li et al. 2022;Dong et al. 2023) to federated or privacy-preserving K-means clustering (Vaidya and Clifton 2003;Jagannathan and Wright 2005). We choose to focus on MPC techniques in this paper, as opposed to secure retrieval schemes that rely on trusted execution environments (TEEs) (Wang et al. 2006;Yang et al. 2008;Papadopoulos, Bakiras, and Papadias 2010;Drean et al. 2023), as TEEs have been known to suffer from privacy-breaching attacks." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We introduced PRAG, a novel approach for secure, distributed information retrieval for large language models. PRAG uniquely safeguards both query vectors and a multi-owner database using multi-party computation (MPC). Key contributions include an MPC-friendly protocol for inverted file approximate search, allowing for rapid document retrieval with sublinear communication complexity; analysis of exact search performance on language embeddings; and a version of the protocol that offers a trade-off between speed and partial privacy, under a predefined privacy budget. These tools allow for a new mechanism of neural information retrieval, which, when combined with secure inference of LLMs, is a stepping stone towards fully secure foundation model agent pipelines. However, much like secure execution of LLMs, the approach put forward here has significant computational costs and speed limitations, especially for large databases and high accuracy demands. Future work should explore optimizing communication costs, enhancing protocol robustness against collusion, and integrating PRAG into larger secure machine learning frameworks." }, { "figure_ref": [], "heading": "Appendix Secure Sum of Products Protocol", "publication_ref": [], "table_ref": [], "text": "Below we introduce the complete sum-of-products protocol used in this work." }, { "figure_ref": [], "heading": "Algorithm 2: Π SumProd", "publication_ref": [], "table_ref": [], "text": "Input: Public parameters: t, d\nInput: [x] (t) , [y] (t) , two input vectors of size d given as t-sharings\nPreprocessed: shared random values from the trusted dealer for masking and degree reduction\n1 Each server i locally computes its share of [z] (2t) as the sum of the point-wise products of its shares of [x] and [y];\n2 Re-randomize and reduce the sharing to degree t;\n3 Return [z] (t) ;\n(Figure: speed ratios between MPC and non-MPC methods.)" } ]
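As a sanity check on the core idea behind Algorithm 2, the sketch below simulates it in plain Python: each simulated server multiplies and sums its own Shamir shares locally, which yields a degree-2t sharing of the dot product that any 2t+1 servers can reconstruct. The masking/degree-reduction step and all fixed-point details are omitted, and the field size and party counts are illustrative.

```python
# Non-cryptographic simulation of the sum-of-products idea: servers hold
# Shamir shares of integer vectors x and y; each locally sums the products of
# its shares, producing a degree-2t sharing of <x, y>. Masking and degree
# reduction are omitted here.
import random

P = 2**61 - 1
N_SERVERS, T = 5, 2                  # honest majority: T < N_SERVERS / 2

def share(secret: int, rng) -> list[int]:
    """Shamir-share `secret` with a random degree-T polynomial; server i gets f(i+1)."""
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(T)]
    return [sum(c * pow(i, e, P) for e, c in enumerate(coeffs)) % P
            for i in range(1, N_SERVERS + 1)]

def reconstruct(points: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate f(0) from (i, f(i)) pairs over F_P."""
    total = 0
    for i, yi in points:
        num, den = 1, 1
        for j, _ in points:
            if j != i:
                num = num * (-j) % P
                den = den * (i - j) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

rng = random.Random(0)
x, y = [3, 1, 4, 1, 5], [2, 7, 1, 8, 2]
x_shares = [share(v, rng) for v in x]      # x_shares[j][i] = server i's share of x_j
y_shares = [share(v, rng) for v in y]

# each server locally computes the sum of point-wise products of its shares
local = [sum(x_shares[j][i] * y_shares[j][i] for j in range(len(x))) % P
         for i in range(N_SERVERS)]

# the result is a degree-2T sharing of <x, y>; any 2T+1 shares reconstruct it
opened = reconstruct([(i + 1, local[i]) for i in range(2 * T + 1)])
assert opened == sum(a * b for a, b in zip(x, y))      # <x, y> = 35
```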
While the flexible capabilities of large language models (LLMs) allow them to answer a range of queries based on existing learned knowledge, information retrieval to augment generation is an important tool to allow LLMs to answer questions on information not included in pre-training data. Such private information is increasingly being generated in a wide array of distributed contexts by organizations and individuals. Performing such information retrieval using neural embeddings of queries and documents leaks information about queries and database content unless both are stored locally. We present Private Retrieval Augmented Generation (PRAG), an approach that uses multi-party computation (MPC) to securely transmit queries to a distributed set of servers containing a privately constructed database to return top-k and approximate top-k documents. This is a first-of-its-kind approach to dense information retrieval that ensures no server observes a client's query or can see the database content. The approach introduces a novel MPC-friendly protocol for inverted file approximate search (IVF) that allows for fast document search over distributed and private data with sublinear communication complexity. This work presents new avenues through which data for use in LLMs can be accessed and used without needing to centralize or forgo privacy.
Don't forget private retrieval: distributed private similarity search for large language models
[ { "figure_caption": "Figure 1 :1Figure 1: Overview of PRAG architecture using a distributed, secret-shared inverted file index (IVF), for retrieving document token vectors closely matching a privately-generated query vector in LLM-based question answering.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Time taken to retrieve top-k closest vectors in the database for end-to-end MPC exact search across increasing synthetic database sizes.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Information retrieval using IVF improves accuracy with increased n probe (top left) but increases query time as a larger proportion of the index ( n probe nc ) must be searched (bottom left). These retrieval approaches (both IVF and exact) scale favorably across multiple servers (right).", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Algorithm 1: Π IVFQuery Input: Public Parameters: n, k, n c , n probe , m, d Client: query x ∈ R d Server: Secret-shared inverted file clusters [IVF clusters]∈ R nc×d , Inverted file index values [IVF] ∈ R nc×m×d , Inverted file index indices [IVF indices] Send each server i its share [x] i ; 4 Servers computation: 5 in parallel Iterate over [cluster] ∈ [IVF clusters];", "figure_data": "∈ R nc×m Output: k-nearest-neighbors (approximate)1 Client computation:2 [x] := SS.Share(x);3 6", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "With our choice of parameters, n probe and d are constant, and", "figure_data": "Reducing the dataset obliviously costs O(n probe m d). m = N √ N , yielding O( √ N ) communication. This gives a candidate dataset that is approximately of size n probe √ N . Finally, we can compute the distances and exact top-k on this reduced dataset, but as it now only contains O( √ N ), √ N )). the overall communication of that step is O(k • log 2 ( Overall, we see that end-to-end the servers communicate O( √ N +log 2 ( √ N )) field elements while the client commu-nicates O(14 Return [database top-k indices] documents via private retrieval.", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" } ]
Guy Zyskind; Tobin South; Alex ' Sandy' Pentland
[ { "authors": "I Abraham; B Pinkas; A Yanai", "journal": "", "ref_id": "b0", "title": "Blinder-Scalable, Robust Anonymous Committed Broadcast", "year": "2020" }, { "authors": "Y Akimoto; K Fukuchi; Y Akimoto; J Sakuma", "journal": "IEEE", "ref_id": "b1", "title": "Privformer: Privacy-preserving transformer with mpc", "year": "2023" }, { "authors": "S Arora; P Lewis; A Fan; J Kahn; C ", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b2", "title": "Reasoning over Public and Private Data in Retrieval-Based Systems", "year": "2022" }, { "authors": "S Arora; C Ré; G Halevi; S Lindell; Y Rabin; T ", "journal": "Cryptology ePrint Archive", "ref_id": "b3", "title": "Can Foundation Models Help Us Achieve Perfect Secrecy? Asharov", "year": "2017" }, { "authors": "T Brown; B Mann; N Ryder; M Subbiah; J D Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell; S Agarwal; A Herbert-Voss; G Krueger; T Henighan; R Child; A Ramesh; D Ziegler; J Wu; C Winter; C Hesse; M Chen; E Sigler; M Litwin; S Gray; B Chess; J Clark; C Berner; S Mccandlish; A Radford; I Sutskever; D Amodei", "journal": "", "ref_id": "b4", "title": "Language Models are Few-Shot Learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b5", "title": "", "year": "" }, { "authors": "O Catrina; A Saxena", "journal": "Springer", "ref_id": "b6", "title": "Secure computation with fixed-point numbers", "year": "2010-01-25" }, { "authors": "H Chen; I Chillotti; Y Dong; O Poburinnaya; I Razenshteyn; M S Riazi", "journal": "", "ref_id": "b7", "title": "{SANNS}: Scaling up secure approximate {k-Nearest} neighbors search", "year": "2020" }, { "authors": "T Chen; H Bao; S Huang; L Dong; B Jiao; D Jiang; H Zhou; J Li; F Wei", "journal": "", "ref_id": "b8", "title": "The-x: Privacy-preserving transformer inference with homomorphic encryption", "year": "2022" }, { "authors": "K Chida; D Genkin; K Hamada; D Ikarashi; R Kikuchi; Y Lindell; A Nof", "journal": "Springer", "ref_id": "b9", "title": "Practical covertly secure MPC for dishonest majority-or: breaking the SPDZ limits", "year": "2013-09-09" }, { "authors": "I Damgård; J B Nielsen", "journal": "Springer", "ref_id": "b10", "title": "Scalable and unconditionally secure multiparty computation", "year": "2007" }, { "authors": "Y Dong; W Lu; Y Zheng; H Wu; D Zhao; J Tan; Z Huang; C Hong; T Wei; W.-C Cheng", "journal": "", "ref_id": "b11", "title": "PUMA: Secure Inference of LLaMA-7B in Five Minutes", "year": "2023" }, { "authors": "J Drean; M Gomez-Garcia; T Bourgeat; S Devadas", "journal": "", "ref_id": "b12", "title": "Citadel: Enclaves with Strong Microarchitectural Isolation and Secure Shared Memory on a Speculative Outof-Order Processor", "year": "2023" }, { "authors": "T Eloundou; S Manning; P Mishkin; D Rock", "journal": "", "ref_id": "b13", "title": "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models", "year": "2023" }, { "authors": "D Escudero; S Ghosh; M Keller; R Rachuri; P Scholl", "journal": "", "ref_id": "b14", "title": "Improved primitives for MPC over mixed arithmetic-binary circuits", "year": "2020-08-17" }, { "authors": "K Gupta; N Jawalkar; A Mukherjee; N Chandran; D Gupta; A Panwar; R Sharma", "journal": "", "ref_id": "b15", "title": "SIGMA: Secure GPT Inference with Function Secret Sharing", "year": "2023" }, { "authors": "M Ito; A Saito; T Nishizeki", "journal": "Electronics and Communications in Japan (Part III: Fundamental Electronic Science)", "ref_id": 
"b16", "title": "Secret sharing scheme realizing general access structure", "year": "1989" }, { "authors": "G Jagannathan; R N Wright", "journal": "", "ref_id": "b17", "title": "Privacypreserving distributed k-means clustering over arbitrarily partitioned data", "year": "2005" }, { "authors": "H Jégou; M Douze; C Schmid", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b18", "title": "Product Quantization for Nearest Neighbor Search", "year": "2011" }, { "authors": "J Johnson; M Douze; H Jégou", "journal": "IEEE Transactions on Big Data", "ref_id": "b19", "title": "Billion-Scale Similarity Search with GPUs", "year": "2017" }, { "authors": "P Kairouz; H B Mcmahan; B Avent; A Bellet; M Bennis; A N Bhagoji; K Bonawitz; Z B Charles; G Cormode; R Cummings; R G L D'oliveira; S Y E Rouayheb; D Evans; J Gardner; Z Garrett; A Gascón; B Ghazi; P B Gibbons; M Gruteser; Z Harchaoui; C He; L He; Z Huo; B Hutchinson; J Hsu; M Jaggi; T Javidi; G Joshi; M Khodak; J Konecný; A Korolova; F Koushanfar; O Koyejo; T Lepoint; Y Liu; P Mittal; M Mohri; R Nock; A Özgür; R Pagh; M Raykova; H Qi; D Ramage; R Raskar; D X Song; W Song; S U Stich; Z Sun; A T Suresh; F Tramèr; P Vepakomma; J Wang; L Xiong; Z Xu; Q Yang; F X Yu; H Yu; S Zhao", "journal": "Found. Trends Mach. Learn", "ref_id": "b20", "title": "Advances and Open Problems in Federated Learning", "year": "2019" }, { "authors": "V Karpukhin; B Oguz; S Min; P Lewis; L Y Wu; S Edunov; D Chen; W Yih", "journal": "", "ref_id": "b21", "title": "Dense Passage Retrieval for Open-Domain Question Answering", "year": "2020" }, { "authors": "B Knott; S Venkataraman; A Hannun; S Sengupta; M Ibrahim; L Van Der Maaten", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "Crypten: Secure multi-party computation meets machine learning", "year": "2021" }, { "authors": "T Kojima; S S Gu; M Reid; Y Matsuo; Y Iwasawa", "journal": "", "ref_id": "b23", "title": "Large Language Models are Zero-Shot Reasoners", "year": "2022" }, { "authors": "P Lewis; E Perez; A Piktus; F Petroni; V Karpukhin; N Goyal; H Kuttler; M Lewis; W Tau Yih; T Rocktäschel; S Riedel; D Kiela", "journal": "", "ref_id": "b24", "title": "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks", "year": "2020" }, { "authors": "D Li; R Shao; H Wang; H Guo; E P Xing; H Zhang", "journal": "", "ref_id": "b25", "title": "MPCFormer: fast, performant and private Transformer inference with MPC", "year": "2022" }, { "authors": "Y Mao; P He; X Liu; Y Shen; J Gao; J Han; W Chen", "journal": "", "ref_id": "b26", "title": "Generation-Augmented Retrieval for Open-Domain Question Answering", "year": "2020" }, { "authors": "F Mo; A S Shamsabadi; K Katevas; S Demetriou; I Leontiadis; A Cavallaro; H Haddadi", "journal": "Applications, and Services. 
OpenAI", "ref_id": "b27", "title": "Dark-neTZ: towards model privacy at the edge using trusted execution environments", "year": "2020" }, { "authors": "E Orsini; N P Smart; F Vercauteren", "journal": "Springer", "ref_id": "b28", "title": "Overdrive2k: efficient secure MPC over from somewhat homomorphic encryption", "year": "2020" }, { "authors": "S Papadopoulos; S Bakiras; D Papadias", "journal": "Proceedings of the VLDB Endowment", "ref_id": "b29", "title": "Nearest neighbor search with strong location privacy", "year": "2010" }, { "authors": "N Reimers; I Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks", "year": "2019" }, { "authors": "M S Riazi; C Weinert; O Tkachenko; E M Songhori; T Schneider; F Koushanfar", "journal": "", "ref_id": "b31", "title": "Chameleon: A hybrid secure computation framework for machine learning applications", "year": "2018" }, { "authors": "P Schoppmann; A Gascón; B Balle", "journal": "IACR Cryptol. ePrint Arch", "ref_id": "b32", "title": "Private Nearest Neighbors Classification in Federated Databases", "year": "2018" }, { "authors": "S Servan-Schreiber; S Langowski; S Devadas", "journal": "IEEE", "ref_id": "b33", "title": "Private approximate nearest neighbor search with sublinear communication", "year": "2022" }, { "authors": "A Shamir", "journal": "Communications of the ACM", "ref_id": "b34", "title": "How to share a secret", "year": "1979" }, { "authors": "H Shaul; D Feldman; D Rus", "journal": "", "ref_id": "b35", "title": "Scalable secure computation of statistical functions with applications to knearest neighbors", "year": "2018" }, { "authors": "H Shaul; D Feldman; D Rus", "journal": "", "ref_id": "b36", "title": "Secure k-ish Nearest Neighbors Classifier", "year": "2018" }, { "authors": "E M Songhori; S U Hussain; A.-R Sadeghi; F Koushanfar", "journal": "", "ref_id": "b37", "title": "Compacting privacy-preserving knearest neighbor search using logic synthesis", "year": "2015" }, { "authors": "T South; G Zuskind; R Mahari; T Hardjono", "journal": "", "ref_id": "b38", "title": "Secure Community Transformers: Private Pooled Data for LLMs", "year": "2023" }, { "authors": "N Thakur; N Reimers; A Rücklé; A Srivastava; I Gurevych", "journal": "", "ref_id": "b39", "title": "BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models", "year": "2021" }, { "authors": "J Vaidya; C Clifton", "journal": "", "ref_id": "b40", "title": "Privacy-preserving kmeans clustering over vertically partitioned data", "year": "2003" }, { "authors": "S Wang; X Ding; R H Deng; F Bao", "journal": "IACR Cryptology ePrint Archive", "ref_id": "b41", "title": "Private Information Retrieval Using Trusted Hardware", "year": "2006" }, { "authors": "Y Yang; X Ding; R H Deng; F Bao", "journal": "", "ref_id": "b42", "title": "An Efficient PIR Construction Using Trusted Hardware", "year": "2008" }, { "authors": "M Zuber; R Sirdey", "journal": "Proc. Priv. Enhancing Technol", "ref_id": "b43", "title": "Efficient homomorphic evaluation of k-NN classifiers", "year": "2021" } ]
[ { "formula_coordinates": [ 5, 55.49, 340.82, 216.73, 36.53 ], "formula_id": "formula_0", "formula_text": "[centroid distance i ] := SumProd([x], [cluster]); 7 [centroid distances] := {[centroid distance 1 ] (t) , . . . , [centroid distance nc ] (t" }, { "formula_coordinates": [ 5, 368.24, 211.87, 80.46, 17.17 ], "formula_id": "formula_1", "formula_text": "√ N • √ N ) = O(N )" } ]
2024-03-30
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b2", "b8", "b4", "b5", "b9" ], "table_ref": [], "text": "Robustly evaluating deep image classifiers is challenging, as existing standard test sets such as ImageNet (Deng et al., 2009) often feature simpler image compositions (Recht et al., 2019) and \"spurious features\" (Geirhos et al., 2019), which can lead to an overestimate of model performance.\nTo address this issue, Hendrycks et al. (2021) introduce \"Natural Adversarial Examples\" (NAEs), where they employ adversarial filtration over extensive real images to pinpoint challenging natural images that deceive classifiers. NAEs are valuable in assessing worst-case performance and uncovering model limitations. Their approach to obtaining NAEs, however, is limited by its passive nature and lack of control over the selection of specific types of challenging examples, thereby restricting the ability to fully explore classifier weaknesses in diverse scenarios.\nIn this work, we propose to actively synthesize NAEs with a controlled optimization process. Leveraging a class-conditional generative model, particularly Stable Diffusion (Rombach et al., 2021), we optimize the class token embedding in the condition embedding space. This process is guided by the gradients of the classification loss from the target image classifier to ensure the adversarial nature of the generated examples. Our method, termed SD-NAE (Stable Diffusion for Natural Adversarial Example), not only achieves a non-trivial fooling rate against an ImageNet classifier but also offers greater flexibility and control compared to previous methods, highlighting SD-NAE's potential as a tool for evaluating and enhancing model robustness." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "METHODOLOGY", "publication_ref": [ "b13" ], "table_ref": [], "text": "We introduce the SD-NAE method (Figure 1), which is motivated by the concept of NAEs and uses Stable Diffusion to approximate natural-looking images (see Appendix A for a background introduction). Our exploration focuses on how adversarial optimization can enhance this approach, analogous to the creation of pixel-perturbed adversarial examples (Szegedy et al., 2013). The core of SD-NAE lies in the strategic optimization of the class-relevant token embedding to trick the classifier into misclassifying the generated image. Consider, for instance, an image of a cat generated by Stable Diffusion G following the text condition \"A high-quality image of a cat\". Initially, the image can be correctly identified as a cat by the classifier F, given Stable Diffusion's accurate generation capability. However, through subtle alterations of the token embedding of \"cat\", we expect to induce misclassification (e.g., towards a target class y, y ≠ \"cat\") over the generated image while maintaining its ground-truth as \"cat\". This process is governed by the optimization objective:\nmin_{êk token} L(F(G(z; e text)), y) + λ • R(êk token, e k token), where e text = E(e 0 token, ..., êk token, ..., e K-1 token).    (1)\nWe mark the notation in Figure 1, while leaving a detailed discussion of Equation (1) to Appendix B. In general, the optimization variable êk token is the class-relevant token embedding (corresponding to \"cat\" in our example). 
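Before unpacking the two terms of Equation (1), the condensed PyTorch sketch below shows how such an optimization loop can be organized. The helpers encode_prompt_with_token and generate_image are hypothetical placeholders: the former splices the perturbed class-token embedding into the prompt and runs the text encoder E, the latter performs a differentiable pass through the frozen Stable Diffusion sampler G(z; e_text). Hyperparameters are illustrative, and this is not the authors' released code.

```python
# Sketch of the SD-NAE loop. `encode_prompt_with_token` and `generate_image`
# are hypothetical helpers standing in for the real implementation; both the
# classifier F and the diffusion model stay frozen, and only the class-token
# embedding is optimized.
import torch
import torch.nn.functional as F_loss

def sd_nae(pipe, classifier, prompt, class_token_id, true_label, latent,
           steps=20, lr=1e-3, lam=0.0, target_label=None):
    emb_layer = pipe.text_encoder.get_input_embeddings()
    e_orig = emb_layer.weight[class_token_id].detach().clone()   # e^k_token
    e_pert = e_orig.clone().requires_grad_(True)                  # perturbed token embedding
    opt = torch.optim.Adam([e_pert], lr=lr)

    for _ in range(steps):
        e_text = encode_prompt_with_token(pipe, prompt, class_token_id, e_pert)
        image = generate_image(pipe, e_text, latent)              # G(z; e_text)
        logits = classifier(image)
        if target_label is not None:                              # targeted misclassification
            adv = F_loss.cross_entropy(logits, target_label)
        else:                                                     # untargeted: maximize true-class loss
            adv = -F_loss.cross_entropy(logits, true_label)
        reg = (e_pert - e_orig).norm()                            # keep the perturbation moderate
        loss = adv + lam * reg
        opt.zero_grad()
        loss.backward()
        opt.step()
        if logits.argmax(dim=-1).item() != true_label.item():
            yield image.detach()                                  # candidate NAE
```

The two loss terms in this loop correspond to the two terms of Equation (1), which are discussed next.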
The first term encourages the produced image to be adversarial, while the second term makes sure that the perturbation on êk token is only moderate, retaining the natural appearance of the generated image and preserving its ground-truth label as \"cat\"." }, { "figure_ref": [ "fig_1" ], "heading": "EXPERIMENT", "publication_ref": [], "table_ref": [], "text": "We evaluate SD-NAE using a carefully designed experiment. Please see details in Appendix C. Essentially, we focus on 10 categories of ImageNet whose semantics is clear. We take them as the ground-truth and generate 20 samples using SD-NAE for each of the 10 classes, resulting in 20 * 10 = 200 total optimization processes. An optimization is deemed successful if the image at any optimization step gets misclassified by the target classifier, which is a ResNet-50 pretrained on ImageNet. Importantly, we make sure that all initialization images (prior to optimization by SD-NAE) are correctly classified. In such a setting, our SD-NAE achieves a noteworthy success rate of 43.5% (which is actually a lower bound; see Appendix C), demonstrating its capability to effectively generate NAEs. Furthermore, as can be seen in Figure 2, the images generated by SD-NAE display variations in color, background, view angle, and style, underscoring its potential as a tool for examining model generalization among various covariate shifts. For comparison with the prior work of Song et al. (2018), please see Appendix D." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "SD-NAE, by leveraging Stable Diffusion, effectively generates Natural Adversarial Examples and demonstrates its significant potential in the field of robustness research. As deep learning models continue to evolve, we believe that SD-NAE presents a novel approach for evaluating and understanding these complex systems, thereby emphasizing its profound role in future research. We discuss related works and limitations of SD-NAE in Appendix A and Appendix E, respectively." }, { "figure_ref": [], "heading": "APPENDIX A BACKGROUND AND RELATED WORK", "publication_ref": [], "table_ref": [], "text": "In this section, we provide background information and discuss related works to facilitate the presentation of our method. We first introduce the definition of NAEs, then distinguish our study from prior works on generating NAEs, and finally give an overview of the functionality of Stable Diffusion." }, { "figure_ref": [], "heading": "A.1 NATURAL ADVERSARIAL EXAMPLES", "publication_ref": [ "b5", "b13" ], "table_ref": [], "text": "Formally, NAEs are defined as a set of samples w.r.t. a target classifier F (Hendrycks et al., 2021;Song et al., 2018):\nA ≜ {x ∈ S|O(x) ̸ = F (x)}.(2)\nHere, S contains all images that naturally arise in the real world and look realistic to humans. O is an oracle that yields the ground-truth semantic category of the image and relies on human evaluation.\nNotice that NAEs are less restricted than pixel-perturbed adversarial examples (often referred to as \"adversarial examples\") (Szegedy et al., 2013), which are samples artificially crafted by adding minor perturbations to image pixels. Both NAEs and adversarial examples (AEs) can expose the vulnerability of a given classifier. However, since AEs are artificial rather than natural, they are mostly studied in the security context where there is assumed to be an attacker that intentionally attempts to compromise a model. 
In contrast, studies of NAEs, including ours, focus on a broader setting where the samples naturally occur within the environment but are misclassified by the classifier." }, { "figure_ref": [], "heading": "A.2 GENERATING NAES", "publication_ref": [ "b5", "b1", "b1", "b3", "b6" ], "table_ref": [], "text": "As previously mentioned, we contend that the passive filtering of NAEs from real images, as demonstrated by Hendrycks et al. (2021), lacks flexibility; in contrast, we utilize a generative model to synthesize NAEs. In this regard, our work is closely related to but also has essential distinctions with the work by Song et al. (2018). While they optimize the latent of a class-conditional GAN with a fixed condition, we propose perturbing the condition while keeping the latent fixed within Stable Diffusion. In fact, it is later found that GAN can be sensitive to the latents, and generated images may be of low quality when the optimized latents land outside the well-defined support (Dai et al., 2023). We will show that applying their concept to Stable Diffusion is less effective in producing NAEs compared to our method.\nBuilding upon this comparison, it is pertinent to discuss a concurrent work by Dai et al. (2023), who also apply the diffusion model to generate NAEs. However, their method is to enforce classifier guidance (Dhariwal & Nichol, 2021) to be adversarial, which requires sophisticated modification to the default classifier-free guidance sampling (Ho & Salimans, 2021) and may need extra care to adapt to different samplers. In contrast, our method can readily generalize to various diffusion models as it only perturbs the condition embedding without interfering with the sampling process." }, { "figure_ref": [ "fig_2" ], "heading": "A.3 STABLE DIFFUSION", "publication_ref": [ "b9", "b14", "b14" ], "table_ref": [], "text": "Stable Diffusion represents a family of latent diffusion models (Rombach et al., 2021) with the capability of conditional generation. Using G to represent the Stable Diffusion model, the formulation that best describes its functionality in the context of our work is\nx = G(z; e text ),(3)\nwhere x is the synthesized image, z is a (random) latent vector, and e text is the text embedding which serves as the condition for the generation. More specifically, e text is the output of a transformer-based text encoder E where the input is the token embedding sequence e 0:K-1 token corresponding to the raw text description after tokenization, i.e., e text = E(e 0 token , e 1 token , ..., e K-1 token ).\n(4)\nwhere K denotes the maximum padded length specified by the encoder E. B EXPANDED DISCUSSION ON SD-NAE First, we provide a more detailed explanation of our optimization objective in Equation ( 1). Let us explain the variables with a concrete example for clarity. Suppose we want to generate an image of a cat with a text condition being \"A high-quality image of a cat\" (other prompts can also work here as long as it contains the keyword \"cat\"), such that the image is misclassified by F as some category other than \"cat\". Equivalently and more formally, the goal is O(G(z; e text )) = \"cat\" ̸ = F (G(z; e text )).\nThe optimization variable, denoted as êk token in Equation (1), corresponds to the token embedding of the word \"cat\" We initialize êk token with the original token embedding e k token and optimize it with two terms. The first term aims to encourage the generated sample G(z; e text ) to deceive the classifier F into making an incorrect prediction. 
Specifically, if we want to induce a targeted misclassification towards a specific class y (y ̸ = \"cat\"), we can simply use the cross-entropy loss as L. For untargeted misclassification, one can set y to \"cat\" and use negative cross-entropy as L (i.e., maximizing the classification loss of class \"cat\").\nThe second term serves to regularize êk token , ensuring that the ground truth O(G(z; e text )) remains unchanged during optimization; otherwise, the unintended equality O(G(z; e text )) = F (G(z; e text )) may occur, contradicting the definition of NAEs. To achieve this, we can let the regularization R be a distance metric (e.g., Euclidean distance or cosine similarity) to enforce êk token to stay in the vicinity of its unmodified counterpart e k token . In other words, we are only inducing moderate perturbation to the token embedding, which intuitively helps O(G(z; e text )) remain unchanged (e.g., being \"cat\" all the time in our example). We empirically justify this intuition and design in Figure 3. The λ that accompanies the second term is just a weighting factor. It is worth noting, however, that in practice, when the number of optimization steps is small, it often suffices to set λ = 0 since the overall magnitude of the perturbation (i.e., the distance between êk token and e k token ) is bounded. Why token embedding? We next discuss why we choose the token embedding e k token instead of the latent z or the text embedding e text as the optimization variable here. Empirically, we find that perturbing the latent or text embedding is significantly less effective and efficient in generating NAEs compared to perturbing the token embedding, which has fooling rates of 10%, 20%, and 43.5% respectively (refer to Appendix C for the definition of fooling rate and experiment setup). Our hypothesis for this observation is as follows. Firstly, in a diffusion model, the latent undergoes an iterative multi-step reverse diffusion process, potentially impeding the gradient flow from the classifier back to the latent. Secondly, as the text embedding integrates all tokens, perturbations on the text embedding might disperse across all tokens. Intuitively, perturbing some class-irrelevant tokens (e.g., the meaningless padding tokens) is not likely to induce significant change to the image content, meaning that there is less chance the generated sample will fool the classifier. In contrast, perturbing the class-relevant token (i.e., e k token ) directly targets the semantic-related content of the image, which we will further demonstrate to be effective with the following experiment results.\nOther application scenarios. Lastly, notice that SD-NAE can be easily adapted to create other types of NAEs beyond in-distribution (ID) misclassification. For instance, a deployed model in the real world will inevitably encounter Out-of-Distribution (OOD) samples, which are samples not belonging to any known category (Zhang et al., 2023a;b), necessitating an accurate OOD detector to flag unknown inputs. With SD-NAE, one can generate NAEs that fool the OOD detector into ID → OOD or OOD → ID misclassification by playing with the loss L in Equation ( 1). Specifically, notice that an OOD detector often operates by thresholding the maximum softmax probability. To generate an OOD image predicted as ID by the detector, we can employ cross-entropy loss as L and use any one-hot label as y, thereby encouraging the classifier to make a confident prediction on the synthesized image. 
Reversely, the ID → OOD misclassification is also achievable if we minimize the maximum classification probability by minimizing the cross-entropy between the softmax probability distribution and uniform distribution (Zhang et al., 2023a). Overall, SD-NAE demonstrates flexibility in producing NAEs for various purposes." }, { "figure_ref": [], "heading": "C EXPERIMENT DETAILS", "publication_ref": [], "table_ref": [], "text": "Models and setup. The target classifier, which we aim for the NAEs to fool is an ImageNetpretrained ResNet-50 hosted by Microsoft on Hugging Face (model tag: \"microsoft/resnet-50\"). We utilize a nano version of Stable Diffusion, finetuned from the official 2.1 release (model tag: \"bguisard/stable-diffusion-nano-2-1\"). We generate 128x128 images to ensure the optimization is manageable with a single 24GB GPU; these images are then resized to 224x224 before being fed to the classifier, matching its default resolution. DDIM sampler with 20 sampling steps is used for Stable Diffusion. The guidance scale is set to the default value of 7.5. In the optimization of SD-NAE, we use Adam as the optimizer with a learning rate of 0.001. The number of iterations or gradient steps is 20.\nWorkflow and metric. We use fooling rate as the quantitative metric for SD-NAE. To ensure a fair and meaningful evaluation, we first do several careful preprocessing as follows. We start with 100 random classes from ImageNet. For each class, we generate 100 samples from Stable Diffusion with random latent and measure the accuracy of the classifier on those samples. Subsequently, we remove the classes whose accuracy is lower than 90%, which leaves us 25 classes. After that, we manually pick ten classes whose semantics are clear and unambiguous to our human evaluators (the authors of this work) to make it easy for later human inspection (to get the oracle prediction O(x)). The selected classes are broccoli, candle, forklift, fountain, gorilla, strawberry, hamster, jellyfish, lion, and microphone. Finally, for each of the ten classes, we prepare 20 different random latent vectors z, with which the generated image G(z; e text ) (without SD-NAE optimization yet) is correctly classified by the ResNet-50. This step ensures that the initialized sample is not already an NAE, allowing us to isolate the effect and confidently attribute the NAEs to our SD-NAE optimization rather than to other factors inherent in the generative model.\nAfter the preprocessing, we perform 20 * 10 = 200 optimization processes, each corresponding to one class and one prepared latent. In each multi-step optimization process, if any one of the sample x generated at a certain step satisfies O(x) = current desired class ̸ = F (x), we count this optimization as success in fooling the classifier. The final fooling rate is calculated as the ratio of successful deceptions to the total number of optimizations, amounting to 200 in our study. It is noteworthy that we adopt a stricter definition of NAE than that in Equation (2) to explicitly demonstrate the efficacy of SD-NAE. In practice, one does not need to enforce O(x) = current desired class: Even if O(x) deviates from the expected class, e.g., the synthesized image is not a broccoli while the current text prompt is \"An image of a broccoli\", x is still a valid NAE as long as O(x) ̸ = F (x). Therefore, the fooling rate reported in our experiment represents only a lower bound of the actual fooling rate.\nResult. 
As discussed in the main text, SD-NAE achieves a non-trivial 43.5% fooling rate/success rate. The generated NAEs are visualized in Figure 4." }, { "figure_ref": [ "fig_3" ], "heading": "D COMPARISON WITH PRIOR WORK", "publication_ref": [ "b0", "b1" ], "table_ref": [], "text": "Here, we compare our SD-NAE with the method proposed by Song et al. (2018). As mentioned in Appendix A.2, they perturb the latent vector of class-conditional GANs to curate NAEs, while our design is to optimize the conditional token embedding of Stable Diffusion models (and keep the latent fixed). In Appendix C, we have shown by empirical results that directly applying the previous method (i.e., updating the latent of Stable Diffusion) yields a much worse attack success rate/fooling rate, indirectly justifying our design. Here, we perform a straight comparison between SD-NAE and the work of Song et al. (2018).\nSetup. We use a class-conditional BigGAN (Brock et al., 2019) pre-trained on ImageNet, which to our knowledge is one of the most powerful GANs for ImageNet. The experiment workflow remains the same as with our method: Denoting the GAN as G and the target ResNet-50 classifier we want to attack as F , for each image category we prepare 20 randomly-initialized latent vectors z where the prediction on each generated image F (G(z)) matches the ground-truth or the oracle prediction O(G(z)). Then using the same loss function as in Song et al. (2018), we optimize the latent vector of the GAN to generate NAEs. We try our best to vary the hyperparameters and report the best result that we observe. It is also worth noting that in the original work of Song et al. (2018), they only did experiments on small-scale and simple datasets like MNIST, SVHN, and CelebA, while here we are looking at ImageNet with images consisting complex, real-world objects/scenes.\nResult. The best fooling rate/attack success rate of Song et al. (2018) is 14.0%, which is much lower than ours (43.5%). Specifically, in some cases the optimized image does not change much from the initialization and thus fails to deceive the classifier. In other cases, the optimization goes wild and leads to nonsensical images. The latter case is in line with the finding that GANs can be sensitive to perturbed latents (Dai et al., 2023), since they might be \"out-of-distribution\" w.r.t. the well-regularized latent distribution that the model sees during the training. We visualize the synthesized samples in Figure 5. Qualitatively, SD-NAE results in higher-quality samples than the compared method." }, { "figure_ref": [], "heading": "E LIMITATIONS", "publication_ref": [ "b7", "b10" ], "table_ref": [], "text": "Since SD-NAE is based upon Stable Diffusion, it inherits a few limitations from its underlying framework. First, the computational cost of SD-NAE could be high and the optimization could be slow. For instance, generating a single 128x128 natural adversarial example with SD-NAE under our experiment setting (i.e., 20 steps for diffusion sampling and 20 steps for SD-NAE's optimization) requires approximately 22GB of GPU memory and takes about 1 minute. However, we note that both the memory footprint and time cost can be significantly reduced if sampling-efficient diffusion models are used, e.g., Latent Consistency Models (Luo et al., 2023) and SD-turbo (Sauer et al., 2023) which only require 1 to 4 diffusion sampling steps. Meanwhile, empirically we find that SD-NAE Figure 4: Examples generated by SD-NAE. 
does not really require as many as 20 optimization steps to succeed: in our experiment, the average number of steps for finding the first adversarial example is around 10 (9.66).
Second, in some cases, we find that the generated image is absurd and diverges significantly from a natural appearance. Such cases can arise either inherently from Stable Diffusion or from our SD-NAE optimization process. Taking the category broccoli as an example, out of the 100 initialization images (generated by Stable Diffusion with random latents), 8 exhibit a weird, unnatural appearance (an 8% "failure" rate). Then, during SD-NAE optimization, 24 out of 400 images fail to present the normal appearance of a broccoli (a 6% "failure" rate). However, we remark that having unnatural images at a few steps does not mean that SD-NAE is compromised; instead, it can be considered a success as long as at least one natural-looking adversarial example is produced during the multi-step optimization, which is typically the case in our experiment.
Figure 4: Examples generated by SD-NAE. From top to bottom, the ground-truth classes are broccoli, candle, forklift, fountain, gorilla, strawberry, hamster, jellyfish, lion, and microphone, respectively. In each pair, the left image is generated with the initialized token embedding; importantly, we make sure that all left images are correctly classified by the ImageNet ResNet-50 model in the first place. The right images are the result of SD-NAE optimization using the corresponding left image as initialization, and the classifier's prediction is marked in red above the image." }, { "figure_ref": [], "heading": "URM STATEMENT", "publication_ref": [], "table_ref": [], "text": "The authors acknowledge that at least one key author of this work meets the URM criteria of the ICLR 2024 Tiny Papers Track." } ]
Natural Adversarial Examples (NAEs), images arising naturally from the environment and capable of deceiving classifiers, are instrumental in robustly evaluating and identifying vulnerabilities in trained models. In this work, unlike prior works that passively collect NAEs from real images, we propose to actively synthesize NAEs using the state-of-the-art Stable Diffusion. Specifically, our method formulates a controlled optimization process, where we perturb the token embedding that corresponds to a specified class to generate NAEs. This generation process is guided by the gradient of loss from the target classifier, ensuring that the created image closely mimics the ground-truth class yet fools the classifier. Named SD-NAE (Stable Diffusion for Natural Adversarial Examples), our innovative method is effective in producing valid and useful NAEs, which is demonstrated through a meticulously designed experiment.
SD-NAE: GENERATING NATURAL ADVERSARIAL EXAMPLES WITH STABLE DIFFUSION
[ { "figure_caption": "Figure 1 :1Figure 1: Guided by the loss gradient backpropagated from the classifier, SD-NAE generates NAEs by optimizing only the class-related token embedding, while keeping all models frozen. The letters in the parentheses are notations used in Equation (1).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: NAEs generated by SD-NAE. In each pair, the left initialization image is correctly classified by the model, yet the right one optimized by our method gets misclassified with the wrong prediction marked in red. See more samples in Figure 4.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Empirical evidence that constraining the magnitude of token embedding perturbation can help preserve the image ground-truth. From left to right of each row, we move the initialized class token embedding along a random yet fixed Rademacher vector (i.e., each element has equal probability of being +1 or -1) with increasing magnitude. The bottom axis denotes the relative magnitude of the perturbation, and the real magnitude has a factor of 1e-3. It can be noticed that the image semantic is well-preserved when the perturbation is small.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Examples generated by Song et al. (2018) using GAN. Following our experiment setup, each initialized image is correctly classified by the target ResNet-50 classifier. The first row showsexamples that we count as successfully generated NAEs, whereas the second row shows failure cases where the optimized images exhibit unnatural looking. Note that some successful NAEs here actually do not look that natural, and the quality in general lags behind those generated by SD-NAE (Figure4). Still, despite counting them as success, we observe a mere 14.0% success rate compared with 43.5% achieved by SD-NAE.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" } ]
Yueqian Lin; Jingyang Zhang; Yiran Chen; Hai Li
[ { "authors": "Andrew Brock; Jeff Donahue; Karen Simonyan", "journal": "", "ref_id": "b0", "title": "Large scale GAN training for high fidelity natural image synthesis", "year": "2019" }, { "authors": "Xuelong Dai; Kaisheng Liang; Bin Xiao", "journal": "", "ref_id": "b1", "title": "Advdiff: Generating unrestricted adversarial examples using diffusion models", "year": "2023" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "IEEE", "ref_id": "b2", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "Advances in neural information processing systems", "ref_id": "b3", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Robert Geirhos; Patricia Rubisch; Claudio Michaelis; Matthias Bethge; Felix A Wichmann; Wieland Brendel", "journal": "", "ref_id": "b4", "title": "Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness", "year": "2019" }, { "authors": "Dan Hendrycks; Kevin Zhao; Steven Basart; Jacob Steinhardt; Dawn Song", "journal": "", "ref_id": "b5", "title": "Natural adversarial examples", "year": "2021-06" }, { "authors": "Jonathan Ho; Tim Salimans", "journal": "", "ref_id": "b6", "title": "Classifier-free diffusion guidance", "year": "2021" }, { "authors": "Simian Luo; Yiqin Tan; Longbo Huang; Jian Li; Hang Zhao", "journal": "", "ref_id": "b7", "title": "Latent consistency models: Synthesizing high-resolution images with few-step inference", "year": "2023" }, { "authors": "Benjamin Recht; Rebecca Roelofs; Ludwig Schmidt; Vaishaal Shankar", "journal": "PMLR", "ref_id": "b8", "title": "Do imagenet classifiers generalize to imagenet?", "year": "2019" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b9", "title": "Highresolution image synthesis with latent diffusion models", "year": "2021" }, { "authors": "Axel Sauer; Dominik Lorenz; Andreas Blattmann; Robin Rombach", "journal": "", "ref_id": "b10", "title": "Adversarial diffusion distillation", "year": "2023" }, { "authors": "Yang Song; Rui Shu; Nate Kushman; Stefano Ermon", "journal": "", "ref_id": "b11", "title": "Constructing unrestricted adversarial examples with generative models", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b12", "title": "", "year": "2018" }, { "authors": "Christian Szegedy; Wojciech Zaremba; Ilya Sutskever; Joan Bruna; Dumitru Erhan; Ian Goodfellow; Rob Fergus", "journal": "", "ref_id": "b13", "title": "Intriguing properties of neural networks", "year": "2013" }, { "authors": "Jingyang Zhang; Nathan Inkawhich; Randolph Linderman; Yiran Chen; Hai Li", "journal": "", "ref_id": "b14", "title": "Mixture outlier exposure: Towards out-of-distribution detection in fine-grained environments", "year": "2023-01" }, { "authors": "Jingyang Zhang; Jingkang Yang; Pengyun Wang; Haoqi Wang; Yueqian Lin; Haoran Zhang; Yiyou Sun; Xuefeng Du; Kaiyang Zhou; Wayne Zhang", "journal": "", "ref_id": "b15", "title": "Openood v1. 5: Enhanced benchmark for out-of-distribution detection", "year": "2023" } ]
[ { "formula_coordinates": [ 2, 116.11, 385.58, 187.66, 19.82 ], "formula_id": "formula_0", "formula_text": "min êk token L(F (G(z; e text )), y) + λ • R(ê k token , e k token )" }, { "formula_coordinates": [ 4, 246.08, 229.07, 257.92, 9.3 ], "formula_id": "formula_1", "formula_text": "A ≜ {x ∈ S|O(x) ̸ = F (x)}.(2)" }, { "formula_coordinates": [ 4, 273.53, 631.84, 230.47, 9.84 ], "formula_id": "formula_2", "formula_text": "x = G(z; e text ),(3)" } ]
10.1007/s10956-023-10042-3
2023-12-24
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b36", "b16", "b37", "b9", "b0", "b11", "b8", "b36", "b4", "b20", "b20", "b22", "b37", "b16", "b19", "b21", "b23" ], "table_ref": [], "text": "Science centers \"modeling the world,\" and thus engaging students in scientific modeling practices is critical to preparing students' science competence (National Research Council, 2012;Zhai, Haudek, & Ma, 2022). During modeling practices, students can draw models and visualize their ideas and explanations of phenomena, providing an avenue for students with diverse language proficiency to learn science in an equitable manner. These models are particularly valuable for delving into students' thinking, given the enriched information encapsulated in these models. However, teachers find them time-and effort-consuming to score, which presents challenges to introducing modeling practices in classrooms. Therefore, researchers have leveraged supervised machine learning to develop deep learning algorithmic models to score student responses (Lee, Lee, & Hong, 2023;Zhai, He, & Krajcik, 2022).\nUsing this approach, human experts have to assign scores to student-drawn models using well-developed scoring rubrics. Usually, more than two human experts are needed, and their consensus and interrater reliability are considered essential to reduce rater errors and potential bias. Hogan and Murphy (2007) suggest additional considerations such as scoring one item at a time, using a scoring rubric or ideal answer, and reading samples before scoring others. The human-scored data would be used to train machines to develop scoring models. Though researchers reported satisfied scoring accuracy, the entire algorithmic model development is usually costly and timeconsuming. It is thus urgent to develop approaches to reduce the efforts of developing scoring models.\nWith the recent development of GPT-4V, OpenAI (2023) provides opportunities to employ prompt engineering to reduce machine training time and costs. GPT-4V takes an image and a natural-language prompt about the image as input and can produce a natural-language answer as the output (Antol et al., 2015). The capability of providing explainable answers in natural language to users' questions about the image is considered a milestone development for visual question answering (Joshi, Walambe, & Kotecha, 2021). This technology is deemed useful to science education researchers and practitioners, where constructed-response items that assess studentdrawn models are frequently used. However, until now, there has been scarce research on image processing or visual question answering for automatic scoring in the science education field.\nGiven this research gap, we developed a method NERIF (Notation-Enhanced Rubric Instruction for Few-shot Learning) employing instructional note and rubrics to prompt GPT-4V to score students' drawn models for science phenomena. This study answers two research questions: 1) How accurate is GPT-4V in automatically scoring student-drawn models? 2) How does GPT-4V automatically assign scores to student-drawn models?\n2 Automatic Scoring of Scientific Modeling Practices Scientific modeling is a cornerstone of science education as it serves as a bridge between mental models and real-world phenomena. Engaging students in scientific modeling practices fosters a deeper understanding of the nature of science as an iterative and predictive process during which students use knowledge to explain phenomena (Hestenes, 2013). 
Models provide a framework for students to conceptualize phenomena, deploy scientific knowledge, and represent ideas and explanations (Zhai, Haudek, & Ma, 2022). This active involvement in the construction, testing, revision, and deploying of models also supports the development of critical thinking and problem-solving skills. Moreover, modeling equips students with the ability to communicate and justify their reasoning, reflecting the collaborative and communicative nature of scientific inquiry (Clement, 2008). By embodying the practices of scientists, students gain a more authentic understanding of the nature of science, thus enhancing their ability to apply scientific knowledge to novel situations, a skill increasingly critical in our rapidly evolving world.\nDespite the importance of modeling practices in science learning (National Research Council, 2012), drawn models are rarely employed in science classrooms for assessment practices, partially because drawn models are challenging to score, and thus students rarely receive timely feedback for their specific models. One solution to this problem is employing automated scoring technologies to grade students' scientific modeling. Recent studies have leveraged machine learning (ML) techniques for the automatic evaluation of student-generated models. Notably, Smith et al. (2018) focused on scoring models constructed by middle schoolers concerning magnetic concepts. Their methodology involved the use of pre-determined elements within a structured digital environment, allowing students to manipulate these elements spatially. They utilized a topology-centric scoring method that recognized spatial relationships-proximity, distance, and containment-between elements. Additionally, specific rules constrained possible relations to refine the scoring accuracy. However, Smith et al. (2018) technique has limitations, particularly in handling the unpredictability inherent in free-form drawing tasks.\nAddressing the challenges presented by unstructured drawings, von Davier, Tyack, and Khorramdel (2023) implemented advanced deep learning strategies to assess an exercise from the Trends in International Mathematics and Science Study (TIMSS). This task provided a gridded canvas for students to depict their conceptual understanding through drawing, with the grid serving as a constraint to reduce variation in the drawings. von Davier et al. (2023) adopted convolutional neural networks (CNNs) and feed-forward neural networks (FFNs), two distinct artificial neural network architectures with different data processing and learning capabilities. Their findings indicated the superior performance of CNNs over FFNs in scoring accuracy, and notably, CNNs demonstrated an ability to outperform human scoring in certain instances where humans had misjudged responses, highlighting the potential of ML in educational assessments. Given the progress, the models in von Davier et al. (2023) are not free-drawing, which limits the usability in science classrooms.\nIn their study, Zhai, He, and Krajcik (2022) developed six assessment free-drawn modeling tasks. These tasks are embedded in computer-simulated environments and provide a drawing pad for students to represent their models. The tasks \"ask students to draw a model to make sense of the phenomena using online tools,\" targeting an NGSS performance expectation for middle school students: \"MS-PS1-4. 
Develop a model that predicts and describes changes in particle motion, temperature, and state of a pure substance when thermal energy is added or removed \" (p. 1774). The analytic scoring rubric provides principles to categorize students' drawn models into 'Beginning,' 'Developing,' or 'Proficient' levels. They employed ResNet-50 V2 CNN to develop scoring models and tested the scoring models with more than 250 new student-drawn models for each of the six tasks. The research reported machine-human scoring agreement of accuracy in .79-.89 and Cohen's Kappa in .64-.82.\nTo be noted, the above research leveraged computers and asked students to draw models on computers. In real classroom settings, teachers often ask students to draw models on papers, which adds more degree of freedom and can be more challenging for automatic scoring. Lee et al. (2023) developed a model that automatically assesses elementary, middle, and high school students' responses to the two items adopted from Test About Particles in a Gas (Novick & Nussbaum, 1981). The authors embedded students' hand drawings using Inception-v3 pre-trained model (Szegedy, Vanhoucke, Ioffe, Shlens, & Wojna, 2016), and tried various machine learning algorithms such as k-nearest neighbor, decision tree, random forest, support vector machine, neural network, logistic regression as the final classifier layer. As results with 206 test cases, they reported that their model performance reached a high machine-human agreement (kappa = 0.732-0.926, accuracy = 0.820-0.942, precision = 0.817-0.941, recall = 0.820-0.942, F1 = 0.818-0.941, and area under the curve = 0.906-0.990).\nAlso, C. Wang, Zhai, and Shen (2024) employed 2D convolutional neural networks to automate the assessment of high school students' hand-drawn models on the topic of optics. They analyzed 758 student-created models explaining the refraction phenomenon. Employing a sequential ML model composed of four convolutional layers, they attained a commendable average accuracy rate of 89% (SD=9%). Further, nested cross-validation yielded an average testing accuracy of 62% (SD=5%), with notable accuracy discrepancies observed across groups with varying modeling proficiency levels. Intriguingly, models from students with lower performance proved more difficult for the ML algorithms to score accurately. They conducted a comparative analysis of the models, distinguishing between those consistently scored correctly and those frequently misjudged by the machine. Their investigation revealed that certain characteristics inherent in the students' drawn models were influential in the machine's scoring precision.\nThese previous studies exemplify that there it is possible to automatically assess students' drawing models on natural phenomena by applying prominent ML techniques with computer vision. However, techniques used in their studies, such as ResNet 50 V2 or Inception-V3 pre-trained model, can be technical barriers to researchers with less machine learning expertise. Therefore, it is necessary to explore ways to broaden the usability of computer vision techniques to the larger group of the education community, and a visual language model such as GPT-4V is a potential candidate for that initiative. Moreover, the supervised approach for scoring model development is time and cost-consuming and needs new methods to overcome these challenges." 
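For concreteness, the supervised pipelines surveyed in the preceding paragraphs typically embed each drawing with a pretrained CNN and then fit a lightweight classifier on human-assigned scores. The sketch below illustrates that general pattern only; the backbone (torchvision's ResNet-50 used as a frozen feature extractor), the logistic-regression head, the preprocessing, and the variable names are assumptions for illustration, not the exact configurations used in the cited studies.

```python
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.linear_model import LogisticRegression

# Frozen ImageNet-pretrained backbone, used only as a feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the 1000-way classification head
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(image_paths):
    """Return a (N, 2048) feature matrix for a list of scanned drawings."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in image_paths])
    return backbone(batch).numpy()

# Hypothetical human-scored data: 0 = Beginning, 1 = Developing, 2 = Proficient.
# clf = LogisticRegression(max_iter=1000).fit(embed(train_paths), train_labels)
# predictions = clf.predict(embed(test_paths))
```

Building and validating such a pipeline still requires labeled training data and programming expertise, which is exactly the barrier the prompt-based approach explored in this study aims to lower.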
}, { "figure_ref": [], "heading": "GPT-4V for Image Processing", "publication_ref": [ "b6", "b17", "b33", "b35", "b15", "b5", "b13", "b28", "b27", "b24", "b32", "b40" ], "table_ref": [], "text": "ChatGPT, a state-of-the-art large language model, has had tremendous impacts on and changed education. Among the many applications in education (Grassini, 2023;Lo, 2023), ChatGPT and GPT API have shown significant advantages in automatic scoring to facilitate timely feedback and personalized learning (Zhai, 2022(Zhai, , 2023a(Zhai, , 2023b)). For example, Latif and Zhai (2023) have leveraged the powerful natural language processing, understanding, and generating ability of the GPT family and fine-tuned ChatGPT-3.5 turbo to accurately score student written explanations for science phenomena, which shows 9.1% average increase of scoring accuracy compared to BERT. In addition, GPT's powerful generative ability could help solve challenging automatic scoring problems such as data imbalance. Unbalanced training data can introduce scoring uncertainty and result in biased outcomes. Fang, Lee, and Zhai (2023) employed GPT-4 to augment unbalanced training data and found the GPTgenerated responses yield identical outcomes compared to authentic student-written responses. Using this data augmentation method, they reported an increase of 3.5% for accuracy, 30.6% for precision, 21.1% for recall, and 24.2% for F1 score. Also, Kieser, Wulff, Kuhn, and Küchemann (2023) showed that ChatGPT can emulate university students' written answers on Force Concept Inventory, scoring almost equivalent to those.\nOne notable development of OpenAI is the release of GPT-4V, which integrates an image processing module to GPT-4, enabling visual question answering (i.e., receiving textual prompt and image input and answering the user's question about the image in natural language). Visual ChatGPT, the predecessor of GPT-4V, was developed by Microsoft developers (C. Wu, Yin, et al., 2023), which incorporated various Visual Foundation Models such as Visual Transformers or Stable Diffusion to ChatGPT. They reported that Visual ChatGPT could generate images, extract features from the input image, change an object in the image with re-drawing, etc., following the user's natural language-based queries. After the release of the GPT-4V model, researchers from Microsoft (Z. Yang, Li, et al., 2023) made an initial but comprehensive report on the image processing ability of GPT-4V. Z. Yang, Li, et al. (2023) introduces GPT-4V's working modes and prompting techniques, GPT-4V's vision-language capability, temporal understanding, intelligence and emotional quotient tests, etc. Y. Wu et al. (2023) further presented varying abilities of GPT-4V in domains such as visual understanding, language understanding, and visual puzzle solving. However, their focus was not on the image classification performance of models.\nScholars have reported the application of GPT-4V in various problem domains, although there is no open GPT-4V API as of now (November 5th, 2023). Most of these early reports were made by medical scholars in medical subdomains. For example, C. Wu, Lei, et al. (2023) evaluated GPT-4V's diagnosing ability in human body systems such as the central nervous system, cardiac, obstetrics, etc. They descriptively suggested that GPT-4V demonstrated proficiency in distinguishing medical image modalities and anatomy and showed difficulties in diagnosing symptoms. J. 
Wang, Ye, Liu, Guo, and Hu (2023) reported that GPT-4V can read Bioinformatics illustrations such as sequencing data analysis, multimodal network-based drug repositioning, and tumor clonal evolution -however, it showed weakness in quantitative counting of visual elements. R. Chen et al. ( 2023)'s study is noteworthy in that they provided quantitative results of GPT-4V's classification of images. They used the Kaggle COVID-19 lung Xray dataset for a binary classification problem (COVID-19 vs normal cases). According to R. Chen et al. (2023), GPT-4V (accuracy = .72-.83) outperformed ResNet (accuracy = .74) and VGG models (accuracy = .80) in 6-shot learning situation. However, both models trained with the full dataset performed better than GPT-4V. Z. Yang, Yao, et al. (2023) also demonstrated the performance of multimodal GPT-4V on USMLE question bank, which marked 86.2% accuracy.\nIn other fields, Y. Chen, Mendes, Das, Xu, and Ritter (2023) tested whether GPT-4V completely masks personal information and found that it starts to reveal the correct location of a building in the given image. L. Chen et al. (2023) examined GPT-4V's decision-making ability in automatic driving, housing robot, and open-world game domains. Also, Zhou, Liu, Zagar, Yurtsever, and Knoll (2023) showed the possibility that GPT-4V could be used for evaluating traffic anomaly scenes. Meanwhile, Shi et al. ( 2023) examined the optical character recognition performance of GPT-4V and confirmed that it recognized and understood Latin-alphabet content well but was struggled in recognizing other character systems-written content.\nTo sum up, there have been very few studies that tested the classification performance of GPT-4V. Further, no study has explored the possibility of applying GPT-4V for educational studies, particularly for the automatic scoring of multinomial items." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Data Set", "publication_ref": [ "b37", "b7" ], "table_ref": [], "text": "This study secondarily analyzed student-drawn scientific models from a dataset adapted from a parent study by Zhai, He, and Krajcik (2022). The items developed by the NGSA team (Harris, Krajcik, & Pellegrino, 2024) target the NGSS (NGSS Lead States, 2013) performance expectations, to implement a 3-Dimensional assessment that incorporates disciplinary core ideas, cross-cutting concepts, and science and engineering practices.\nWe conducted an experiment with the six items. Per each item, we randomly sampled 9 example human evaluation cases, nine validation cases, and 150 test cases. The validation and test datasets were perfectly balanced throughout the categories (1/3 for 'Proficient,' 1/3 for 'Developing,' and 1/3 for 'Beginning')." }, { "figure_ref": [ "fig_0" ], "heading": "Experimental Design", "publication_ref": [ "b25" ], "table_ref": [], "text": "The goal of the experiment was to develop a method, NERIF (Figure 1), that can help users utilize GPT-4V for automatic classification of image data and test it with student-drawn scientific models.\nWrite Prompt. To achieve this goal, we used a few-shot learning approach (Y. Wang, Yao, Kwok, & Ni, 2020;Y. Wu et al., 2023) with 9 example evaluations to instruct GPT-4V to correctly categorize student-drawn visual answers according to the scoring rubric. The task GPT-4V should solve is a multinomial classification in step with the given data.\nValidation. 
We confirmed with the validation dataset (N = 54) that our prompt and input image data could instruct ChatGPT to read the given images and categorize student-drawn models. The validation step also served the heuristic prompt engineering of the notation-enhanced scoring rubric.
Test. After validation, we repeatedly ran a GPT-4V session to automatically score student-drawn images in the test dataset (N = 900). By setting the temperature to " }, { "figure_ref": [ "fig_1", "fig_2", "fig_1", "fig_2" ], "heading": "NERIF: Notation-Enhanced Rubric Instruction for Few-shot Learning", "publication_ref": [], "table_ref": [], "text": "We gave ChatGPT two images with a prompt as input for the automatic scoring of student-drawn models. The first image includes a problem statement and 9 example human coders' assessments of student-drawn models for few-shot learning. The second image included three student-drawn models that ChatGPT was instructed to assess.
Our prompt for single-turn conversation consisted of 7 components. An example of our prompt and input image for processing students' scientific modeling is presented in Figures 2 and 3, respectively.
(1) Role designates in what position ChatGPT should answer the query. Role was given as "You will be a science teacher who categorizes student responses to science items for proficiency."
(2) Task explains what ChatGPT is requested to do. ChatGPT's task is to categorize models drawn by students that model why a particular phenomenon occurs in the given problem context. Also, ChatGPT's categorization must depend on the rubrics. ChatGPT should learn how to categorize the models and provide the 'rationale for proficiency' from the human coders' demonstration, which is given in example. After that instruction, Task requires ChatGPT to retrieve the problem context, rubric, and 'rationale for proficiency' of one random example from example. Note that problem context and example are given in the first image attached. Task then requests ChatGPT to categorize models drawn by students, with its 'rationale for proficiency' as in example.
(3) Problem context is given in the first image attached, and ChatGPT has to retrieve it. For example, Task M3-1 is contextualized in a scenario in which students are heating solid butter until the state of the butter changes. The item requires students to construct a model that shows before and after thermal energy is transferred to the solid butter by heating.
(4) Notation-Enhanced Scoring Rubrics includes three components to guide GPT-4V for automatic scoring:
Human experts identifying scoring aspects. We asked human subject matter experts to specify the aspects that should be considered to assess student-drawn models. The scoring rubric for items used in this study considers up to 2-4 components. For example, Task M3-1 considers four components for scoring (see Figures 2 and 3).
GPT-4 defining scoring rules aligned with proficiency levels. Proficiency defines the rule to categorize student-drawn models, synthesizing the aspects the drawing includes. The proficiency level takes one of three categories: 'Proficient,' 'Developing,' and 'Beginning.' We first ask GPT-4 to identify which aspect(s) are included in the scoring rule for each specific proficiency level, to help ChatGPT categorize the test case according to the Proficiency rules.
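To make the structure of these components concrete, the sketch below assembles the textual portion of a NERIF-style query. The Role string is quoted from the description above; the component and rule wordings are abridged paraphrases (the exact rubric text appears in Figures 2 and 3), and the note is a purely hypothetical placeholder rather than the study's actual instructional note.

```python
def build_nerif_prompt(role, task, components, proficiency_rules, notes):
    """Assemble the text portion of a NERIF-style query from its parts."""
    rubric_lines = [f"({key}) {desc}" for key, desc in components.items()]
    rule_lines = [f"- {level}: {rule}" for level, rule in proficiency_rules.items()]
    note_lines = [f"* {note}" for note in notes]
    return "\n".join([
        f"Role: {role}",
        f"Task: {task}",
        "Scoring components:", *rubric_lines,
        "Proficiency rules:", *rule_lines,
        "Notes:", *note_lines,
    ])

prompt_text = build_nerif_prompt(
    role=("You will be a science teacher who categorizes student responses "
          "to science items for proficiency."),
    task=("Categorize each student-drawn model using the rubric and the example "
          "evaluations shown in the first attached image."),
    components={  # abridged paraphrases of the Task M3-1 rubric
        "A": "shows butter particles moving faster after thermal energy is added",
        "B": "shows the change in the organization of butter particles before vs. after heating",
        "C": "labels the butter particles",
        "D": "uses keys or arrows to describe the motion of butter particles",
    },
    proficiency_rules={
        "Proficient": "all four components are included",
        "Developing": "at least two but not all four components are included",
        "Beginning": "one or none of the components are included",
    },
    notes=["Hypothetical example note: treat any symbol the student explains in a key "
           "as evidence for the corresponding component."],
)
```

In the study itself this text was pasted into the ChatGPT interface together with the two attached images; no API call is shown here because no open GPT-4V API was available at the time.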
For example, in Task M3-1, the student's answer is considered 'Proficient' only when all the four components are included, 'Developing' when at least two but not all of the four are included, and 'Beginning' when one or none of those are included. " }, { "figure_ref": [], "heading": "Data Analysis", "publication_ref": [], "table_ref": [], "text": "We repeatedly ran GPT-4V sessions to analyze the test cases. GPT-4V assessed three student-drawn models per query we sent, which included the two images and prompt. We opened a new session for every three students' drawn models, which ChatGPT assesses in a turn of conversation, lest GPT-4V's memorization of conversation affect the assessment of later test cases. After collecting GPT-4V's assessment of images drawn by students, accuracy, precision, recall, F1, and Fleiss' Kappa were calculated by comparing GPT-4V scores with the human scores. Further, the two researchers of this study, who are experts in science education and automatic assessment, inductively identified the characteristics of GPT-4V's behaviors to uncover the scoring process during the experiment." }, { "figure_ref": [], "heading": "Findings", "publication_ref": [], "table_ref": [], "text": "In this section, we first present scoring accuracy, including the accuracy parameters for both validation and testing processes. Then we report the GPT-4V scoring processes with examples, and uncover notable behavior patterns of GPT-4V in scoring the models." }, { "figure_ref": [], "heading": "GPT-4V Scoring Accuracy on Student Drawn Models", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Validation Scoring Accuracy", "publication_ref": [], "table_ref": [], "text": "The validation process was used to help researchers develop and revise the prompts, which iteratively changed as we improve the prompts. Here we reported the final validation accuracy for our prompts for the six items (seeTable 1). The accuracy for the 'Beginning' examples was .78, for 'Developing' was .67, and for 'Proficient' was .56. On average, our prompts showed validation accuracy of .67 in assessing student-drawn models. " }, { "figure_ref": [], "heading": "Test Scoring Accuracy", "publication_ref": [ "b14" ], "table_ref": [ "tab_1", "tab_1" ], "text": "To examine scoring accuracy, we tested the prompts with new samples that GPT-4V did not see during the prompt development phase. Table 2 shows the test accuracy for the 6 items. On average for the six tasks, GPT-4V yielded a test scoring accuracy of .51. (SD = .037), with the average value of Precision =.58 (SD = .04), Recall = .51 (SD = .037), and F1 = .49 (SD = .047). Fleiss' Kappa (quadratic weighted) ranged from .32 to.51, which is considered 'Fair' to 'Moderate' accuracy (Landis & Koch, 1977). We also found that the scoring accuracy vary by scoring category. Specifically, the accuracy for the 'Beginning' cases was .64, for 'Developing' cases was .61, and for 'Proficient' cases was .26 (see Table 2). To understand the variations, we delve into the confusion matrices of two example Tasks J2-1 and J6-2 (see Table 3). For both items, GPT-4V predicted most of the true 'Beginning' and 'Developing' cases correctly. However, GPT-4V predicted the majority of 'Proficient' cases to be 'Developing'." 
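The agreement statistics reported in the Data Analysis section and the following subsections (accuracy, precision, recall, F1, weighted kappa, and the confusion matrices) can be reproduced from the paired human and GPT-4V labels with standard tooling. The sketch below uses scikit-learn; it assumes weighted averaging for precision/recall/F1 and quadratic-weighted Cohen's kappa as the two-rater weighted-kappa statistic, which are assumptions about the paper's exact computation, and the label lists are placeholders.

```python
from sklearn.metrics import (accuracy_score, cohen_kappa_score,
                             confusion_matrix, precision_recall_fscore_support)

LEVELS = ["Beginning", "Developing", "Proficient"]

def agreement_report(human_scores, gpt4v_scores):
    """Compare human consensus scores with GPT-4V scores for one assessment task."""
    acc = accuracy_score(human_scores, gpt4v_scores)
    prec, rec, f1, _ = precision_recall_fscore_support(
        human_scores, gpt4v_scores, labels=LEVELS, average="weighted", zero_division=0)
    kappa = cohen_kappa_score(human_scores, gpt4v_scores, labels=LEVELS, weights="quadratic")
    cm = confusion_matrix(human_scores, gpt4v_scores, labels=LEVELS)  # rows: true, cols: predicted
    return {"accuracy": acc, "precision": prec, "recall": rec,
            "f1": f1, "kappa": kappa, "confusion_matrix": cm}

# Placeholder labels; in the study these would be the 150 paired scores per task.
human = ["Beginning", "Developing", "Proficient", "Developing"]
gpt4v = ["Beginning", "Developing", "Developing", "Developing"]
print(agreement_report(human, gpt4v))
```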
}, { "figure_ref": [ "fig_3", "fig_1", "fig_2", "fig_4", "fig_4", "fig_3", "fig_4", "fig_2", "fig_3", "fig_4", "fig_4", "fig_4", "fig_5", "fig_4", "fig_5", "fig_4", "fig_4" ], "heading": "Unpacking How GPT-4V Score Student Models", "publication_ref": [], "table_ref": [], "text": "To uncover GPT-4V's scoring process, we present an example in Figure 4, which shows examples of GPT-4V scoring on three students' drawn models. The input given to ChatGPT was the prompt presented in Figure 2, problem context and scoring examples presented in Figure 3, and three images of student-drawn scientific models, which were concatenated into one image file (Figure 5).\nIn the example response, ChatGPT performs the task defined in the prompt (Figure2). It first briefly explains the problem context and rubric, and then retrieves a random example of human coders' evaluation of the student's scientific model.\nChatGPT's categorization of the three student-drawn models follows, which is structured as the 'rationale for proficiency' in the examples. It determines whether each component defined in the rubric is included in the test cases, summarizes its evaluation, and deduces the final categorization (one of 'Beginning,' 'Developing,' and 'Proficient'). This again shows that GPT-4V can retrieve images given as input and separately process them according to the user's query.\nThe example prediction of GPT-4V on the three 'Developing' test cases is presented in Figure 5. It is observed that GPT-4V correctly predicted two of the three as 'Developing.' and one as 'Beginning.' Further contemplation of GPT-4V's analysis of student-drawn models is presented in the section 5.2. Below we present sevearl features of GPT-4V identified in our qualitative analyses.\nGPT-4V can recognize and retrieve information from the questions presented in images. We found that GPT-4V can successfully access the input images and process information encapsulated (see examples in Figures 4 and5). Both the problem context and example are provided in the format of the image and GPT-4V successfully retrieved the questions and answers in the input image. Particularly, GPT-4V stringently followed our instruction to retrieve one random example from the input image (Figure 3) and treated it as evidence of its processing of the given image (Figure 4). This precise retrieval of information strengthens the GPT-4V's possible use for automatic scoring.\nGPT-4V can catch the characteristics of student-drawn models. Figure 5 exemplifies that GPT-4V is able to capture characteristics of student-drawn scientific models. It read the printed student descriptions of their models, such as \"butter particles,\" \"fire molecules,\" and \"spreading.\" Also, it rightly pointed out there is an arrow in the student-drawn image, which signifies the motion of particles.\nAnother example of GPT-4V's understanding of students' models is presented in Figure 6. GPT-4V takes the \"longer arrows after heating\" in the image as evidence for component (A). GPT4-V interprets the image as \"the structured arrangement of particles before heating compared to a more scattered arrangement after heating,\" which denotes component (B). GPT-4V reads \"Butter molecules\" that label (gray) circle in the image to decide whether it includes a component (C). And GPT-4V read \"arrows indicate the motion with a descriptor \"Thermal energy being transferred\" and \"Amount of movement.\" Consequently, GPT-4V correctly predicted the image as 'Proficient. 
' These examples show especially that GPT-4V not only extracts features from the image but also represents it in natural language, which the human user can understand why it made such a decision.\nGPT-4V can assess student-drawn models according to the rubric. Figures 4 and 5 also show that GPT-4V can assess student's visual answers according to the given rubric. GPT-4V relates the features that are extracted from the image to the appropriate components given in the rubric. For example, in Figure 5, GPT-4V identified the changes in the butter particle's state (Component (A)), labels for butter particle (Component (C)), and keys such as \"spreading\" or arrows that describe butter particles' motion (Component (D)). However, it could not identify the changes in the organization of butter particles before and after the heating of butter (Component (B)). GPT-4V summarized that \"the model includes components (A), (C), and (D), but not (B), the proficiency level is \"Developing.\"\", which corresponds to the scoring rubric and the human-coded category. This is apparent evidence that GPT-4V assesses student-drawn scientific model images as the given aspects and synthesizes them according to the rule.\nExamples and notes can improve GPT-4V performance on scoring . What we have found during this study is that GPT-4V can be instructed to increase the quality of its inference on student-drawn images by example and notes. Figure 7 presents the GPT-4V's prediction on the three 'Developing' test cases, which are same as those of Figure 5.\nThe top of Figure 7 shows that when the example is not provided, the response of GPT-4V becomes very short, like \"not present\" and \"present,\" which does not provide much information on the student-drawn images. It predicts that all the cases belong to the 'Beginning' category, which dramatically decreases the test accuracy. However, when there was example, the response of GPT-4V was more elaborated, and it correctly predicted 2/3 cases to be 'Developing' (Figure 5). This is clear evidence that example provided in the attached image work as few-shot learning examples, which instructs GPT-4V how to assess student-drawn images.\nThe bottom of Figure 7 shows that when the notes is not provided, GPT-4V's prediction on each component defined in the rubric could be changed, and thus also the final predicted label. For example, GPT-4V decided that there was a component (C) in the second and third drawings when there was notes (Figure 5). However, when there was no notes, GPT-4V's decision changed, and it judged that the drawings do not include component (C)). Consequently, the predicted label of the third drawing became 'Beginning,' which belongs to 'Developing' according to human coders or when there is notes. This is a concrete example of guiding GPT4-V to appropriately process and categorize images by natural language-based instruction.\nGPT-4V sometimes makes incorrect but interpretative inferences. Although Figure 6 shows that GPT-4V can correctly predict a student-drawn image's label, it also includes some interesting points that allow us to figure out how it made such a decision. GPT-4V insisted that \"longer arrows after heating,\" which is indeed in the image, denotes \"that butter particles move faster after heating.\" However, the sign of the butter particle's movement in the particular image in Figure 6 was double lines (=) rather than arrows (→), as indicated by the student's explanation. 
Further, GPT-4V identified the arrows and double lines as the same symbol, saying that \"arrows indicate the motion with a descriptor,\" \"Thermal energy being transferred,\" and \"Amount of movement.\" However, this is not true when we carefully read the student's comment in the image. It seems that the student intended to signify the transfer of thermal energy from the heat source by arrows and the movement of butter particles by double lines. Nevertheless, GPT-4V's inference is plausible to some extent because the arrows can work as the sign of the particle's motion as in Figure 5. This shows that GPT-4V could make incorrect but understandable inferences on the given image and that GPT-4V might have difficulty in processing too contextualized or too sophisticated semantics." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b16", "b37", "b39", "b12", "b16", "b37", "b26", "b28", "b37", "b16" ], "table_ref": [ "tab_1" ], "text": "To our knowledge, this study is one of the first attempts to examine the performance of GPT-4V on multinomial classification tasks, particularly for student-drawn image responses to science items. We developed an approach, NERIF, leveraging prompt engineering to apply GPT-4V to score student drawn models. The results of this study show that GPT-4V processes images and assesses student-drawn models with a varying degree of accuracy. The strength of GPT-4V, which affords visual answer questioning, sharply distinguishes itself from previous approaches in the following aspects.\nFirst, GPT-4V provides a paradigm change in the application of computer vision technologies in educational settings. We found that users can instruct GPT-4V to assess student-drawn models via its powerful image classification only by providing it with problem context, noted rubrics, and scoring examples. That is, GPT-4V requires no programming skills for users in preparing automatic scoring models for visual scientific models. This change is made available due to the prompt engineering approach brought with the progress of AI. In contrast, the previous approaches used by researchers required sophisticated machine learning techniques to train and validate the machine algorithmic models (Lee et al., 2023;Zhai, He, & Krajcik, 2022). Also, while previous reports on GPT-4V provided few-shot learning images in a multi-turn conversation (Y. Wu et al., 2023), this study showed that it is possible to give GPT-4V multiple training examples in a single-turn conversation. This implies that automatic scoring of student-drawn images could be broadly used in educational studies in the near future, tackling the existing technical barrier.\nSecond, GPT-4V provides interpretative and transparent information that uncovers the \"black box\" of automatic scoring. In this study, we found that GPT-4V generated answers written in natural language to the modeling tasks, which is understandable to human users and provides rationales for its thoughts on the components defined in rubric. This is a significant contribution to automatic scoring, since no automatic image scoring research has provided explainable description of the models. This advantages can help science education researchers to scale up the use of automatic scoring with teachers and students in the near future. 
The natural language-represented descriptions of students' image answers could be a cornerstone that enables timely feedback to students, based not only on the labels of their answers but also on the detailed aspects of their drawings described in natural language (Zhai & Nehm, 2023). The explainability of GPT-4V's scores in the automatic grading of image models is all the more prominent given that explainable AI and scoring models are becoming increasingly important in terms of the ethics and transparency of AI in education (Khosravi et al., 2022).
Third, we found that GPT-4V requires very little training data for image scoring, which significantly reduces the human effort of labeling training data compared with traditional machine learning approaches. It is notable that previous studies usually sampled a large portion of labeled student responses to train machines. For example, Lee et al. (2023) and Zhai, He, and Krajcik (2022) both sampled around 1,000 student-drawn models for each task and hired human experts to score the data, which required substantial time and cost. In contrast, the automatic scoring of drawn models using GPT-4V in this study only required nine training examples for few-shot learning. This shows that visual language models could also reduce users' burden of data collection that has been mostly used for model training, allowing more data to be used as test cases.
Fourth, this study contributes a new prompt engineering method (i.e., NERIF) for automatic scoring, which can potentially be generalized to other computer vision tasks. Even though prompt engineering has shown power in many automatic tasks, strategies for specific types of tasks are found essential to improve efficiency. In our case, the zero-shot learning approach employed at the beginning of the project showed very low accuracy, which prompted us to explore novel approaches to prompt engineering. We found that the heuristic instructional notes supplementing the scoring rubrics were beneficial for automatic scoring and potentially other educational tasks. This finding is inconsistent with prior research by R. Chen et al. (2023), who reported that "supplementing the GPT-4V prompts with reasons underlying the classifications does not yield an improvement in results." We believe that the alignment between the provided reasoning and the model's processing capabilities may have differentiated the two studies that yielded contradictory conclusions. In addition, our methods also referred to Chain-of-Thought (Wei et al., 2022; C. Wu, Yin, et al., 2023) when developing the 'Rationale for Proficiency.' We also appreciate the ideas provided by Z. Yang, Li, et al. (2023) and J. Yang et al. (2023), which suggested segmentation and marking on the input image to enhance GPT-4V's image understanding.
Despite the promise of GPT-4V shown in this study, we found limitations. First, our NERIF performed significantly lower with student-drawn models at the 'Proficient' level, as compared to other levels (Table 3). We suspect that this may be because the scoring rubric requires a student's visual answer to include all the components (up to four) to be graded as 'Proficient.' One overly rigorous decision on a component can potentially lead GPT-4V to treat a proficient-like response as 'Developing.' Future research should develop approaches to mitigate this issue. For example, GPT-4V could be instructed to be less strict and consider as many symbols represented in the image as possible as indicators of each component.
Yet, users should be cautious about the risk of GPT-4V becoming overly lenient, e.g., failing to differentiate 'Beginning' from 'Developing' answers. Researchers could also test a case 3 or 5 times and apply a systematic procedure that allows GPT-4V to vote for the final decision, which may increase the accuracy of grading 'Proficient' cases. In all, we suggest that future research explore more effective ways to instruct GPT-4V to catch and follow the implicit assumptions human coders apply when scoring images.
We also find that GPT-4V's conversational function may contaminate the scoring tasks, as GPT-4V's memorization of cases interferes with its ability to score other cases. As of November 2nd, 2023, the ChatGPT interface allows uploading up to 4 images at once with text prompts. However, when we gave GPT-4V four images at once, it often processed the first two images and ignored the third and fourth images. Therefore, we had to design prompts to use only two images, one for the noted rubrics and another for the test cases. This issue is expected to be resolved in future developments, especially when the API for GPT-4V is released.
In addition, the scoring accuracy of GPT-4V reported in this study is not ready to be applied in real classroom settings. Specifically, human-machine scoring agreements indicated by weighted kappa (.32-.51) (Table 2) are not comparable to the previous studies, which achieved weighted kappa of .54-.82 (Zhai, He, & Krajcik, 2022) or .73-.93 (Lee et al., 2023). Therefore, future studies should explore ways to increase the classification performance of GPT-4V to exploit its full potential for educational studies." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [ "b10", "b38" ], "table_ref": [], "text": "This study developed a novel prompt engineering approach, NERIF, to leverage GPT-4V in scoring students' scientific models. By testing its image classification performance, this study demonstrated the potential of GPT-4V in scoring student-drawn models. To test the scoring accuracy, we used perfectly balanced data from six assessment tasks in middle school science that include students' drawn models and the scores assigned by human experts. Our findings suggest that GPT-4V could score student-drawn models with low to medium accuracy, varying across student proficiency levels. The study further uncovers how GPT-4V assigned scores in an interpretable way according to NERIF. We found that GPT-4V can retrieve information from input images in terms of the problem context, example evaluations provided by human coders, and students' drawings. GPT-4V catches and describes the characteristics of student-drawn models in natural language and can classify student-drawn models according to the given rubrics. Our approach highlights few-shot learning with heuristically added "instructional notes," which improve GPT-4V's performance. In addition, even though GPT-4V made errors, some cases are interpretable to content experts, which indicates space to improve the scoring accuracy. The results of this study show that utilizing GPT-4V in automatic scoring of student-drawn models in science education is promising, leaving gaps to improve the scoring accuracy.
It is expected that OpenAI will soon release the GPT-4V API (Hu & Tong, 2023), which will enable developers to utilize GPT-4V in more precise, reliable, and efficient ways.
The design and development of prompts fed to the GPT-4V API with image inputs are expected to resolve the limitations of this study, opening possibilities to increase image classification accuracy and mitigate the memory issue of GPT-4V. Lastly, we would like to highlight that automatic scoring contributes to the validity of assessment uses in education, but in no case should users rely on a single source to determine how to use assessment results (Zhai, Krajcik, & Pellegrino, 2021). Further studies on ways to apply GPT-4V in education, including but not limited to automatic scoring, are strongly recommended." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [ "b37" ], "table_ref": [], "text": "This study was funded by the National Science Foundation (NSF) (Award no. 2101104, 2138854). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF. The authors thank the NGSA team and the researchers involved in the parent study (Zhai, He, & Krajcik, 2022) and those who coded the student-drawn models. The authors specifically thank Joon Kum, who helped develop the prompts and make predictions on the test data." } ]
Engaging students in scientific modeling practice in the classroom is critical to improving students' competence in using scientific knowledge to explain phenomena or design solutions. However, scoring student-drawn models is time-consuming. The recently released GPT-4V provides a unique opportunity to advance scientific modeling practices by leveraging its powerful image classification capability. To test this ability specifically for automatic scoring, we developed a method, NERIF (Notation-Enhanced Rubric Instruction for Few-shot Learning), employing instructional notes and rubrics to prompt GPT-4V to score students' drawn models for science phenomena. We randomly selected a set of balanced data (N = 900) from a parent study that includes student-drawn models for six modeling assessment tasks. Each model received a score from GPT-4V at one of three levels: 'Beginning,' 'Developing,' or 'Proficient' according to scoring rubrics. GPT-4V scores were compared with human experts' consensus scores to calculate scoring accuracy. Results show that GPT-4V's average scoring accuracy was .51 (SD = .037), with varying accuracy across scoring categories. Specifically, average scoring accuracy was .64 for the 'Beginning' class, .62 for the 'Developing' class, and .26 for the 'Proficient' class, indicating that more proficient models are more challenging to score. Further qualitative study reveals how GPT-4V retrieves information from image input, including the problem context, example evaluations provided by human coders, and students' drawn models. We also uncovered how GPT-4V catches the characteristics of student-drawn models and narrates them in natural language. Finally, we demonstrated how GPT-4V assigns scores to student-drawn models according to the given scoring rubric and instructional notes. Our findings suggest that the NERIF method is an effective approach for employing GPT-4V to score drawn models. Even though there is space for GPT-4V to improve scoring accuracy, some mis-assigned scores seemed interpretable to science content experts. The results of this study show that utilizing GPT-4V for automatic scoring of student-drawn models in science education is promising, but there remains a challenging gap to further improve scoring accuracy.
NERIF: GPT-4V for Automatic Scoring of Drawn Models
[ { "figure_caption": "Fig. 11Fig. 1 The Process of Notation-Enhanced Rubric Instruction for Few-shot Learning", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 22Fig. 2 Example Prompt (Task M3-1)", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 33Fig. 3 Example Input Image 1 (Task M3-1) -Problem Context and Scoring Examples", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 44Fig. 4 Example Response of GPT-4V (Task M3-1)", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 55Fig. 5 Example Prediction of GPT-4V (Task M3-1) on Three 'Developing' Test Cases", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 77Fig. 7 Example Prediction of GPT-4V (Task M3-1) on Three 'Developing' Test Cases, without notes (top) or notes (bottom)", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Validation Accuracy of GPT-4V for drawing assessment", "figure_data": "ItemOverall (N = 9) Beginning (n = 3) Developing (n = 3) Proficient (n = 3)R1-10.781.001.000.33J2-10.671.000.670.33M3-10.560.670.330.67H4-10.890.671.001.00H5-10.670.670.670.67J6-10.440.670.330.33Mean0.670.780.670.56", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Testing Scoring Accuracy of GPT-4V for drawing assessment", "figure_data": "ItemAccuracyAcc Beg Acc Dev Acc Prof Precision RecallF1KappaR1-10.500.500.660.340.560.500.500.44J2-10.450.680.560.120.620.450.410.32M3-10.530.820.400.360.530.530.510.51H4-10.570.640.680.380.610.570.560.51H5-10.470.620.580.220.530.470.460.43J6-10.530.620.840.120.620.530.480.38Mean0.510.650.620.260.580.510.490.43Table 3 Confusion Matrix of Tasks J2-1 and J6-1True LabelTask J2-1 Beginning Developing Proficient Beginning Developing Proficient Task J6-1Beginning3416031190Developing222806422Proficient123268366", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Gyeong-Geon Lee; Xiaoming Zhai
[ { "authors": "S Antol; A Agrawal; J Lu; M Mitchell; D Batra; C L Zitnick; D Parikh", "journal": "", "ref_id": "b0", "title": "Vqa: Visual question answering", "year": "2015-12" }, { "authors": "L Chen; Y Zhang; S Ren; H Zhao; Z Cai; Y Wang; B Chang", "journal": "", "ref_id": "b1", "title": "Towards end-to-end embodied decision making via multi-modal large language model: Explorations with gpt4-vision and beyond", "year": "2023" }, { "authors": "R Chen; T Xiong; Y Wu; G Liu; Z Hu; L Chen; H Huang", "journal": "", "ref_id": "b2", "title": "Gpt-4 vision on medical image classification-a case study on covid-19 dataset", "year": "2023" }, { "authors": "Y Chen; E Mendes; S Das; W Xu; A Ritter", "journal": "", "ref_id": "b3", "title": "Can language models be instructed to protect personal information?", "year": "2023" }, { "authors": "J J Clement", "journal": "Springer", "ref_id": "b4", "title": "Creative model construction in scientists and students", "year": "2008" }, { "authors": "L Fang; G Lee; X Zhai", "journal": "", "ref_id": "b5", "title": "Using gpt-4 to augment unbalanced data for automatic scoring", "year": "2023" }, { "authors": "S Grassini", "journal": "Education Sciences", "ref_id": "b6", "title": "Shaping the future of education: exploring the potential and consequences of ai and chatgpt in educational settings", "year": "2023" }, { "authors": "C J Harris; J S Krajcik; J W Pellegrino", "journal": "NSTA Press", "ref_id": "b7", "title": "Creating and using instructionally supportive assessments in ngss classrooms", "year": "2024" }, { "authors": "D Hestenes", "journal": "Springer", "ref_id": "b8", "title": "Modeling theory for math and science education", "year": "2013" }, { "authors": "T P Hogan; G Murphy", "journal": "Applied Measurement in Education", "ref_id": "b9", "title": "Recommendations for preparing and scoring constructed-response items: What the experts say", "year": "2007" }, { "authors": "K Hu; A Tong", "journal": "", "ref_id": "b10", "title": "Exclusive: Openai plans major updates to lure developers with lower costs, sources say", "year": "2023-10-12" }, { "authors": "G Joshi; R Walambe; K Kotecha", "journal": "IEEE Access", "ref_id": "b11", "title": "A review on explainability in multimodal deep neural nets", "year": "2021" }, { "authors": "H Khosravi; S B Shum; G Chen; C Conati; Y.-S Tsai; J Kay", "journal": "Computers and Education: Artificial Intelligence", "ref_id": "b12", "title": "Explainable artificial intelligence in education", "year": "2022" }, { "authors": "F Kieser; P Wulff; J Kuhn; S Küchemann", "journal": "Physical Review Physics Education Research", "ref_id": "b13", "title": "Educational data augmentation in physics education research using chatgpt", "year": "2023" }, { "authors": "J R Landis; G G Koch", "journal": "Biometrics", "ref_id": "b14", "title": "An application of hierarchical kappa-type statistics in the assessment of majority agreement among multiple observers", "year": "1977" }, { "authors": "E Latif; X Zhai", "journal": "", "ref_id": "b15", "title": "Fine-tuning chatgpt for automatic scoring", "year": "2023" }, { "authors": "J Lee; G G Lee; H G Hong", "journal": "Journal of Science Education and Technology", "ref_id": "b16", "title": "Automated assessment of student hand drawings in free-response items on the particulate nature of matter", "year": "2023" }, { "authors": "C K Lo", "journal": "National Academies Press", "ref_id": "b17", "title": "What is the impact of chatgpt on education? 
a rapid review of the literature", "year": "2012" }, { "authors": "", "journal": "National Academies Press", "ref_id": "b18", "title": "Next generation science standards: For states, by states", "year": "2013" }, { "authors": "S Novick; J Nussbaum; Y Shi; D Peng; W Liao", "journal": "", "ref_id": "b19", "title": "Pupils' understanding of the particulate nature of matter: A cross-age study", "year": "1981" }, { "authors": "A Smith; S Leeman-Munk; A Shelton; B Mott; E Wiebe; J Lester", "journal": "IEEE Transactions on Learning Technologies", "ref_id": "b20", "title": "A multimodal assessment framework for integrating student writing and drawing in elementary science learning", "year": "2018" }, { "authors": "C Szegedy; V Vanhoucke; S Ioffe; J Shlens; Z Wojna", "journal": "", "ref_id": "b21", "title": "Rethinking the inception architecture for computer vision", "year": "2016" }, { "authors": "M Von Davier; L Tyack; L Khorramdel", "journal": "Educational and Psychological Measurement", "ref_id": "b22", "title": "Scoring graphical responses in timss 2019 using artificial neural networks", "year": "2023" }, { "authors": "C Wang; X Zhai; J Shen", "journal": "Oxford University Press", "ref_id": "b23", "title": "Applying machine learning to assess paperpencil drawn models of optics", "year": "2024" }, { "authors": "J Wang; Q Ye; L Liu; N L Guo; G Hu", "journal": "", "ref_id": "b24", "title": "Bioinformatics illustrations decoded by chatgpt: The good, the bad, and the ugly", "year": "2023" }, { "authors": "Y Wang; Q Yao; J T Kwok; L M Ni", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b25", "title": "Generalizing from a few examples: A survey on few-shot learning", "year": "2020" }, { "authors": "J Wei; X Wang; D Schuurmans; M Bosma; F Xia; E Chi; . . Zhou; D ", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b26", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "C Wu; J Lei; Q Zheng; W Zhao; W Lin; X Zhang; W Xie", "journal": "", "ref_id": "b27", "title": "Can gpt-4v (ision) serve medical applications? case studies on gpt-4v for multimodal medical diagnosis", "year": "2023" }, { "authors": "C Wu; S Yin; W Qi; X Wang; Z Tang; N Duan", "journal": "", "ref_id": "b28", "title": "Visual chatgpt: Talking, drawing and editing with visual foundation models", "year": "2023" }, { "authors": "Y Wu; S Wang; H Yang; T Zheng; H Zhang; Y Zhao; B Qin", "journal": "", "ref_id": "b29", "title": "An early evaluation of gpt-4v (ision)", "year": "2023" }, { "authors": "J Yang; H Zhang; F Li; X Zou; C Li; J Gao", "journal": "", "ref_id": "b30", "title": "Set-of-mark prompting unleashes extraordinary visual grounding in gpt-4v", "year": "2023" }, { "authors": "Z Yang; L Li; K Lin; J Wang; C C Lin; Z Liu; L Wang", "journal": "", "ref_id": "b31", "title": "The dawn of lmms: Preliminary explorations with gpt-4v (ision)", "year": "2023" }, { "authors": "Z Yang; Z Yao; M Tasmin; P Vashisht; W S Jang; F Ouyang; . . 
Yu; H ", "journal": "", "ref_id": "b32", "title": "Performance of multimodal gpt-4v on usmle with image: Potential for imaging diagnostic support with explanations", "year": "2023" }, { "authors": "X Zhai", "journal": "", "ref_id": "b33", "title": "Chatgpt user experience: Implications for education", "year": "2022" }, { "authors": "X Zhai", "journal": "", "ref_id": "b34", "title": "Chatgpt and ai: The game changer for education", "year": "2023" }, { "authors": "X Zhai", "journal": "", "ref_id": "b35", "title": "Chatgpt for next generation science learning", "year": "2023" }, { "authors": "X Zhai; K C Haudek; W Ma", "journal": "Research in Science Education", "ref_id": "b36", "title": "Assessing argumentation using machine learning and cognitive diagnostic modeling", "year": "2022" }, { "authors": "X Zhai; P He; J Krajcik", "journal": "Journal of Research in Science Teaching", "ref_id": "b37", "title": "Applying machine learning to automatically assess scientific models", "year": "2022" }, { "authors": "X Zhai; J Krajcik; J W Pellegrino", "journal": "Journal of Science Education and Technology", "ref_id": "b38", "title": "On the validity of machine learningbased next generation science assessments: A validity inferential network", "year": "2021" }, { "authors": "X Zhai; R H Nehm", "journal": "Journal of Research in Science Teaching", "ref_id": "b39", "title": "Ai and formative assessment: The train has left the station", "year": "2023" }, { "authors": "X Zhou; M Liu; B L Zagar; E Yurtsever; A C Knoll", "journal": "", "ref_id": "b40", "title": "Vision language models in autonomous driving and intelligent transportation systems", "year": "2023" } ]
[]
10.1239/jap/1324046020
2024-02-05
[ { "figure_ref": [ "fig_31", "fig_31" ], "heading": "Introduction", "publication_ref": [ "b67", "b68", "b9", "b71", "b88", "b83", "b84", "b8", "b22", "b77", "b69", "b30", "b81", "b28", "b29", "b75", "b10", "b6", "b13", "b40", "b47", "b66", "b51", "b79", "b78", "b25", "b47", "b10", "b10", "b50", "b23", "b72", "b12", "b7", "b98", "b52", "b37", "b31", "b88", "b74", "b49", "b18", "b79", "b23", "b63", "b38" ], "table_ref": [ "tab_3" ], "text": "Large scale Transformers pretrained on huge text corpora have revolutionized machine learning in recent years (Radford et al., 2018(Radford et al., , 2019;;Brown et al., 2020;Sanh et al., 2021;Wei et al., 2021;Thoppilan et al., 2022;Touvron et al., 2023). Due to an ever-increasing interest in adopting these models in our daily lives, evaluating and predicting their capabilities has become increasingly important (Bommasani et al., 2021;Ganguli et al., 2022;Shevlane et al., 2023;Rae et al., 2021;Hoffmann et al., 2022;Tay et al., 2022;Henighan et al., 2020;Hernandez et al., 2021;Sharma & Kaplan, 2020). Motivated by this, recent works have performed extensive empirical analyses to understand the possibilities and limitations of using these models in practical tasks of interest. For example, such works show large language models (LLMs) can generate coherent text completions based on a provided context, perform code generation and debugging, use online APIs and tools in an automated manner, and even solve multimodal problems such as image captioning (Wei et al., 2022a;Bubeck et al., 2023;Austin et al., 2021;Chen et al., 2021;Lee et al., 2023;Liang et al., 2022;Qin et al., 2023;Liu et al., 2023;Suzgun et al., 2022;Srivastava et al., 2022). While such benchmarking of pretrained models is extremely valuable, it often focuses on evaluating rather \"narrow\" or \"atomic\" capabilities; for example, the ability to identify whether a given passage of text is biased or toxic (Gehman et al., 2020;Liang et al., 2022). However, given the compositional nature of training data (such as language), a model could To find the sum of the digits of this answer, add up the individual digits: 2 + 6 + 2 + 1 + 4 + 4 = 19 So, the sum of the digits of the square of the cube of 8 is 19.\nWhat is the sum of the digits of the square of the cube of 8.\nGive the answer step-by-step. What is the sum of the digits of the square of the cube of 8. Give the answer directly.\nFigure 1: Signatures of compositionality. ChatGPT (Bubeck et al., 2023) correctly responds to prompts that require composition of atomic arithmetic capabilities (sum, cube, square)-we argue these prompts are unlikely to be in the training data. However, the model does not always compose reliably (top-right panel). This motivates us to study the extent to which a Transformer can learn to compose its capabilities by mere pretraining on a compositional domain.\nlearn to compose its atomic capabilities and perform complex tasks that it was never explicitly trained for. This can lead to an underestimation of the capabilities of the model; vice versa, if the model does not learn to compose, we can be certain that benchmarking for atomic capabilities is sufficient to characterize the model.\nMotivated by the above, we analyze if a Transformer trained on a compositional data-generating process, without any special modifications to the usual training pipeline, can learn both relevant atomic capabilities and an ability to compose those capabilities. Bubeck et al. 
(2023) recently show that LLMs exhibit \"sparks\" of such compositionality, e.g., generating text that merges content of varying styles or evaluate mathematical expressions through the application of a sequence of functions (Fig. 1). However, due to their black-box nature, it is unclear if an LLM actually learns to compose capabilities or merely memorizes relevant samples from its training data. Moreover, while interacting with an LLM, it can be difficult to guarantee that we are utilizing a prompt that will appropriately guide the model to use the capabilities we desire, let alone compose them.\nTo circumvent challenges faced with LLMs pretrained on real world data and focus on our specific motivation, \"can an autoregressive Transformer trained on compositional data learn to compose its capabilities\", we choose to limit the purview of this work to a well-defined synthetic domain. This is similar in spirit to recent works that utilize synthetic datasets generated using objects like first-order logic machines, context-free grammars, linear regressors, modular arithmetic, and even board games to establish and understand phenomenology of modern neural networks (Liu et al., 2022;Allen-Zhu & Li, 2023c,a,b;Garg et al., 2022;Li et al., 2023c;Saparov & He, 2022;Chan et al., 2022;Bhattamishra et al., 2020;Zhou et al., 2023;Nanda et al., 2023a,b;Li et al., 2023a;Lubana et al., 2023;Jones, 2021). The goal of such works, including ours, is to develop interpretable demonstrations and mechanistic hypotheses that enable a characterization of the target phenomenology in a controlled setting. Accordingly, we emphasize that we do not intend to develop novel protocols for improving Transformers' ability to compositionally generalize, but rather to demonstrate its existence and understand what drives it. Overall, we make the following contributions.\n• A minimal synthetic setup for characterizing Transformers' ability to compose. We propose a minimal setup involving compositions of predefined functions F (bijections and permutations) that operate on a string of arbitrary tokens (Section 3), which allows us to precisely study the ability of Transformers to compose functions. Motivated by instruction induction and tuning in LLMs (Honovich et al., 2022;Wei et al., 2021), we instantiate a notion of \"task tokens\" which specify what functions are to be applied to the input string. This helps us avoid any ambiguity in task-specification (Shah et al., 2022).\nprotocols for improving compositional generalization in a Transformer; instead, we show that Transformers can learn to compose its capabilities and perform tasks it was never explicitly trained on, with autoregressive training on tokens from a compositional data-generating process. To this end, we define a synthetic task that allows for perfect task specification and which avoids ambiguity from prompt misspecification. While similar to the compositional table lookup task used in prior work (Liška et al., 2018;Csordás et al., 2022), our task involves a much larger set of capabilities to train and test for (3125 or 4 million, depending on the setup, compared to 128 capabilities in prior work). Second, we aim to understand the extent of compositional generalization in a Transformer trained on our proposed domain, i.e., what kind of compositions does the model fail to perform and when. 
We define a framework to precisely characterize these failures modes and use the popular linear probing protocol for understanding model internals to show the critical role of attention layers in enabling compositionality (Li et al., 2023a). Finally, we analyze the impact of step-wise inference protocols, wherein intermediate outputs generated by the model are recursively passed to it as inputs, and which has been used for solving several challenging benchmark tasks recently (Suzgun et al., 2022;Wei et al., 2022b). Similar to our work, Li et al. (2023c) study step-wise inference in Transformers trained on synthetic data from a compositional data generating process. However, there are notable differences-we show that Transformers compositionally generalize to combinatorially many new functions and carefully controlling the training data allows us to highlight the benefit of step-wise inference. Furthermore, Li et al. (2023b) study compositionality with prompts used for in-context learning (Garg et al., 2022), while our synthetic setup avoids ambiguity in specifying the compositions. Many other works that study whether Transformers can compositionally generalize (Csordás et al., 2021a;Ontanón et al., 2021), focus on compositionality within a single forward pass, i.e., the model is not allowed to recursively process its inputs. We find the use of intermediate outputs significantly simplifies the problem and, given its popularity in practical scenarios (Kojima et al., 2022;Wei et al., 2022b), our results serve as a demonstration that inference protocols that allow Transformers to recursively refine their outputs can lead to a wide range of capabilities, especially ones that we never explicitly train the model for." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Formalizing capabilities and compositions", "publication_ref": [ "b35", "b70" ], "table_ref": [], "text": "As noted by Hupkes et al. (2020), despite extensive work exploring compositionality in neural networks, the term is often used for several related concepts. To avoid ambiguity, we thus present a definition of a \"compositional model\" that captures our intended notion and, correspondingly, describe the data-generating process used in this work to understand Transformers' ability to compose. Let F denote a set of predefined automorphisms, i.e., any given function F from the set defines a map between points from its input space to the same space. This is motivated by the fact that the input and output domain of a language model are generally the same. We define an input x as a combination of two strings [x f , x d ], where x f ∈ X L f is a sequence of L tokens that specify a series of L functions from F, and x d ∈ X K d denotes a sequence of K tokens to which the series of L functions are applied to. We refer to x f as task tokens and to x d as data tokens. For example, let x Fi be the identifier that denotes that function F i is applied to the data tokens and x d k denote the k th token from the vocabulary X d . Assume L = 2 and k = 1 and define a sample\nx = [x F1 , x F2 , x d1 ]. Then, a model M : X L f × X K d → X K d that takes x as input, is expected to produce the output F 2 • F 1 (x d1 ). We use [L] to denote the ordered set (1, 2, . . . , L).\nA capability, in our setup, is defined as the ability of a model to accurately represent a function F ∈ F. 
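To make the formalism concrete, the following minimal Python sketch (illustrative only; the function names, vocabulary, and task-token identifiers are ours, not the paper's code) represents a sample x = [x_f, x_d] and computes the target output F_L • ⋯ • F_1(x_d):

```python
# Minimal sketch of the task formalism: a sample is [task tokens, data tokens],
# and the target is F_L applied last, ..., F_1 applied first, to the data tokens.
# All names below are illustrative, not from the paper's released code.

DATA_VOCAB = list(range(10))          # X_d: data-token vocabulary

# A "capability" is an automorphism on sequences over X_d. Two toy examples:
def shift_by_3(tokens):               # element-wise bijection on token values
    return [(t + 3) % 10 for t in tokens]

def reverse(tokens):                  # permutation of token positions
    return list(tokens[::-1])

FUNCTIONS = {"F1": shift_by_3, "F2": reverse}   # F, indexed by task-token ids

def target_output(task_tokens, data_tokens):
    """Apply the named functions in sequence: the first-listed function acts first."""
    out = list(data_tokens)
    for name in task_tokens:
        out = FUNCTIONS[name](out)
    return out

x_f = ["F1", "F2"]                    # task tokens (L = 2)
x_d = [4, 1, 7, 0, 2, 9]              # data tokens (K = 6)
print(target_output(x_f, x_d))        # expected output: F2(F1(x_d))
```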
We emphasize that we do not expect pretrained models in practice to perfectly implement an arbitrary function; however, this idealized definition affords us precision and allows us to use accuracy over a random set of inputs to claim a model possesses a certain capability. Based on this definition, we intend to understand the set of capabilities-or the set of functions-that a Transformer can implement by composing them. We formalize this as follows.\nDefinition 3.1 (Compositionality.). We say a model M(.) compositionally generalizes if, for any subset of functions F_i ∈ F, where i ∈ [L], M([x_F1, x_F2, ⋯, x_FL, x_d]) = F_L • ⋯ • F_2 • F_1(x_d).\n[Figure 2: schematic of in-order and out-of-order compositions of the functions F^(l)_i for L = 5 positions with N = 4 choices per position (plus the identity I), annotated with the number of compositions and the number of displacements.]\nIn practical scenarios, we would not expect the pretraining data to present a capability in all possible scenarios that it can be used in. For example, simple arithmetic tasks like multiplication are often only seen in the context of numbers with 1-3 digits in web-crawled data (Razeghi et al., 2022), which leads to an inability of the model to perform multiplication on higher-order numbers. To model this in our setup, we create a spurious correlation between a subset of the functions from F and the position of their identifiers in the task tokens x_f. Specifically, we define F^(l) ⊂ F as the set of functions that are allowed at position l in the task tokens x_f. We let |F^(l)| = N for all locations l, i.e., F is partitioned into equally sized subsets and |F| = N × L. The notation F^(l)_i, where i ∈ [N] and l ∈ [L], is used to denote the i-th possible function at position l. Based on the above, we define two ways to compose L functions: in-order and out-of-order (see Fig. 2). Definition 3.2 (In-order vs. out-of-order compositions.). Consider the composition F̃ = F^(l_L) • ⋯ • F^(l_2) • F^(l_1)(.), where l_i ∈ [L]. Denote the ordered set (l_1, l_2, . . . , l_L) as order(F̃). If order(F̃) equals the set [L], we say F̃ is an in-order composition; else, we say it is out-of-order.\nConsider a model M that perfectly encodes all N × L functions from the set F. If the model can generalize to in-order compositions of these functions, then its set of capabilities will in fact grow to exponentially many functions-N^L, to be precise. Further, the ability to compose out-of-order can increase this set combinatorially, i.e., proportional to (N × L)^L, growing even more quickly compared to the set of in-order compositions. Such an \"explosion of capabilities\" would imply that it is difficult to characterize the set of all tasks that a pretrained model can perform, especially since the pretraining data used for training a model is generally unknown and hence it is hard to even characterize what \"atomic\" capabilities the model possesses. In our experiments, we find that while Transformers can generalize to both in-order and out-of-order compositions, the pretraining dataset for enabling out-of-order generalization must exhibit some-albeit not huge-diversity (we quantify this further when discussing our results).
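The counting above can be made concrete with a short sketch. Note that the sketch counts the identity as an allowed choice at every position (which is how the concrete numbers 5^5 = 3125 and 21^5 ≈ 4 million quoted later in the text arise); dropping the identity recovers the N^L and (N × L)^L counts stated here. All names are illustrative:

```python
# Illustrative counting for the setup of Fig. 2: L = 5 positions, N = 4 functions
# per position, with the identity allowed at every position.
L, N = 5, 4

num_in_order = (N + 1) ** L          # 5^5 = 3125 in-order compositions
num_out_of_order = (N * L + 1) ** L  # 21^5 = 4,084,101 out-of-order compositions

def is_in_order(composition):
    """composition: one entry per position, either None (identity) or a pair
    (slot, index) meaning the function F^(slot)_index is placed at that position."""
    return all(f is None or f[0] == pos + 1 for pos, f in enumerate(composition))

print(num_in_order, num_out_of_order)
# F^(1)_2 at position 1 and F^(3)_4 at position 3 (rest identity): in-order.
print(is_in_order(((1, 2), None, (3, 4), None, None)))   # True
# Moving F^(3)_4 to position 2 makes the composition out-of-order.
print(is_in_order(((1, 2), (3, 4), None, None, None)))   # False
```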
To empirically characterize out-of-order compositions and discuss the failure modes thereof, we find it useful to define the following notion of displacement (see Fig. 2). Definition 3.3 (Displacement.). Let D(s, s′) denote the Hamming distance between two ordered sets s and s′. Then, the displacement of a composition F̃ is defined as D(order(F̃), [L]).\n[Figure 3: example prompts for the composition F^(5)_3 • F^(4)_3 • F^(3)_2 • F^(2)_4 • F^(1)_1(x) in (a) the direct prompting format and (b) the step-by-step prompting format.]" }, { "figure_ref": [ "fig_31", "fig_3", "fig_2" ], "heading": "Experimental Setup and Data-Generating process", "publication_ref": [ "b38", "b59" ], "table_ref": [], "text": "Having defined our notion of compositionality in a pre-trained model, we now briefly discuss the experimental setup used in this work (see Appendix A for details). Specifically, our data-generating process yields inputs consisting of a sequence of 6 data tokens, x_d ∈ X_d^6, where each token is drawn from a vocabulary of size |X_d| = 10. Each of the 6 elements is drawn uniformly at random, with replacement, from X_d. We consider two families of functions defined over these data tokens: bijections and permutations (see Fig. 10). Specifically, the set F_b (which we refer to as bijections) consists of all functions that apply a bijection on each of the 6 tokens in an element-wise manner. The number of such functions is the number of bijections on a single token: there are 10! such functions when |X_d| = 10. The second set is F_p, which is the set of all permutations of 6 elements (|F_p| = 6!). The rationale for selecting these function families is that both F_b and F_p are groups with function composition as the group operator. Consequently, the composition of two functions is also a group element.\nWe consider two formats for representing a sample (see Fig. 3). Both formats start with task tokens x_f that specify the sequence of functions to compose, followed by the data tokens x_d. The direct prompt format follows this with the final output of the function composition, while the step-by-step prompt format follows this with all intermediate outputs of the function composition, similar to chain-of-thought and related protocols (Kojima et al., 2022; Nye et al., 2021; Wei et al., 2022b).\nWe also control the set of task tokens seen during training. In particular, we control compositions in the training data to either only contain in-order compositions, or also include out-of-order compositions. The training data for random contains task tokens corresponding to a random subset of the set of all possible in-order compositions. The training data for base contains task tokens where at most one position in the composition is not the identity function. For example, if we consider N = 4 and L = 5 as in Fig. 2, then base contains compositions of functions where at least four of the five positions are identity, totalling 21 functions overall. The set of functions base helps us assess whether mere learning of \"atomic\" capabilities is sufficient to yield compositionality in a model (see Appendix A.2).\nWe generate 100K samples using the process above for a given prompt format (step-by-step or direct) and with restrictions on the task tokens (in-order, out-of-order, base, random). The model is autoregressively trained on this data using the cross-entropy loss (see Appendix A).
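A minimal sketch of this data-generating process, under the assumptions stated above (vocabulary of 10 tokens, 6 data tokens, element-wise bijections and position permutations); the helper names and task-token identifiers are hypothetical, not the paper's released code:

```python
import random

VOCAB = list(range(10))   # X_d: |X_d| = 10
K = 6                     # number of data tokens

def random_bijection():
    """Element-wise bijection: one random lookup table applied to every token value."""
    table = VOCAB[:]
    random.shuffle(table)
    return lambda toks: [table[t] for t in toks]

def random_permutation():
    """Permutation of the K token positions."""
    order = list(range(K))
    random.shuffle(order)
    return lambda toks: [toks[i] for i in order]

def make_sample(funcs, step_by_step=True):
    """Build one training sequence: task tokens, data tokens, then output(s)."""
    x_d = [random.choice(VOCAB) for _ in range(K)]
    task_tokens = [f"T{i}" for i in range(len(funcs))]   # placeholder task-token ids
    outputs, cur = [], x_d
    for f in funcs:
        cur = f(cur)
        outputs.append(cur)
    if step_by_step:                                      # include all intermediate outputs
        return task_tokens + x_d + [t for block in outputs for t in block]
    return task_tokens + x_d + outputs[-1]                # direct: final output only

def displacement(order, L=5):
    """Def. 3.3: Hamming distance between order(F~) and (1, ..., L)."""
    return sum(1 for i, l in enumerate(order, start=1) if l != i)

print(make_sample([random_bijection(), random_permutation()], step_by_step=True))
print(displacement((2, 1, 3, 4, 5)))   # two positions differ from (1, 2, 3, 4, 5)
```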
After training, we evaluate whether the model possesses a capability corresponding to a set of composition of functions, by computing the accuracy of the model completion on 1000 different data tokens. The accuracy of a completion is the average accuracy over the last 6 tokens." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b85" ], "table_ref": [], "text": "In this section, we systematically investigate the capabilities of an autoregressive Transformer trained on synthetic tasks with compositional structure. Broadly, we would like to understand how this structure in the data manifests in the network. We focus on addressing the following questions:\n(1) Do Transformers compostionally generalize to functions not present in the training data and to what extent do they exhibit in-order and out-of-order generalization?\n(2) How do properties of the training data influence in-order and out-of-order generalization?\n(3) Are there differences between direct and step-by-step prompt formats?\n(4) Do Transformers first learn to compose fewer functions before learning to compose many of them?\n(5) What is the role of the attention and feed-forward layers?\n(6) Can another popularly used architecture for autoregressive modeling, e.g., LSTMs, compositionally generalize in our setup?\nWe use nanoGPT (Appendix A), a Transformer with 12 layers with each Transformer block identical to the one in Vaswani et al. (2017). We use the same architecture across all our experiments in this section, but provide ablations that vary the number of layers, attention heads, and embedding dimension in Appendix B.1." }, { "figure_ref": [ "fig_4", "fig_4", "fig_4", "fig_4", "fig_4", "fig_4", "fig_4" ], "heading": "Combinatorial explosion and Exponential growth in capabilities", "publication_ref": [], "table_ref": [], "text": "Do Transformers only generalize to functions present in the training data or do they reflect compositional structure present in data? In Fig. 4, we train on data consisting of a small subset of in-order compositions of bijections F b , in the step-by-step prompt format. We consider the composition of 5 functions in both Figs. 4a and4b. Each position of the composition can be one of four choices, with the four choices at different positions being different in Fig. 4a and the same in Fig. 4b. In addition, any position can also be selected to be identity.\nWe find that Transformers can capture the compositional structure in data and generalize to exponential and combinatorial sets of functions in Figs. 4a and4b, despite being trained on an extremely small subset of function compositions. For example, a Transformer trained on 30-100 function compositions, generalizes to 3125 unseen compositions of these functions almost perfectly. In contrast, we note that LSTMs fail to compositionally generalize in this same setup (Appendix B.2), while Transformers with different numbers of layers and attention heads show compositional generalization (Appendix B.1). This indicates that the inductive bias of the architecture contributes to compositional generalization and any autoregressive model is not guaranteed to succeed. We also observe that base-which serves as a null model that only trains on the atomic capabilities (or functions)-does not compositionally generalize. Overall, then, we note that compositional generalization occurs with the step-by-step prompt format, provided the right architecture and training data are used." 
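As a sketch of this evaluation protocol (the trained model is stood in for by any callable mapping a prompt to a completion; the helper names are illustrative), the accuracy of a completion is the fraction of its last 6 tokens that match the ground truth, averaged over random data-token inputs:

```python
import random

def completion_accuracy(pred_tokens, true_tokens):
    """Average per-token accuracy over the last 6 tokens of a completion."""
    pred, true = pred_tokens[-6:], true_tokens[-6:]
    return sum(p == t for p, t in zip(pred, true)) / 6.0

def capability_accuracy(model_generate, task_tokens, compose_fn, n_inputs=1000, vocab=range(10)):
    """Estimate whether the model has the capability named by task_tokens.

    model_generate: callable(prompt tokens) -> completion tokens (stand-in for the model)
    compose_fn:     ground-truth composition applied to the 6 data tokens
    """
    total = 0.0
    for _ in range(n_inputs):
        x_d = [random.choice(list(vocab)) for _ in range(6)]
        pred = model_generate(task_tokens + x_d)
        total += completion_accuracy(pred, compose_fn(x_d))
    return total / n_inputs

# Example with a perfect "model" for a single element-wise bijection:
bijection = lambda toks: [(t + 1) % 10 for t in toks]
oracle = lambda prompt: bijection(prompt[-6:])
print(capability_accuracy(oracle, ["T0"], bijection))   # 1.0
```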
}, { "figure_ref": [ "fig_4", "fig_6", "fig_6" ], "heading": "In-order vs. Out-of-order generalization", "publication_ref": [], "table_ref": [], "text": "How do biases in the training data influence a Transformer's ability to compose? Are Transformers capable of both in-order and out-of-order generalization or does it depend on the nature of training data? For the functions in Fig. 4a, the number of in-order compositions is 5 5 = 3125 and the number of out-of-order compositions is a whopping (21) 5 = 4084101; essentially all of these functions are different from the ones seen in the training data. Like in Section 4.1, we only consider Transformers trained with the step-by-step prompt format on functions from the set of bijections F b . In Fig. 5, we consider the training data to have functions from base, some in-order and some out-of-order compositions. We fail to see in-order or out-of-order generalization unless the data also includes in-order or out-of-order compositions respectively. However, a small number of The choice of 5 functions are identical across all 5 positions of the composition which means there are 3125 different ways to compose them; only 1365 of them are unique. Both figures are evidence that one can train on a small number of compositions of functions (around 31-100) and generalize to exponentially (a) and combinatorially (b) many functions that would be considered \"out-of-distribution\".\nin-order (10 of them) or out-of-order compositions (100 of them) in the training data results in in-order generalization or limited out-of-order generalization. All scenarios in Fig. 5 do not fully generalize to out-of-order compositions. This indicates that out-of-order compositions may require a lot more data compared to in-order compositions." }, { "figure_ref": [ "fig_7", "fig_31", "fig_7", "fig_4", "fig_4", "fig_7", "fig_2" ], "heading": "Direct vs. step-by-step compositions", "publication_ref": [], "table_ref": [], "text": "Both Sections 4.1 and 4.2 discuss experiments using the step-by-step prompt format, but do these results also hold for direct prompting? Fig. 6 (left) and Fig. 15 answer this in the negative. Specifically, in Fig. 6 (left), we consider a setup identical to Fig. 4a and train on a different number of random functions. Transformers fail to generalize to new in-order compositions with direct prompting when we consider compositions of bijections from F b . We observe this failure even if we train on 2000 of the 3125 possible in-order compositions of functions, i.e., even if the data has high diversity. In contrast, in Fig. 4a, a mere 100 compositions in the step-by-step format suffices to generalize to all possible in-order compositions.\nOn the other hand, we see in-order generalization if a Transformer is trained on a composition of a a permutation function from F p and a bijection function from F b . In Fig. 6 (right), we train on compositions of two functions, where one position is one of 25 bijections, and the other is one of 25 permutations. We vary the number of compositions seen in the training data and find that 250 compositions in the training data are enough for the model to generalize to all 625 possible compositions of the two functions. We note that bijections and permutations operate on orthogonal features of the input: bijections operate on the value of the token while permutations operate on the position of the token. We speculate that this is important for compositional generalization in the direct prompt format. 
[Figure 5 caption (fragment): models are evaluated on the combinatorial set of functions generated from 20+1 functions (one of them being identity). The x-axis varies the number of displacements and the y-axis varies the number of compositions-equivalently, the number of functions that are not identity. Observations: (1) a Transformer trained on just 31 functions (top-middle) generalizes to nearly exponentially many, or 3125, compositions of functions; (2) none of the above configurations generalize perfectly to the entire combinatorial set, but they partially generalize to nearly 4 million compositions of functions, and generalization is worse as the number of compositions or displacements increases (see Fig. 2 for a pictorial description of displacements).]" }, { "figure_ref": [ "fig_31", "fig_7" ], "heading": "Why is compositional generalization harder for direct prompts? (Appendix C.3)", "publication_ref": [ "b54" ], "table_ref": [], "text": "The ability to run multiple forward passes through the model allows us to tackle a richer class of problems (Merrill & Sabharwal, 2023). The step-by-step and direct prompt formats differ because the former allows L forward passes through the model, while the latter only allows one forward pass. As a result, we expect that, for the direct prompt format to enable compositional generalization, the model must compute the L steps of the composition in its intermediate layers within a single forward pass. For example, consider a model that computes the functions F and G, and is able to compositionally generalize to the function G • F. Since G • F is computed using a single forward pass, G must occur in a layer after F (see also Fig. 11b). However, this model may not generalize to F • G, since that will require F to occur after G in the model's layers. Hence, to compositionally generalize to both combinations of F and G, a model may have to learn copies of F and G at multiple layers. This will likely require training data with large amounts of diversity so that most combinations of functions are seen by the model during training itself.\nWe further formalize the intuition above in Appendix C. Specifically, in Appendix C.3, we argue that a model trained with the direct prompt format requires more compositions in the training data, by a factor of O(L), compared to a model trained with the step-by-step format. [Figure 6 (right) caption: we train a Transformer using the direct prompt format on a composition of two functions, with one function being one of 25 bijections and the other being one of 25 permutations (totalling 625 compositions); the model is able to compose previously unseen combinations of functions when trained on 250 of these compositions.] In Theorem C.2, we prove that there exists an L-layer Transformer that can compositionally generalize with direct prompting. However, empirically, we find that even with the additional training data, the direct prompt format fails to generalize in Fig. 6 (left). This is because the existence of a solution need not guarantee that a Transformer trained with gradient descent converges to that particular minimum. The weights can instead converge to a minimum that only memorizes compositions present in the training data."
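The contrast can be summarized in a schematic sketch (hypothetical interface: model_step stands in for one forward pass / one generated block of a trained model; the toy stand-in below only implements the step-by-step behaviour): the step-by-step protocol invokes the model L times, feeding each intermediate output back into the prompt, whereas the direct protocol must produce the final answer in a single pass.

```python
def direct_inference(model_step, task_tokens, data_tokens):
    """One forward pass: the model must internally compute all L steps."""
    return model_step(list(task_tokens) + list(data_tokens))

def step_by_step_inference(model_step, task_tokens, data_tokens, L):
    """L forward passes: each pass appends one intermediate result to the prompt."""
    prompt = list(task_tokens) + list(data_tokens)
    for _ in range(L):
        intermediate = model_step(prompt)   # model predicts the next 6-token block
        prompt = prompt + intermediate      # recursively feed the output back in
    return prompt[-6:]                      # the last block is the full composition

# Toy stand-in "model": applies the function named by however many steps were emitted.
# In practice model_step would be autoregressive decoding of the trained Transformer.
FUNCS = {"A": lambda t: [(x + 1) % 10 for x in t], "B": lambda t: t[::-1]}
def toy_step(prompt):
    task = [p for p in prompt if isinstance(p, str)]
    done = (len(prompt) - len(task)) // 6 - 1        # number of blocks already emitted
    return FUNCS[task[done]](prompt[-6:])

print(step_by_step_inference(toy_step, ["A", "B"], [1, 2, 3, 4, 5, 6], L=2))  # B(A(x))
```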
}, { "figure_ref": [ "fig_4", "fig_9", "fig_4", "fig_20", "fig_9", "fig_9", "fig_20", "fig_9", "fig_11", "fig_4" ], "heading": "Towards a mechanistic understanding", "publication_ref": [ "b58", "b87", "b15", "b44", "b61", "b62", "b26", "b60" ], "table_ref": [], "text": "In this section, we try to uncover the underlying mechanism for compositional generalization exhibited by Transformers in our setup-particularly for compositions of bijections in the step-by-step prompt format. Prior work on mechanistic interpretability often studies smaller neural networks to extract insights for larger networks (Nelson et al., 2021;Wang et al., 2022;Chughtai et al., 2023). The rationale relates to the universaility hypothesis (Li et al., 2015;Olah et al., 2020), which states that networks of different scales are likely to learn similar functions when trained on the same data. In line with this direction, we attempt to understand a 1-layer Transformer1 trained on our data generating process.\nTo develop a hypothesis for our mechanistic evaluation, we first show in Appendix C.1 the existence of 1-layer Transformers that can compositionally generalize to a simplified version of our task via the step-by-step prompt format. In particular, our construction uses the attention layer to copy the relevant task token-similar to an induction head (Olsson et al., 2022)-and the feed-forward layer to compute an single step of the function composition. The model is run L times serially, where each run computes one step of the function composition. The attention layer uses a position encoding as the key and query to determine which tokens to attend to and propagates the task token as the value.\nWe next evaluate if the theoretical construction, even though a simplification, lines up with empirical 1In fact, we use a deeper model in most experiments in the main paper to elicit maximal performance when using the direct format; the step-by-step format, as we argue in Appendix C, can generalize compositionally with fewer layers (one, for in-order generalization). Linear probe accuracy (%)\nAfter attention layer After MLP layer We compute the linear probe accuracy-averaged over in-order compositions of functions-after the MLP and attention layers at every layer of the model. (Right.) Attention is largest at the relevant data and task token. We plot the causal attention mask of a 1-layer Transformer trained using the step-by-step format on compositions of 5 in-order bijections (setup of Fig. 4). Keeping the prompt fixed to a specific composition of functions, we plot the attention map averaged over 1000 samples. We observe that the current data token attends to the a specific task relevant to compute the next step of the composition.\nx f x d f 1 f 2 f 3 f 4 f 5 x f x d f 1 f 2 f 3 f 4 f 5 0.\nevaluations on the actual task. Specifically, we first use linear probing to understand which layers contribute to improvements in the accuracy and then visualize the attention maps to understand which tokens the model attends to.\nLinear probe accuracy. In Fig. 7 (left), we use a linear probe to analyze the importance of attention layers and MLP layers. Following Geva et al. (2022), we fix the parameters of probe to the last linear layer, i.e., the unembedding layer of the trained model. We use a Transformer trained on 100 random in-order compositions of 5 functions identical to the model in Fig. 4a. In Fig. 14 we show the results of linear probe experiments on Transformers of different sizes. 
In Transformers of different sizes, we note a sharp increase in accuracy right after an MLP layer, i.e., the accuracy rarely increases after an attention layer.\nVisualizing attention maps. Analyzing the attention maps of a 12-layer Transformer for a discernible pattern can be difficult. We hence analyze the attentin maps of a 1-layer Transformer trained for step-by-step prompts, which surprisingly also exhibits in-order generalization. In Fig. 7 (right), we plot the attention map for a predefined composition of functions from the set F b . Keeping the task tokens to be fixed corresponding to the predefined composition, we sample 1000 data tokens and compute the attention map for the 1-layer model. The average of these maps is reported in the figure. We see that all data tokens attend to: (i) the task token that specifies the current function to be computed and (ii) the data token that the function is to be applied to.\nThe results above remarkably line up with our theoretical construction. For example, the attention maps in Fig. 7 always attend to the relevant task tokens and data token when computing the next step of the composition. The task and data tokens are all embedded in orthogonal spaces, similar to our construction, with the exception of 5 tokens which all correspond to the the identity function (see Appendix B.7). In parallel, the linear probe accuracy for a 1-layer Transformer in Fig. 14 shows no increase in accuracy after the attention layer (similar to results in Fig. 7), but a sharp increase in accuracy occurs after the MLP layers, indicating that the function is entirely computed in the MLP layers. Okawa et al. (2023) show that different capabilities can emerge multiplicatively over the course of training, i.e., a Transformer first learns functions F 1 and F 2 before it learns compositions like F 1 • F 2 . In Fig. 8, we track the accuracy over the course of training to understand if compositions of fewer functions are learned before compositions of many functions. The setup for this figure is identical to Fig. 4a with the accuracy faceted by the number of function compositions. We find that the order in which functions are learned depends entirely on the training data. If the training data consists of base and very few in-order compositions, then a Transformer generalizes to fewer compositions (more identities) first before generalizing to compositions of multiple functions. On the other hand, if the model is trained on 25 random in-order compositions, then it is better at generalizing to more complex compositions of these functions; this trend is lost when we train on 50 random in-order compositions." }, { "figure_ref": [], "heading": "Training dynamics", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_12" ], "heading": "Conclusion", "publication_ref": [ "b27" ], "table_ref": [], "text": "Given several recent works focused on prediction or elicitation of capabilities in pretrained models, we ask whether the very motivation guiding these works is tractable: can we possibly characterize all capabilities of a model, specifically a Transformer, pretrained on a compositional data domain? To address this question, we proposed a synthetic, but well-defined, data domain and formalized the notion of a capability as representing a function defined over the domain. Breaking compositional generalization into two relevant scenarios (in-order vs. 
out-of-order), we showed that the compositional structure of the data forces a model to learn to compose at relatively minimal data diversity, which indicatively address our primary question: an appropriate prompt could make the model compose its capabilities, yielding an \"explosion of capabilities\". This can arguably make tractable analysis of capabilities in a pretrained model relatively difficult. Transformer architecture We use nanoGPT2 with 12 layers, 12 attention heads and an embedding dimension of size 120. Each transformer block contains a causal attention layer, layer-norms, residual connections and an MLP (see Fig. 9). The MLP contains two fully-connected layers sandwiched by a GELU layer (Hendrycks & Gimpel, 2016) The first fully-connected layers has a hidden layer with size 4 times the embedding dimension (480) and the second hidden layer has a size equal to the embedding dimension (120)." }, { "figure_ref": [], "heading": "A Experimental Details", "publication_ref": [ "b65" ], "table_ref": [], "text": "The input tokens are converted to one-hot vectors before being passed through to the model. We do not use dropout or biases in the LayerNorm layers. We use weight-tying (Press & Wolf, 2016), i.e., the input and the output embedding layers share weights. Finally, we make use of mixed-precision (bf16 in torch) to speed-up training.\nLoss and Optimizer Models are trained using an autoregressive objective to predict the next token using the cross-entropy loss. Specifically, assume a sequence of tokens of t tokens denoted by x 1:t . Let p w (y | x 1:t ) denote the probability distribution over the next token as predicted by a model with weights w. For a sequence x 1:T of length T , the autoregressive objective is\nL(w) = - T -1 t=1 log p w (y = x t+1 | x 1:t ) .\nTraining is performed for 100 epochs with a cosine-annealed scheduled with warmup. We use an initial learning rate of 3e-4 annealed eventually to 6e-5. We use AdamW as the optimizer (β 1 = 0.9 and β 2 = 0.95) with a weight decay of 10 -3 and a batch-size of 512. We also make use of gradient clipping with a magnitude of 1." }, { "figure_ref": [ "fig_31" ], "heading": "A.2 Data generating process", "publication_ref": [], "table_ref": [], "text": "Data and task tokens. Both data and task tokens are converted to one-hot vectors before being fed to the Transformer. The set of data tokens is denoted by X d and the size of the vocabulary, |X d |, is 10 in all our experiments. The data tokens in the input x d ∈ X 6 d is a sequence of 6 tokens and is the input to the function composition. The 6 tokens are sampled uniformly at random from X d with replacement.\nThere are two sets of functions considered in this work. The set of functions F b (which we refer to as bijections) applies a lookup table in an element-wise fashion to each of the 6 tokens in x d . The set of functions in F p permute the 6 tokens in x d . The family of functions in F b and F p are described in Fig. 10. Each function from F p and X b has its own task token in X F .\nThe input starts with a sequence of L task tokens x f ∈ X L F . The number of compositions is generally L = 5, but in a few experiments like Figs. 15,6 (Right), L = 2." }, { "figure_ref": [ "fig_7", "fig_2", "fig_31", "fig_31", "fig_31", "fig_3", "fig_6", "fig_31" ], "heading": "Sampling task tokens", "publication_ref": [], "table_ref": [], "text": "The task tokens can be sampled such that they satisfy certain properties. 
For example, let us consider the composition of two functions-one from the set F 1 ⊂ F p and another from F 2 ⊂ F b (which is the setting in Fig. 6 (Right)). We can restrict the training data to compositions from the set F 2 • F 1 which are in-order compositions (see Fig. 2). Alternatively, we can also choose to include out-of-order compositions, which include compositions from \nF 1 • F 1 , F 2 • F 2 and F 1 • F 2 . In\nI • I • F 3 • I • I or I • F 4 • • • • • I.\nThere are a total of 1 + 5 i=1 F i such functions; the 1 is when all 5 functions in the composition are identity. The model is never trained on the composition of two or more functions, and at least compositions of 3 functions are necessary to generalize to all in-order compositions Fig. 18. \nx d 3 x d 9 x d 2 x d 8 x d 7 x d 3 x d 8 x d 3 x d 9 x d 3 x d 2 x d 7 x d F p (x d ) F p ∈ ℱ p = = x d F b (x d ) F b ∈ ℱ b = Set of Bijections ℱ b Set of Permutations ℱ p g g g g g g g : X d ↦ X d xd 1 →\n[x F1 , x F2 , x d , F 1 (x d ), F 2 (F 1 (x d ))]\n(see Fig. 11a) or (ii) The direct format, which does not include the intermediate outputs of the composition in the sequence and an example of such a sequence is\n[x F1 , x F2 , x d , F 2 (F 1 (x g ))]\n(see Fig. 11b).\nThe step-by-step and direct formats are also discussed in Fig. 3. The training data consists of 100,000 sequences for all experiments in one of the two formats.\nEvaluating compositions When evaluating trained models, we evaluate on 1000 different inputs for every composition of functions. Since Fig. 5 requires us to evaluate on a combinatorial set of functions, we sample 1000 functions (or the total number of functions, whichever was lower) for each cell which can be identified by the displacement and number of compositions; we then compute the accuracy averaged over those functions to populate the cell. The accuracy of a completion is calculated by averaging the accuracy of the last six tokens. We see that qualitative trends do not change when we use different metrics Fig. 19." }, { "figure_ref": [ "fig_17", "fig_4" ], "heading": "Computing linear probe accuracy", "publication_ref": [], "table_ref": [], "text": "We consider the outputs after every attention block and every MLP block (including the residual stream in both cases). We then pass these outputs through the final embedding layer and a Softmax layer to get predictions over the next token. We use these predictions to compute the accuracy at that layer. The accuracy is averaged over 1000 different input data tokens and for 200 different compositions of functions. We vary the number of layers, the number of attention heads, and the embedding dimension of the nanoGPT model in Fig. 13. We consider a setup identical to Fig. 4; all models are on 50 random in-order compositions of 5 bijections. We report accuracy averaged over all 3125 in-order compositions." }, { "figure_ref": [], "heading": "B Additional Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B.1 Sweeping hyper-parameters of the Transformer", "publication_ref": [], "table_ref": [], "text": "We make the following observations. (1) Most surprisingly, the accuracy reduces as the number of layers become huge for this compositional task; we expect that this is due to issues with optimization of a large depth model. (2) The accuracy does not change with the number of attention heads for a 1-layer Transformer. 
(3) The accuracy increases as we increase the embedding dimension and the model under fits the training data when the embedding dimension is too small. Compositionality is seen even in a 1-layer Transformer when trained with the step-by-step prompt format on 50 in-order compositions of bijections. However the ability to compose degrades as we increase the number of layers in the Transformer." }, { "figure_ref": [ "fig_7" ], "heading": "B.2 LSTMs do not learn to compose", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "We report results on autoregressively trained LSTMs using the direct prompt format from Table 1 and the step-by-step prompt format in Table 2. LSTMs fail to generalize outside of the training data while Transformers generalize compositionally in both these scenarios. This points to an inductive bias that helps Transformers trained with an autoregressive objective generalize. Specifically, our mechanistic evaluation in Sec. 4.4 shows this is likely attributable to the use of Attention.\nThe LSTMs are trained using the same data using the autoregressive objective defined in Appendix A. We use the AdamW optimizer with learning rate equal to 3e-4 (β 1 = 0.9 and β 2 = 0.95), batch size of 512 and weight decay of 1e-4 for 150 epochs. As is common, we do not use a positional embedding, since the architecture is not permutation invariant. Table 1: LSTMs fail to compose in the direct prompt format. We train an LSTM on 250 composition of two functions (one permutation and one bijection) in the direct prompt format and tabulate the accuracy (%); the setup is identical to Fig. 6 (Right)." }, { "figure_ref": [], "heading": "Hidden dimension", "publication_ref": [], "table_ref": [], "text": "The inputs are passed through an input embedding layer before being passed to the LSTM and the outputs of the LSTM are also passed through a linear layer which outputs the logits. In our experiments, we vary the number of stacked LSTMs (or no. of layers) and the dimension of the internal hidden vector.\nDespite our attempt to train multiple different LSTMs with the best set of hyper-parameters, we observe that they do not show any compositional generalization on all our synthetic setups. This observation is further evidence for our hypothesis that the attention layers are important for compositionality." }, { "figure_ref": [ "fig_9", "fig_9", "fig_20" ], "heading": "B.3 Attention Masks", "publication_ref": [], "table_ref": [], "text": "Detailed setup. We train a 1-layer Transformer on a composition of 50 random in-order compositions of 5 bijections in the step-by-step prompt format. We visualize the attention masks for a fixed sequence of task tokens, averaged over 1000 different data tokens in Fig. 7(right). We found the attention masks to be identical across different choices of the task tokens. Each row corresponds to a causal attention mask for a single token and sums up to 1. At any given row, the attention is over two elements-the task token and the intermediate output of the composition. The five contiguous blocks along the columns correspond to the five steps of composition. These preliminary results indicate that it is possible to build a complete mechanistic understanding of attention for compositional tasks (see also Sec. C). In this section, we consider an experimental setup that is identical to the linear probe experiments in Fig. 7. We compute the probe accuracies for Transformers with different number of layers in Fig. 14. 
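A sketch of the attention-map averaging described here (hypothetical interface: get_attention(prompt) stands in for a forward hook that returns the causal attention weights of the 1-layer model; the task tokens are held fixed while the data tokens are resampled):

```python
import random
import numpy as np

def average_attention_map(get_attention, task_tokens, n_samples=1000, vocab=range(10), k=6):
    """Average the (seq_len x seq_len) attention weights over random data tokens,
    keeping the task tokens (i.e., the composition) fixed."""
    maps = []
    for _ in range(n_samples):
        x_d = [random.choice(list(vocab)) for _ in range(k)]
        maps.append(get_attention(list(task_tokens) + x_d))   # one causal attention matrix
    return np.mean(np.stack(maps), axis=0)

# Stand-in: a fake hook returning a random row-stochastic causal matrix, just to
# show the bookkeeping; a real hook would read the trained model's attention weights
# (and, for step-by-step prompts, the sequence would also include intermediate outputs).
def fake_attention(prompt):
    n = len(prompt)
    w = np.tril(np.random.rand(n, n)) + 1e-9
    return w / w.sum(axis=1, keepdims=True)

avg = average_attention_map(fake_attention, ["T1", "T2", "T3", "T4", "T5"], n_samples=10)
print(avg.shape)   # rows: query positions, columns: key positions
```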
Across all models, we observe that accuracy increases in the last few layers. Furthermore, we also observe a sharp increase in accuracy right after the MLPs in the last few layers of the transformer." }, { "figure_ref": [ "fig_9", "fig_31", "fig_7", "fig_31" ], "heading": "B.4 Probing the layers in Transformers of different sizes", "publication_ref": [], "table_ref": [], "text": "We saw in Fig. 7(right) that the attention masks for a 1-layer model seem to select an input and a task token to operate on at every step of the composition. We hence believe that attention has a huge role in compositionality and propose the following hypothesis: The probe accuracy after some MLPs see a sharp in increase in accuracy because the attention layers play a critical role in selecting the right inputs to pass to the MLP. Specifically, unlike the 1-layer model, we suspect functions are now distributed across the model layers instead of being localized in the first MLP layer. Consequently, similar to the 1-layer model, attention heads at different layers will infer if the relevant functions implemented in MLP layers in that block are part of the prompt; if so, they transfer the input data through said function. Figure 15: Transformers fail to generalize to compositions of even 2 bijections, when trained with the direct prompt format. The curve depicts the accuracy over all 625 in-order compositions of two bijections (25 choices for each bijection) when trained on different subsets of in-order compositions. The model is trained with direct composition. Even if we train on 500 such compositions, the model fails to generalize to the remaining 125 compositions. This is additional evidence that the model is incapable composing bijections through direct composition.\nIn Fig. 6 (Left) we show that Transformers do not learn to compose 5 bijections and only generalize to compositions in the training data. Fig. 15 augments this result and shows that a similar failure occurs even when we consider the composition of just two bijections. Hence the model may not compose some function in the direct prompt format and the step-by-step format with an autoregressive objective is far more amenable to compositions." }, { "figure_ref": [ "fig_31", "fig_31", "fig_31", "fig_31", "fig_25", "fig_4" ], "heading": "B.6 Additional experiments with training data from random and base", "publication_ref": [ "b35" ], "table_ref": [], "text": "In this section, we conduct a collection of analyses for a model trained on in-order compositions of 5 bijections in the step-by-step prompt format. We perform the following experiments: (1) compare how base and random generalize to other in-order compositions (Fig. 16); (2) change the number of random functions in the training data (Fig. 17); (3) limit the maximum number of compositions in the training data and evaluate compositional generalization (Fig. 18); (4) look at alternate evaluation metrics (Fig. 19); and (5) test if the compositions are systematic (Hupkes et al., 2020) (Fig. 20). Avg. accuracy over all 3125 functions (%) Each sub-plot considers compositions of only size 2, 3, 4, 5, respectively. In each plot, we vary the number of such functions that are present int he training data. One exception is when we train on compositions of size 2. In this case, the guided generation accuracy is high, but the free generation accuracy is not. 4a and analyze the accuracy of each of the 20 functions (atomic capabilities) when averaged all instances in which it was used compositionally. 
We breakdown the results to see if certain functions are more accurate when used in compositions compared to others and find that models seem to learn all functions equally well." }, { "figure_ref": [], "heading": "Number of iterations", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_26" ], "heading": "B.7 Token embeddings", "publication_ref": [], "table_ref": [], "text": "We study the token embeddings of the Transformer models and observe that they are similar for models with different number of layers and attention heads (see Fig. 21). We notice a block diagonal structure that separates task tokens from the data tokens. We also observe another block diagonal structure within the task tokens which occurs when we train only on in-order compositions. i } N i=1 for a fixed l, form a block-diagonal in this matrix. We observe similar word embeddings in Transformers of different sizes." }, { "figure_ref": [], "heading": "C Analysis of Step-by-step and Direct Prompt Formats", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C.1 Transformers for the step-by-step prompt format", "publication_ref": [ "b86", "b1", "b92" ], "table_ref": [], "text": "We prove that there exists Transformers that can compositionally generalize in the step-by-step prompt format. Such a constructive proof, similar to (Von Oswald et al., 2023;Ahn et al., 2023;Weiss et al., 2021;Li et al., 2023c), can be used to generate plausible mechanistic hypothesis by highlighting the role of the attention and MLP layers. While the universal approximation theorem suggests that any function can be represented by a wide enough multi-layer perceptron (MLP), the construction suggests that Transformers can represent the same function efficiently.\nDescription of the data. We will operate with a simplified prompt format where a composition of three functions is to be applied to a single input token. The construction can be generalized to compositions of more functions or to multiple input tokens. The input prompt [x F1 , x F2 , x F3 , x d ] has three task tokens and a single data token, and the desired output for this prompt is\n[F 1 (x d ), F 2 • F 1 (x d ), F 3 • F 2 • F 1 (x d )].\nThe position encodings P = p 1 p 2 • • • p 6 are learnable parameters and have dimension d p , i.e., P ∈ R dp×6 . The number of input tokens is d v and the number of task tokens is d f . Both input tokens x d and task tokens x F1 are embedded as a one-hot vector in R dx where d x = d v + d f . The first d v dimensions are used to embed the data tokens and the last d f dimensions embed the task token. Henceforth, both x d and x F1 refer to the corresponding one-hot vectors in R dx . For convenience, we define d = d x + d p . Tying this back to to section 3, observe that\n|X d | = d v and |X f | = d f .\nWe denote the input to the model using Z, which includes the token embedding and position encoding. Specifically, we have\nZ = x F1 x F2 x F3 x F 1 (x d ) F 2 • F 1 (x d ) p 1 p 2 p 3 p 4 p 5 p 6 ,\ni.e., Z ∈ R d×6 . We assume that the position encoding is concatenated to the token embedding as opposed to added to it." }, { "figure_ref": [ "fig_28" ], "heading": "Matrix notation.", "publication_ref": [ "b1", "b85", "b26" ], "table_ref": [], "text": "We use 1 x d to denote a one-hot vector in the space R dv , i.e., it excludes dimensions for the task token. On the other hand, x d denotes a one-hot vector in R dx . 
We use I n×n to denote an identity matrix of size n × n, 1 m×n and 0 m×n to denote matrices of 1s and 0s of size m × n, and 1 n and 0 n to denote matrices of size n × 1.\nDescription of the architecture. Before describing the Transformer architecture, we first define the attention and MLP layers. We use a simplified parameterization of linear attention (Ahn et al., 2023) with weights Q and K. The MLP contains two fully connected layers with a ReLU non-linearity parameterized by the weights W 1 and W 2 . The attention and MLP layers are functions of Z ∈ R d×6 and are defined as:\nAttn Q,K (Z) = (KZ)(M ⊙ Z T QZ),and\nMLP W1,W2 (Z) = W 2 ReLU(W 1 Z), where Q, K ∈ R d×d , W 1 ∈ R d×(d f dv) and W 2 ∈ R (d f dv)×d .\nThe matrix M ∈ R 6×6 enforces causal attention and restricts the attention to inputs from previous time-steps, i.e.,\nM =      1 1 1 • • • 1 0 1 1 • • • 1 . . . . . . . . . . . . . . . 0 0 0 • • • 1     \n.\nWe consider a 1-layer Transformer with an attention layer followed by an MLP layer. We omit layer-norm to simplify the proofs. The function computed by the Transformer is\nTr Q,K,W1,W2 (Z) = MLP (Attn(Z) + Z) + Attn(Z) + Z) .\nHenceforth, we omit the subscripts of Attn, MLP and Tr for brevity. We include a residual connection after both the attention and MLP layers which mirrors a typical Transformer architecture (Vaswani et al., 2017).\nThe output of the Transformer is passed through an unembedding matrix W e followed by a Softmax layer to obtain a probability distribution over the next token denoted by\nP (Y |Z) = Softmax(W e Tr(Z)).\nTheorem C.1. There exists weights P, Q, K, W 1 , W 2 and position encodings P such that an Autoregressive Transformer can compositionally generalize to any prompt\n[x F1 , x F2 , x F3 , x d ].\nThe values of the weights satisfy\nP T P = I 3×3 I 3×3 I 3×3 I 3×3 , Q = 0 d×d 0 d×dp 0 dp×d I dp×dp , K =   0 dv×dv 0 d f ×dv 0 d×dp 0 d f ×dv I d f ×d f 0 d×dp 0 dv×d 0 d f ×d 0 dp×dp   , W 1 =                  1 T x d 1 1 T x d 1 1 T x d 1 • • • 1 T x d 1 1 T x d 2 1 T x d 2 1 T x d 2 • • • 1 T x d 2 . . . . . . . . . . . . 1 T x dv 1 T x dv 1 T x dv • • • 1 T x dv 0 T dv -1 T dv -1 T dv • • • -1 T dv -1 T dv 0 T dv -1 T dv • • • -1 T dv . . . . . . . . . . . . -1 T dv -1 T dv -1 T dv • • • 0 T dv 0 dp×dv 0 dp×dv 0 dp×dv • • • 0 dp×dv                  T d f ×dvcolumns\n, and\nW 2 =                             F i1 (x d1 ) T -x T d1 -x T Fi 1 F i1 (x d2 ) T -x T d2 -x T Fi 1 . . . F i1 (x dv ) T -x T dv -x T Fi 1 F i2 (x d1 ) T -x T d1 -x T Fi 2 F i2 (x d2 ) T -x T d2 -x T Fi 2 . . . F i2 (x dv ) T -x T dv -x T Fi 2\n. . .\nF i T (x d1 ) T -x T d1 -x T Fi T F i T (x d2 ) T -x T d2 -x T Fi T . . . F T (x dv ) -x dv -x Fi T                             T .\nProof. See Appendix C.4. The construction uses the attention layer to aggregate the task token and data token, i.e., attention selects the relevant task token. The query vector of the attention selects the right task using the position encoding. The first layer of the MLP projects the summation of the task and data tokens (present in orthogonal spaces) onto the Cartesian product of the set of task and data tokens. The second layer computes the function and acts similar to a lookup table (Geva et al., 2022).\nThe construction requires the output of the first fully-connected layer has size at least d f d v in order to encode the task and input tokens. 
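For concreteness, the simplified block defined above can be written out directly; this is a sketch with random weights (not the constructed weights of the theorem), with W1 of shape h × d and W2 of shape d × h so that MLP(Z) matches the shape of Z:

```python
import numpy as np

def attn(Z, Q, K, M):
    """Simplified linear attention from the construction: (K Z)(M elementwise* Z^T Q Z)."""
    return (K @ Z) @ (M * (Z.T @ Q @ Z))

def mlp(Z, W1, W2):
    """Two-layer MLP applied column-wise: W2 ReLU(W1 Z)."""
    return W2 @ np.maximum(W1 @ Z, 0.0)

def block(Z, Q, K, M, W1, W2):
    """Tr(Z) = MLP(Attn(Z) + Z) + Attn(Z) + Z (residuals after both sublayers)."""
    A = attn(Z, Q, K, M) + Z
    return mlp(A, W1, W2) + A

rng = np.random.default_rng(0)
d, T, h = 37, 6, 210   # d = token dims + position dims (e.g., 31 + 6); h >= d_f * d_v
Z = rng.normal(size=(d, T))            # columns are the T tokens of the prompt
Q, K = rng.normal(size=(d, d)), rng.normal(size=(d, d))
W1, W2 = rng.normal(size=(h, d)), rng.normal(size=(d, h))
# Causal mask with M[i, j] = 1 for j >= i: output column j only aggregates
# key/value columns i <= j, matching the matrix M written out above.
M = np.triu(np.ones((T, T)))
print(block(Z, Q, K, M, W1, W2).shape)   # (d, T), same shape as the input Z
```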
In our experiments, we set d v = 10 and d f = 21 and hence the number of hidden units must be at least 210. In practice, we require at least 500 hidden units (see Fig. 22), which is not too far from our estimate. We conjecture that the additional hidden units are helpful for optimization." }, { "figure_ref": [], "heading": "C.2 Transformers for the direct prompt format", "publication_ref": [ "b59" ], "table_ref": [], "text": "We also prove the existence of a Transformer for a compositions of bijections in the direct prompt format. Unlike the step-by-step format, the direct prompt format lacks a \"scratchpad\" (Nye et al., 2021) for the intermediates outputs of the composition. In our construction, we use K = 3 Transformer blocks to compute the composition of K functions; the output of the k-th block is the result of the k th step of the composition.\nDescription of the data. We consider the composition of 3 functions with an input prompt denoted by [x F1 , x F2 , x F3 , x d ]. Unlike the step-by-step format, the output is just a single token \n[F 3 • F 2 • F 1 (x d )].\n    , where Z ∈ R d×4 .\nDescription of the architecture. Each Transformer block is defined similar to the step-by-step format, i.e., Block Qi,Ki,Wi1,Wi2 (Z) = MLP i (Attn i (Z) + Z) + (Attn i (Z) + Z), which we henceforth denote by Block i (Z). Unlike the step-by-step format, the model is now composed of 3 blocks corresponding to the 3 steps of the compositional task the model is expected to solve, i.e.,\nTr(Z) = Block 3 (Block 2 (Block 1 (Z))).\nThis input is passed through a Softmax layer to predict a probability distribution over the next token, denoted by P (Y | Z) = Softmax(W e Tr(Z)).\nTheorem C.2. There exist weights P i , Q i , K i , W 1i , W 2i for i ∈ [1, 3] and position encodings P such that the a 3-layer Transformer can compositionally generalize to any prompt of the form [x F1 , x F2 , x F3 , x d ]. The values of the weights satisfy\nQ 1 =     0 d×d 0 d× dp 0 d× dp 0 d× dp 0 dp×d I dp 0 dp× dp 0 dp× dp 0 dp×d 0 dp× dp 0 dp× dp 0 dp× dp 0 dp×d 0 dp× dp 0 dp× dp 0 dp× dp     , Q 2 =     0 d×d 0 d× dp 0 d× dp 0 d× dp 0 dp×d 0 dp× dp 0 dp× dp 0 dp× dp 0 dp×d 0 dp× dp I dp 0 dp× dp 0 dp×d 0 dp× dp 0 dp× dp 0 dp× dp     , Q 3 =    \n0 d×d 0 d× dp 0 d× dp 0 d× dp 0 dp×d 0 dp× dp 0 dp× dp 0 dp× dp 0 dp×d 0 dp× dp 0 dp× dp 0 dp× dp 0 dp×d 0 dp× dp 0 dp× dp\nI dp ,     , K 1 =   0 dv×dv 0 d f ×dv 0 d×dp 0 d f ×dv I d f 0 d×dp 0 dv×d 0 d f ×d 0 dp×dp   , K 2 = K 1 2 , K 3 = K 1 3 , P T 1 P 1 =     1 0 0 1 0 1 0 0 0 0 1 0 1 0 0 1     , P T 2 P 2 =     1 0 0 0 0 1 0 1 0 0 1 0 0 1 0 1     , P T 3 P 3 =     1 0 0 0 0 1 0 0 0 0 1 1 0 0 1 1     , W 11 =                  1 T x d 1 1 T x d 1 1 T x d 1 • • • 1 T x d 1 1 T x d 2 1 T x d 2 1 T x d 2 • • • 1 T x d 2 . . . . . . . . . . . . 1 T x dv 1 T x dv 1 T x dv • • • 1 T x dv 0 1×dv -1 1×dv -1 1×dv • • • -1 1×dv -1 1×dv 0 1×dv -1 1×dv • • • -1 1×dv . . . . . . . . . . . . -1 1×dv -1 1×dv -1 1×dv • • • 0 1×dv 0 dp×dv 0 dp×dv 0 dp×dv • • • 0 dp×dv                  T d f ×dvcolumns , W 12 =                             F i1 (x d1 ) T -x T d1 -x T Fi 1 F i1 (x d2 ) T -x T d2 -x T Fi 1 . . . F i1 (x dv ) T -x T dv -x T Fi 1 F i2 (x d1 ) T -x T d1 -x T Fi 2 F i2 (x d2 ) T -x T d2 -x T Fi 2 . . . F i2 (x dv ) T -x T dv -x T Fi 2\n. . .\nF i T (x d1 ) T -x T d1 -x T Fi T F i T (x d2 ) T -x T d2 -x T Fi T . . . F T (x dv ) -x dv -x Fi T . 
^{T}, W_{21} = W_{22} = W_{23}, and W_{31} = W_{32} = W_{33}.
Proof. See Appendix C.5." }, { "figure_ref": [], "heading": "C.3 Difference between the direct and step-by-step prompt formats", "publication_ref": [ "b54" ], "table_ref": [], "text": "The ability to run multiple forward passes through the Transformer allows us to tackle a richer class of problems (Merrill & Sabharwal, 2023). This ability differentiates the step-by-step and direct prompt formats.
In the step-by-step prompt format, the Transformer makes L different forward passes, while the direct prompt format allows only one forward pass through the model to generate the output. This is also mirrored in our constructions in Appendices C.1 and C.2: a model for the step-by-step prompt format requires only 1 layer, while one for the direct prompt format uses L = 3 layers to compensate for the lack of multiple forward passes. We expect that a Transformer for the direct prompt format cannot circumvent these computations and conjecture that our Transformer construction for the direct format (in Appendix C.5) is efficient with respect to the number of layers.
Conjecture C.3. We conjecture that a Transformer with width poly(|F|) needs O(L) layers in the direct prompt format, compared to the O(1) layers needed in the step-by-step format, in order to compositionally generalize on our synthetic task.
That is, a model must compute all L intermediate outputs of the composition across different layers of the Transformer. We expand on this further in the next subsection. We also note that, as per the universal approximation theorem, it is certainly possible to construct a 1-layer Transformer that generalizes for the direct prompt format; however, such a model must have width exponential in |F| in order to store the |F|^L different functions in a single layer." }, { "figure_ref": [], "heading": "C.3.1 How many training compositions does each prompt format need?", "publication_ref": [ "b55", "b95" ], "table_ref": [], "text": "To further understand the difference between the two prompt formats, we use a (highly simplified) model to reason about the number of function compositions in the training data required for perfect compositional generalization on our task. Let us consider a composition of L functions from F. We assume that the compositions in the training data F^L_train ⊂ F^L are sampled uniformly at random from the set of all compositions.
For this analysis, we assume that the Transformer can perfectly identify which functions to compose, which we ascribe to the attention layers, and focus entirely on capability acquisition, which we hypothesize is carried out by the MLP layers. We assume that a Transformer for the step-by-step prompt format must learn a function (capability) only once, while a Transformer for the direct prompt format must learn the function L different times, once for each layer of the Transformer. If the function composition F^{(l)} ∈ F^L_train occurs in the training data, we assume that the Transformer for the step-by-step format has learned all the capabilities F^{(l)}_i ∈ F^{(l)} for i ∈ [1, L], while a Transformer for the direct prompt format can only learn capability F^{(l)}_i at layer i. These assumptions are informed by Theorems C.1 and C.2.
Detour into the coupon collector's problem. In order to learn all F = |F| capabilities, the training data must contain each capability at least once. We note that this is the coupon collector's problem (Myers & Wilf, 2006): the collector seeks all distinct coupons and receives a coupon, drawn uniformly at random, at every round. The short simulation below can be used to sanity-check the estimates that follow.
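The following is a rough simulation of this simplified model (a sketch rather than an exact reproduction of the analysis: each step-by-step composition is treated as L coupons drawn i.i.d., and the direct format as L independent collectors, mirroring the assumptions above).

```python
import random

def rounds_to_collect(F, coupons_per_round, rng):
    """Rounds of uniform draws until all F coupons have been seen at least once."""
    seen, rounds = set(), 0
    while len(seen) < F:
        rounds += 1
        seen.update(rng.randrange(F) for _ in range(coupons_per_round))
    return rounds

def simulate(F=21, L=5, trials=2000, seed=0):
    rng = random.Random(seed)
    # Step-by-step: every training composition exposes L capabilities (coupons).
    step = sum(rounds_to_collect(F, L, rng) for _ in range(trials)) / trials
    # Direct: each of the L layers acts as an independent collector, so the
    # number of compositions needed is the maximum over L independent runs.
    direct = sum(max(rounds_to_collect(F, 1, rng) for _ in range(L))
                 for _ in range(trials)) / trials
    return step, direct

print(simulate())  # the direct format needs on the order of L times more compositions
```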
The number of rounds corresponds to the number of function compositions in the training data, and we would like to calculate the expected number of rounds required to learn all capabilities. It is a well-known result that the expected number of rounds to collect all F coupons is F H_F, where H_F is the Harmonic number; asymptotically this is O(F log F). Furthermore, the probability that we complete a collection of size f in n rounds is
$$p(L, f) = \frac{F!}{F^{L}} \left\{ {F-1 \atop L-1} \right\},$$
where $\left\{ {F-1 \atop K-1} \right\}$ is the Stirling number of the second kind. In the step-by-step prompt format, we observe L capabilities (or coupons) with every composition. All capabilities are learned if we observe each of them in at least one training sample. The expected number of training compositions N required to learn all capabilities is $O\!\left(\frac{F \log F}{L}\right)$ (see Xu & Tang (2011)). On the other hand, the direct prompt format can be treated as L independent coupon collector problems and must observe each capability once for each of the L layers. The expected number of rounds to learn all capabilities is the expected value of the maximum number of rounds over L independent coupon collector problems. If we apply Chebyshev's inequality, we get
$$P\left(N \geq F H_F + c \log F\right) \leq \frac{\pi^2}{6 c^2 \log^2 F},$$
since the variance of N is upper bounded by $\frac{n^2 \pi^2}{6}$. Hence, the maximum over L different runs is O(F log F) as n → ∞; in other words, the expected number of rounds to learn all capabilities is O(F log F). The expected number of training compositions thus differs by a factor of L between the two prompt formats, which tallies with the observation that a Transformer is expected to learn the same set of capabilities L different times in the direct format.
In practice, we find that Transformers for the direct format can sometimes fail to compositionally generalize, even with a large number of compositions in the training data (Section 4.3). We hypothesize that this is attributable to the optimization landscape, i.e., gradient descent is unable to find weights that compositionally generalize and instead prefers weights that memorize compositions of functions present in the training data. In the direct prompt format, gradient descent must recover the individual capabilities from a set of compositions of bijections, and this is a computationally hard problem since it is similar to finding the minimal generating set of a group (its time complexity is linear in the size of the group, which is O(F^L))." }, { "figure_ref": [], "heading": "C.4 Proof of Theorem C.1", "publication_ref": [ "b62" ], "table_ref": [], "text": "Step 1: Computing the attention layer. The attention layer copies the task tokens onto the relevant data token, similar to an induction head (Olsson et al., 2022). We first compute the query and value matrices of the attention:
$$Z^{T} Q Z =
\begin{bmatrix}
x_{F_1}^{T} & p_1^{T} \\
x_{F_2}^{T} & p_2^{T} \\
x_{F_3}^{T} & p_3^{T} \\
x_d^{T} & p_4^{T} \\
F_1(x_d)^{T} & p_5^{T} \\
(F_2 \circ F_1(x_d))^{T} & p_6^{T}
\end{bmatrix}
\begin{bmatrix}
0_{d \times d} & 0_{d \times d_p} \\
0_{d_p \times d} & I_{d_p \times d_p}
\end{bmatrix}
\begin{bmatrix}
x_{F_1} & x_{F_2} & x_{F_3} & \cdots & F_2 \circ F_1(x_d) \\
p_1 & p_2 & p_3 & \cdots & p_6
\end{bmatrix}
=
\begin{bmatrix}
0 & p_1^{T} \\
0 & p_2^{T} \\
0 & p_3^{T} \\
0 & p_4^{T} \\
0 & p_5^{T} \\
0 & p_6^{T}
\end{bmatrix}
\begin{bmatrix}
x_{F_1} & x_{F_2} & x_{F_3} & \cdots & F_2 \circ F_1(x_d) \\
p_1 & p_2 & p_3 & \cdots & p_6
\end{bmatrix}
= P^{T} P.$$
Our construction considers a P such that p_i = p_{i+3} for all i ∈ [1, 3] and p_i · p_j = 0 for all j ∈ [1, 3] with j ≠ i (a small numerical sketch of such a P is given below).
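For concreteness, here is a tiny numerical check of such a choice of P, with d_p = 3 and the standard basis used purely for illustration; it verifies the block structure of P^T P and of the masked product used by the attention layer.

```python
import numpy as np

# p_1, p_2, p_3 are orthonormal and reused at positions 4-6 (p_i = p_{i+3}).
d_p = 3
basis = np.eye(d_p)                          # columns p_1, p_2, p_3
P = np.concatenate([basis, basis], axis=1)   # P in R^{d_p x 6}

gram = P.T @ P                               # [[I, I], [I, I]] in 3x3 blocks
M = np.triu(np.ones((6, 6)))                 # causal mask
masked = M * gram                            # [[I, I], [0, I]], as computed next

print(np.array_equal(gram[:3, 3:], np.eye(3)))            # True: p_i . p_{i+3} = 1
print(np.array_equal(masked[3:, :3], np.zeros((3, 3))))   # True: lower-left block is zeroed
```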
The mask M converts P T P into an upper triangular matrix, and zeroes out all entries in the lower triangle of the matrix.\nM ⊙ (Z T QZ) = M ⊙ (P T P ) = M ⊙ I 3×3 I 3×3 I 3×3 I 3×3 = I 3×3 I 3×3 0 3×3 I 3×3\nThe attention layer computes\nAttn(Z) = (KZ)(M ⊙ Z T QZ) = (KZ)(M ⊙ P P T ) =   0 dv×dv 0 d f ×dv 0 d×dp 0 d f ×dv I d f ×d f 0 d×dp 0 dv×d 0 d f ×d 0 dp×dp   x F1 x F2 x F3 • • • F 2 • F 1 (x d ) p 1 p 2 p 3 • • • p 6 I 3×3 I 3×3 0 3×3 I 3×3 = x F1 x F2 x F3 0 d 0 d 0 d 0 dp 0 dp 0 dp 0 dp 0 dp 0 dp I 3×3 I 3×3 0 3×3 I 3×3 = x F1 x F2 x F3 x F1 x F2 x F3 0 dp 0 dp 0 dp 0 dp 0 dp 0 dp which when added to Z yields Attn(Z) + Z = 2x F1 2x F2 2x F3 x d + x F1 F 1 (x d ) + x F2 F 2 • F 1 (x d ) + x F3 p 1 p 2 p 3 p 4 p 5 p 6 ,\nif we also include the residual stream to the output of the attention layer.\nStep 2: Computing the MLP layer. After the attention layer, the data and task tokens are aggregated at one location in orthogonal sub-spaces. The MLP uses the task and data token to compute the function. The first fully-connected layer projects the input R dvdt , which uniquely identifies the task and data tokens which is used to retrived the function from W 2 . The first fully-connected layer computes\n(Attn(Z) + Z) T W T 1 =         2x T F1 p T 1 2x T F2 p T 2 2x T F3 p T 3 x T d + x T F1 p T 4 F 1 (x d ) T + x T F2 p T 5 F 2 (F 1 (x d )) T + x T F3 p T 6                          1 T x d 1 1 T x d 1 1 T x d 1 • • • 1 T x d 1 1 T x d 2 1 T x d 2 1 T x d 2 • • • 1 T x d 2 . . . . . . . . . . . . 1 T x dv 1 T x dv 1 T x dv • • • 1 T x dv 0 T dv -1 T dv -1 T dv • • • -1 T dv -1 T dv 0 T dv -1 T dv • • • -1 T dv . . . . . . . . . . . . -1 T dv -1 T dv -1 T dv • • • 0 T dv 0 dp×dv 0 dp×dv 0 dp×dv • • • 0 dp×dv                  =         -2 T dv • • • • • • 0 T dv • • • • • • -2 T dv -2 T dv • • • 0 T dv • • • • • • • • • -2 T dv -2 T dv • • • • • • • • • 0 T dv • • • -2 T dv -1 T dv + 1 T x d • • • • • • 1 T x d • • • • • • -1 T dv + 1 T x d -1 T dv + 1 T F1(x d ) • • • 1 T F1(x d ) • • • • • • • • • -1 T dv + 1 T F1(x d ) -1 T dv + 1 T F2•F1(x d ) • • • • • • 1 T F2•F1(x d ) • • • -1 T dv + 1 T F2•F1(x d )        \nThe above matrix has d f d v columns represented as d f blocks of size d v . The 0 matrix in the first, second and third row occupy d v columns each. In particular, they occupy the blocks j 1 , j 2 and j 3 where\nF i = F ij i , i.e.\nthe block number corresponds to index in the one-hot representation of the task tokens. Let 1 (x,F ) denote a one-hot vector in R dv×d f , i.e., it is a one-hot vector that uniquely identifies the task and data token. We can succincintly express the output after the non-linearity as follows:\nReLU(W 1 (Attn(Z) + Z)) = ReLU((Attn(Z) + Z)\nT W T 1 ) T ) =            0 dv 0 dv 0 dv 0 dv 0 dv 0 dv 0 dv 0 dv 0 dv • • • • • • • • • 0 dv 0 dv 0 dv • • • 1 F1(x d ) • • • 0 dv 0 dv 0 dv 1 x d • • • • • • 0 dv 0 dv 0 dv • • • • • • 1 F2•F1(x d )\n. . . . . . . . . . . . . . . . . 
.\n0 dv 0 dv 0 dv 0 dv 0 dv 0 dv            = 0 dvd f 0 dvd f 0 dvd f 1 (x d ,F1) 1 (F1(x d ),F2) 1 (F2•F1(x d ),F3)\nIncluding the final weight matrix W 2 , we get \nW 2 ReLU(W 1 (Attn(Z) + Z)) = W 2 0 dvd f 0 dvd f 0 dvd f 1 (x d ,F1) 1 (F1(x d ),F2) 1 (F2•F1(x d ),F3) =          0 T d 0 T dp 0 T d 0 T dp 0 T d 0 T dp F 1 (x d ) T -x d -x F1 0 T dp F 2 • F 1 (x d ) -x F1(x d ) -x F2 0 T dp F 3 • F 2 • F 1 (x d ) -x F2•F1(x d ) -x F3\nF 2 • F 1 (x d ) T -x T F1(x d ) -x T F2 0 T dp F 3 • F 2 • F 1 (x d ) T -x T F2•F1(x d ) -x T F3 0 T dp          T +         2x T F1 p T 1 2x T F2 p T 2 2x T F3 p T 3 x T d + x T F1 p T 4 x T F1(x d ) + x T F2 p T 5 x T F2•F1(x d ) + x T F3 p T 6         T = 2x F1 2x F2 2x F3 F 1 (x d ) F 2 • F 1 (x d ) F 3 • F 2 • F 1 (x d )\n2x F1 2x F2 2x F3 F 1 (x d ) F 2 • F 1 (x d ) F 3 • F 2 • F 1 (x d )\nwhich will assign high probabilities to the desired token when passed through a Softmax layer. Hence, a Transformer prompted with [x F1 , x F2 , x F3 , x d ] will auto-regressively generate\n[F 1 (x d ), F 2 • F 1 (x d ), F 3 • F 2 • F 1 (x d )]\nfor any combination of data and task tokens." }, { "figure_ref": [], "heading": "C.5 Proof of Theorem C.2", "publication_ref": [], "table_ref": [], "text": "The details of construction are similar to Appendix C.4.\nStep Using the above, the output of the first attention layer added to the residual stream is\nAttn 1 (Z) + Z = (K 1 Z)(M ⊙ Z T Q 1 Z) + Z = (K 1 Z)(M ⊙ P T 1 P 1 ) + Z = x F1 x F2 x F3 0 0 dp 0 dp 0 dp 0 dp     1 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1     + Z = 2x F1 2x F2 2x F3 x d + x F1 p 1 p 2 p 3 p 4\nNote that W 11 and W 21 are identical to W 1 and W 2 in Equation ( 0), and performing a similar calculation yields Block 1 (Z) = W 21 ReLU(W 11 (Attn 1 (Z) + Z)) + (Attn 1 (Z) + Z)\n= 2x F1 2x F2 2x F3 F 1 (x d ) p 1 p 2 p 3 p 4 = Z B1 .\nWe denote the output of the first Transformer block by Z B1 .\nStep \nK 2 Z B1 = 1 2   0 dv×dv 0 d f ×dv 0 d×dp 0 d f ×dv I d f 0 d×dp 0 dv×d 0 d f ×d 0 dp×dp   2x F1 2x F2 2x F3 F 1 (x d )\np 1 p 2 p 3 p 4 = x F1 x F2 x F3 0 d 0 dp 0 dp 0 dp 0 dp .\nUsing the above, we can compute the output of the attention layer in the second Transformer block which evaluates to\nAttn 2 (Z B1 ) + Z B1 = (K 2 Z B1 )(M ⊙ Z T B1 Q 2 Z B1 ) + Z B1 = (K 2 Z B1 )(M ⊙ P T 2 P 2 ) + Z B1 = x F1 x F2 x F3 0 0 0 0 0    \n1 0 0 0 0 1 0 1 0 0 1 0 0 0 0 1\n    + Z B1 = 3x F1 3x F2 3x F3 F 1 (x d ) + x F2 p 1 p 2 p 3 p 4 .\nThe attention layer uses sub-matrix P 2 of the position encodings to copy the second task token to the data token We repeat the calculations in Equation ( 0 Step 3: Computing the output of the final Transformer block. Unsurprisingly, the calculations for the last Transformer block are almost identical. 
The query matrix is Z T B2 Q 3 Z B2 = P T 3 P 3 and the value matrix is\nK 3 Z B2 = 1 3 x F1 x F2 x F3 0 d 0 dp 0 dp 0 dp 0 dp 3x F1 3x F2 3x F3 F 2 • F 1 (x d ) p 1 p 2 p 3 p 4 = x F1 x F2 x F3 0 d 0 dp 0 dp 0 dp 0 dp .\nThe output of the attention layer in the final block is\nAttn 3 (Z B3 ) + Z B3 = (K 3 Z B2 )(M ⊙ Z T B2 Q 2 Z B2 ) + Z B2 = (K 3 Z B1 )(M ⊙ P T 3 P 3 ) + Z B2 = x F1 x F2 x F3 0 0 0 0 0    \n1 0 0 0 0 1 0 0 0 0 1 1 0 0 0 1\n    + Z B2 = 4x F1 4x F2 4x F3 F 2 • F 1 (x d ) + x F3 p 1 p 2 p 3 p 4 .\nPassing the output of Attn 2 (Z B2 ) through the last MLP, yields the output of the Transformer, which is Tr(Z) = Block 3 (Block 2 (Block 1 (Z)))\n= W 32 ReLU(W 32 (Attn 3 (Z B2 ) + Z B2 )) + (Attn\n3 (Z B2 ) + Z B2 ) = 4x F1 4x F2 4x F3 F 3 • F 2 • F 1 (x d ) p 1 p 2 p 3 p 4 .\nHence, the output of the Transformer is a composition of the three functions F 1 , F 2 and F 3 applied to token x d ." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "RR thanks Kento Nishi, Gautam Reddy and Eric Bigelow for their discussions at the early stages of this project. RR thanks AWS AI, for their gift to Penn Engineering's ASSET Center for Trustworthy AI. ESL was partially supported by the National Science Foundation (IIS-2008151)." }, { "figure_ref": [], "heading": "Author Contributions", "publication_ref": [], "table_ref": [], "text": "ESL and RR conceived the initial project direction and defined the problem setup with with inputs from HT and MK. The experiments were led by RR with inputs from ESL, HT and MK. The writing of the introduction and related work was led by ESL with help from HT and RR. RR, ESL and HT extensively collaborated on the methods section. The results and appendix were led by RR. The expository figures were created by HT and RR. HT and RPD acted as advisors in the work." } ]
Transformers trained on huge text corpora exhibit a remarkable set of capabilities, e.g., performing basic arithmetic. Given the inherent compositional nature of language, one can expect the model to learn to compose these capabilities, potentially yielding a combinatorial explosion of what operations it can perform on an input. Motivated by the above, we train autoregressive Transformer models on a synthetic data-generating process that involves compositions of a set of well-defined monolithic capabilities. Through a series of extensive and systematic experiments on this data-generating process, we show that: (1) autoregressive Transformers can learn compositional structures from small amounts of training data and generalize to exponentially or even combinatorially many functions; (2) generating intermediate outputs when composing functions is more effective for generalizing to new, unseen compositions than not generating any intermediate outputs; (3) biases in the order of the compositions in the training data result in Transformers that fail to compose some combinations of functions; and (4) the attention layers select which capability to apply while the feed-forward layers execute the selected capability.
Compositional Capabilities of Autoregressive Transformers: A Study on Synthetic, Interpretable Tasks
[ { "figure_caption": "Dear [Friend's Name], I hereby notify you, in accordance with applicable legal standards, that I shall be departing for the shopping center forthwith.Sincerely, [Your Name]Tell my friend that I am going to the mall. Write it in legalese.The sum of the digits of the square of the cube of 8 is 1", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Data generating process for in-order and out-of-order compositions. (a) Each of the L = 5 positions is associated with N = 4 functions f [l] i , in addition to an identity function, resulting in a total of 5 × 4 + 1 = 21 basis functions for composition. (b) The in-order compositions select functions within the same position while (c) out-of-order compositions allow for selecting functions across positions. Each position also includes the identity function since it allows us to compute compositions of fewer than 5 functions. In the examples presented in (c), displaced functions are surrounded by a black line, and we then count the number of displaced functions.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Direct v.s. Step-by-step prompts. The task (rainbow) and data (blue) tokens can be completed in two ways. They are followed by: (a) the intermediate outputs of the composition in the step-by-step format or (b) directly by the final result of compositions in the direct format.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Transformers trained on the step-by-step format can generalize to an exponential (a) or combinatorial (b) number of new functions. We plot the accuracy averaged over all compositions of L = 5 bijections, where each position of composition has 4+1 choices, with one of them being the identity function. Each curve corresponds to training data generated by a different subset of functions and the model is trained using the step-by-step prompt format. (a) The choice of 5 functions are different at different positions of composition-there are 21 different functions which can be composed (in-order) in 3125 different ways. (b)The choice of 5 functions are identical across all 5 positions of the composition which means there are 3125 different ways to compose them; only 1365 of them are unique. Both figures are evidence that one can train on a small number of compositions of functions (around 31-100) and generalize to exponentially (a) and combinatorially (b) many functions that would be considered \"out-of-distribution\".", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "of-order functions No. of identity functions in out-of-order composition No. of displacements in out-of-order compostion", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure5: The training data determines if a Transformer generalizes to an exponential (in-order generalization) or combinatorial (out-of-order generalization) number of functions. Each sub-plot uses a different subset of functions (from F b ) to generate the training data and we evaluate them on combinatorial set of functions generated from 20+1 functions (one of them being identity). 
The x-axis varies the number of displacements and the y-axis varies the number of compositions-equivalently the number of functions that are not identity. We make the following observations: (1) A Transformer trained on just 31 functions (top-middle) generalize to nearly exponentially many or 3125 compositions of functions. (2) All the above configurations do not generalize perfectly to the entire combinatorial set. They however partially generalize to nearly 4 million compositions of functions. The generalization is worse if we increase the number of compositions or displacements (see Fig.2for pictorial description of displacements).", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Compositional generalization is less frequently seen in the direct prompt format. (Left.) We train a Transformer using the direct prompt format on 20+1 bijections with 5 compositions with 4 choices at each position. The model fails to generalize to all 3125 compositions even if it trained on 2000 such functions.(Right.) We train a Transformer using the direct prompt forlat on a composition of two functions, with one function being one of 25 bijections and the other function being one of 25 permutations (totalling to 625 compositions). The model is able to compose previously unseen combinations of functions when trained on 250 of these functions in this scenario.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: (Left.) Attention layer picks a function to apply given the current input, and MLP applies the selected function for Transformers trained on compositions of bijections in the step-by-step prompt format. We see a sharp increases in accuracy after MLP layers in the last few layers of the Transformer.We compute the linear probe accuracy-averaged over in-order compositions of functions-after the MLP and attention layers at every layer of the model. (Right.) Attention is largest at the relevant data and task token. We plot the causal attention mask of a 1-layer Transformer trained using the step-by-step format on compositions of 5 in-order bijections (setup of Fig.4). Keeping the prompt fixed to a specific composition of functions, we plot the attention map averaged over 1000 samples. We observe that the current data token attends to the a specific task relevant to compute the next step of the composition.", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure8: A Transformer trained on a random subset of functions generalizes first to a composition of more functions before it generalizes to a composition of few of them. Each line is the average accuracy over all composition of k functions and each subplot is a Transformer trained on a different subset of functions. The base is trained on the individual functions and these Transformers learn to compose a smaller set of functions (more functions in composition are identity) before learning to compose many of them. The opposite is true when the model is trained on a random subset of 25 compositions of functions.", "figure_data": "", "figure_id": "fig_11", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: We use nanoGPT as the Transformer architecture in all our experiments. 
The core Transformer block is a LayerNorm, a causal attention block, followed by another layer-norm and a 2-layer multi-layer perceptron (MLP). The Transformer block has two residual connections.", "figure_data": "", "figure_id": "fig_12", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 6 (Right), we restrict our training and evaluation to in-order compositions of functions and we observe that training on a subset of the elements from F 2 • F 1 suffices to compositionally generalize all functions in the set. Two other commonly used subsets of functions are base and random. Consider F 1 , F 2 , . . . , F 5 ⊂ F b . The set random considers k functions from the set F 5 • F 4 • • • • • F 1 which are drawn uniformly at random. base is used to test if the compositionality is seen when the Transformer is trained on the individual functions from F i for all i ∈ [5]. In the training data, all compositions have 4 of the 5 functions to be the identity function I, i.e it considers compositions of the form", "figure_data": "", "figure_id": "fig_13", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Step-by-step composition v.s. Direct composition. We test two possible routes for compositions. (a) Step-by-step prompting, which allows for generating intermediate outputs. (b) Direct prompting, where the model must compose the functions without the intermediate outputs.", "figure_data": "", "figure_id": "fig_14", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Transformers requires at least 2-3 layers for compositional generalization with the direct prompt format. We vary the number of layers in the Transformer and train on direct composition in a setup identical to Fig. 6 (Right).", "figure_data": "", "figure_id": "fig_16", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure13: We see compositionality in Transformers even if we change the number of layers and attention heads. Compositionality is seen even in a 1-layer Transformer when trained with the step-by-step prompt format on 50 in-order compositions of bijections. However the ability to compose degrades as we increase the number of layers in the Transformer.", "figure_data": "", "figure_id": "fig_17", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: We use a linear probe to study the accuracy at different layers on Transformers of different sizes. Most architectures see an increasing in accuracy in the latter half of the Transformer. The increase in accuracy is more gradual for Transformers with more layers. The accuracy increases sharply after an attention layer across all architectures.", "figure_data": "", "figure_id": "fig_20", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 16 :Figure 17 :1617Figure 16: How do different training datasets generalize to compositions of many and few functions? This is a fine-grained version of Fig. 4a. Model trained on 50 random compositions generalizes poorly compositions of small number of functions while a model trained on the base generalizes poorly to composition of 4 or 5 functions.", "figure_data": "", "figure_id": "fig_22", "figure_label": "1617", "figure_type": "figure" }, { "figure_caption": "Figure 18 :Figure 19 :1819Figure 18: Limiting maximum number of compositions in the training data. 
The figure plots the accuracy on all in-order compositions against the number of training iterations. Each sub-plot considers compositions of size exactly 2, 3, 4, 5, respectively in the training data. The model is able to generalize to most in-order compositions only if the training data consists of compositions of size at least 3 (bottom-right).", "figure_data": "", "figure_id": "fig_23", "figure_label": "1819", "figure_type": "figure" }, { "figure_caption": "Figure 20 :20Figure20: Systematicity. We consider trained models from Fig.4aand analyze the accuracy of each of the 20 functions (atomic capabilities) when averaged all instances in which it was used compositionally. We breakdown the results to see if certain functions are more accurate when used in compositions compared to others and find that models seem to learn all functions equally well.", "figure_data": "", "figure_id": "fig_25", "figure_label": "20", "figure_type": "figure" }, { "figure_caption": "Figure 21 :21Figure 21: Word embedding correlations present a block-diagonal structure that separates data tokens from task tokens. We plot the inner product between all pairs of word embeddings of the tokens. The task tokens are orthogonal to the set of input tokens. Different functions in the same level, i.e. {F (l)", "figure_data": "", "figure_id": "fig_26", "figure_label": "21", "figure_type": "figure" }, { "figure_caption": "Figure 22 :22Figure 22: We see a sharp increase in accuracy as we increase the embedding dimension of the Transformer. The number of hidden units in the MLP of the Transformer is 4 times the size of the embedding dimension.", "figure_data": "", "figure_id": "fig_28", "figure_label": "22", "figure_type": "figure" }, { "figure_caption": "while a Transformer for the direct prompt format can only learn capability F (l) i at layer i. These assumptions are informed by Theorems C.1 and C.2.", "figure_data": "", "figure_id": "fig_29", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "1 :0x01Computing the output of the first block. The first Transformer block computes the first step of the composition. The attention layer in particular, copies the relevant task token to the data token. The value and query matrices of the attention layer in the first Transformer block are Z T Q 1 Z = d×d 0 d× dp 0 d× dp 0 d× dp 0 dp×d I dp× dp 0 dp× dp 0 dp× dp 0 dp×d 0 dp× dp 0 dp× dp 0 dp× dp 0 dp×d 0 dp× dp 0 dp× dp 0 dp× dp F1 x F2 x F3 x d p 11 p 12 p 13 p 14 p 21 p 22 p 23 p 24 p 31 p 32 p 33 p 34 dv×dv 0d f ×dv 0 d×dp 0 d f ×dv I d f 0 d×dp 0 dv×d 0 d f ×d 0 dp×dp   x F1 x F2 x F3 x d p 1 p 2 p 3 p 4 = x F1 x F2 x F3 0 d 0 dp 0 dp 0 dp 0 dp", "figure_data": "", "figure_id": "fig_31", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "2 :02Computing the output of the second block. The second block uses the output of the first Transformer block to compute the second step of the composition. 
We start similarly by computing the query and value matrices of the attention layer, i.e., d×d 0 d× dp 0 d× dp 0 d× dp 0 dp×d 0 dp× dp 0 dp× dp 0 dp× dp 0 dp×d 0 dp× dp I dp× dp 0 dp× dp 0 dp×d 0 dp× dp 0 dp× dp 0 dp× dp", "figure_data": "", "figure_id": "fig_32", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "𝚇 𝚍 𝟼 𝚇 𝚍 𝟻 𝚇 𝚍 𝟼 𝚇 𝚍 𝟺 𝚇 𝚍 𝟼 𝚇 𝚍 𝟿 𝚇 𝚍 𝟶 𝚇 𝚍 𝟽 𝚇 𝚍 𝟶 𝚇 𝚍 𝟻 𝚇 𝚍 𝟶 𝚇 𝚍 𝟾", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "𝟷 𝚇 𝚏(𝟸) 𝟺 𝚇 𝚏(𝟹) 𝟸 𝚇 𝚏(𝟺) 𝟹 𝚇 𝚏(𝟻) 𝟹 𝚇 𝚍 𝟼 𝚇 𝚍 𝟻 𝚇 𝚍 𝟼 𝚇 𝚍 𝟺 𝚇 𝚍 𝟼 𝚇 𝚍 𝟿 𝚇 𝚍 𝟿 𝚇 𝚍 𝟽 𝚇 𝚍 𝟿 𝚇 𝚍 𝟺 𝚇 𝚍 𝟿 𝚇 𝚍 𝟶 ⋯ 𝚂 𝚇 𝚏 (𝟷) 𝟷 𝚇 𝚏 (𝟸) 𝟺 𝚇 𝚏 (𝟹) 𝟸 𝚇 𝚏 (𝟺) 𝟹 𝚇 𝚏 (𝟻) 𝟹 𝚇 𝚍 𝟼 𝚇 𝚍 𝟻 𝚇 𝚍 𝟼 𝚇 𝚍 𝟺 𝚇 𝚍 𝟼 𝚇 𝚍 𝟿 𝚇 𝚍 𝟶 𝚇 𝚍 𝟽 𝚇 𝚍 𝟶 𝚇 𝚍 𝟻 𝚇 𝚍 𝟶 𝚇 𝚍 𝟾 𝚇 𝚍 𝟷 𝚇 𝚍 𝟸 𝚇 𝚍 𝟷 𝚇 𝚍 𝟺 𝚇 𝚍 𝟷 𝚇 𝚍 𝟼 𝚇 𝚍 𝟿 𝚇 𝚍 𝟽 𝚇 𝚍 𝟿 𝚇 𝚍 𝟺 𝚇 𝚍 𝟿 𝚇 𝚍 𝟶", "figure_data": "e.g.) bijection:1)𝚂 𝚇 𝚏 (𝟷)step-by-step intermediate outputs", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "xd 10 xd 2 → xd 4 xd 3 → xd 6 xd 4 → xd 9 xd 5 → xd 3 xd 6 → xd 2 xd 7 → xd 5 xd 8 → xd 7 xd 9 → xd 1 xd 10 → xd 8 Figure 10: A permutation from F p permutes the 6 tokens in the input x d . A bijection from F b applies a lookup table to each of the 6 tokens individually.", "figure_data": "Generating a sequence of tokens Asequence starts with a sequence of twotask tokens x f = [x F1 , x F2 ] followedby a sequence of data tokens x d . Thesequence can either be presented in:(i) The step-by-step format, where theintermediate outputs are also includedin the sequence; e.g., the sequence in the step-by-step format would look=x d 3x d 8x d 9x d 2x d 3x d 7likex d 6x d 7x d 1x d 4x d 6x d 5", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "We train autoregressive LSTMs on 50 in-order compositions of 5 bijections from F b in the step-by-step format and tabulate the accuracy (%); The setup is identical to Fig.4. We evaluate the LSTM on the (left) compositions seen during training and (right) in-order compositions not seen during training. LSTMs fail to generalize to functions outside of the training data while transformers generalize compositionally in the same setting.", "figure_data": "Hidden layer dimensionHidden layer dimensionLayers 1202565121024Layers 120 256 512 1024116.2 36.299.999.919.3 10.3 20.1 22.9260.3 99.399.999.8212.4 21.3 25.3 28.8418.7 100.0 100.09.946.6 13.9 17.6 10.0", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The position encodings are denoted by P = [p 1 , p 2 , . . . , p 4 ] where p i = p T i1 p T i2 p T i3 T and p i ∈ R dp and p ij ∈ R dp/3 . The dimensions d x , d v , d and d p represent the same quantities. We use dp to replace F2 x F3 x d p 11 p 12 p 13 p 14 p 21 p 22 p 23 p 24 p 31 p 32 p 33 p 34", "figure_data": "dp 3 . Theinput to the model is x F1 xZ =  ", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "2x F3 F 1 (x d )", "figure_data": "p 11 p 21p 12 p 22p 13 p 23p 14 p 24  p 31p 32p 33p 34= P T 2 P 2and", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "), with W 21 and W 22 which yieldsBlock 2 (Block 1 (Z))) = W 22 ReLU(W 21 (Attn 2 (Z B1 ) + Z B1 )) + (Attn 2 (Z B1 ) + Z B1 ) = 3x F1 3x F2 3x F3 F 2 • F 1 (x d )", "figure_data": "p 1p 2p 3p 4= Z B2 .", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" } ]
Rahul Ramesh; Ekdeep Singh Lubana; Mikail Khona; Robert P Dick; Hidenori Tanaka
[ { "authors": "A Abid; M Farooqi; J Zou", "journal": "Ethics, and Society", "ref_id": "b0", "title": "Persistent anti-muslim bias in large language models", "year": "2021" }, { "authors": "K Ahn; X Cheng; H Daneshmand; S Sra", "journal": "", "ref_id": "b1", "title": "Transformers learn to implement preconditioned gradient descent for in-context learning", "year": "2023" }, { "authors": "Z Allen-Zhu; Y Li", "journal": "", "ref_id": "b2", "title": "Physics of language models: Part 3.1, knowledge storage and extraction", "year": "2023" }, { "authors": "Z Allen-Zhu; Y Li", "journal": "", "ref_id": "b3", "title": "Physics of language models: Part 3.2, knowledge manipulation", "year": "2023" }, { "authors": "Z Allen-Zhu; Y Li", "journal": "", "ref_id": "b4", "title": "Physics of language models: Part 1, context-free grammar", "year": "2023" }, { "authors": "S Arora; A Goyal", "journal": "", "ref_id": "b5", "title": "A theory for emergence of complex skills in language models", "year": "2023" }, { "authors": "J Austin; A Odena; M Nye; M Bosma; H Michalewski; D Dohan; E Jiang; C Cai; M Terry; Q Le", "journal": "", "ref_id": "b6", "title": "Program synthesis with large language models", "year": "2021" }, { "authors": "S Bhattamishra; K Ahuja; N Goyal", "journal": "", "ref_id": "b7", "title": "On the ability and limitations of transformers to recognize formal languages", "year": "2020" }, { "authors": "R Bommasani; D A Hudson; E Adeli; R Altman; S Arora; S Von Arx; M S Bernstein; J Bohg; A Bosselut; E Brunskill", "journal": "", "ref_id": "b8", "title": "On the opportunities and risks of foundation models", "year": "2021" }, { "authors": "T Brown; B Mann; N Ryder; M Subbiah; J D Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell", "journal": "Advances in neural information processing systems", "ref_id": "b9", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "S Bubeck; V Chandrasekaran; R Eldan; J Gehrke; E Horvitz; E Kamar; P Lee; Y T Lee; Y Li; S Lundberg", "journal": "", "ref_id": "b10", "title": "Sparks of artificial general intelligence: Early experiments with gpt-4", "year": "2023" }, { "authors": "A Chan; R Salganik; A Markelius; C Pang; N Rajkumar; D Krasheninnikov; L Langosco; Z He; Y Duan; M Carroll", "journal": "", "ref_id": "b11", "title": "Harms from increasingly agentic algorithmic systems", "year": "2023" }, { "authors": "S Chan; A Santoro; A Lampinen; J Wang; A Singh; P Richemond; J Mcclelland; F Hill", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b12", "title": "Data distributional properties drive emergent in-context learning in transformers", "year": "2022" }, { "authors": "M Chen; J Tworek; H Jun; Q Yuan; H P D O Pinto; J Kaplan; H Edwards; Y Burda; N Joseph; G Brockman", "journal": "", "ref_id": "b13", "title": "Evaluating large language models trained on code", "year": "2021" }, { "authors": "A Chowdhery; S Narang; J Devlin; M Bosma; G Mishra; A Roberts; P Barham; H W Chung; C Sutton; S Gehrmann", "journal": "", "ref_id": "b14", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "B Chughtai; L Chan; N Nanda", "journal": "", "ref_id": "b15", "title": "A toy model of universality: Reverse engineering how networks learn group operations", "year": "2023" }, { "authors": "R Csordás; K Irie; J Schmidhuber", "journal": "", "ref_id": "b16", "title": "The devil is in the detail: Simple tricks improve systematic generalization of transformers", "year": "2021" }, { "authors": "R 
Csordás; K Irie; J Schmidhuber", "journal": "", "ref_id": "b17", "title": "The neural data router: Adaptive control flow in transformers improves systematic generalization", "year": "2021" }, { "authors": "R Csordás; K Irie; J Schmidhuber", "journal": "", "ref_id": "b18", "title": "Ctl++: Evaluating generalization on never-seen compositional patterns of known functions, and compatibility of neural representations", "year": "2022" }, { "authors": "J A Fodor", "journal": "Harvard university press", "ref_id": "b19", "title": "The language of thought", "year": "1975" }, { "authors": "J A Fodor; E Lepore", "journal": "Oxford University Press", "ref_id": "b20", "title": "The compositionality papers", "year": "2002" }, { "authors": "J A Fodor; Z W Pylyshyn", "journal": "Cognition", "ref_id": "b21", "title": "Connectionism and cognitive architecture: A critical analysis", "year": "1988" }, { "authors": "D Ganguli; D Hernandez; L Lovitt; A Askell; Y Bai; A Chen; T Conerly; N Dassarma; D Drain; N Elhage", "journal": "", "ref_id": "b22", "title": "Predictability and surprise in large generative models", "year": "2022" }, { "authors": "S Garg; D Tsipras; P S Liang; G Valiant", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b23", "title": "What can transformers learn in-context? a case study of simple function classes", "year": "2022" }, { "authors": "I Garrido-Muñoz; A Montejo-Ráez; F Martínez-Santiago; L A Ureña-López", "journal": "Applied Sciences", "ref_id": "b24", "title": "A survey on bias in deep nlp", "year": "2021" }, { "authors": "S Gehman; S Gururangan; M Sap; Y Choi; N A Smith", "journal": "", "ref_id": "b25", "title": "Realtoxicityprompts: Evaluating neural toxic degeneration in language models", "year": "2020" }, { "authors": "M Geva; A Caciularu; G Dar; P Roit; S Sadde; M Shlain; B Tamir; Y Goldberg", "journal": "", "ref_id": "b26", "title": "Lm-debugger: An interactive tool for inspection and intervention in transformer-based language models", "year": "2022" }, { "authors": "D Hendrycks; K Gimpel", "journal": "", "ref_id": "b27", "title": "Gaussian error linear units (gelus)", "year": "2016" }, { "authors": "T Henighan; J Kaplan; M Katz; M Chen; C Hesse; J Jackson; H Jun; T B Brown; P Dhariwal; S Gray", "journal": "", "ref_id": "b28", "title": "Scaling laws for autoregressive generative modeling", "year": "2020" }, { "authors": "D Hernandez; J Kaplan; T Henighan; S Mccandlish", "journal": "", "ref_id": "b29", "title": "Scaling laws for transfer", "year": "2021" }, { "authors": "J Hoffmann; S Borgeaud; A Mensch; E Buchatskaya; T Cai; E Rutherford; D D L Casas; L A Hendricks; J Welbl; A Clark", "journal": "", "ref_id": "b30", "title": "Training compute-optimal large language models", "year": "2022" }, { "authors": "O Honovich; U Shaham; S R Bowman; O Levy", "journal": "", "ref_id": "b31", "title": "Instruction induction: From few examples to natural language task descriptions", "year": "2022" }, { "authors": "A Hosseini; A Vani; D Bahdanau; A Sordoni; A Courville", "journal": "", "ref_id": "b32", "title": "On the compositional generalization gap of in-context learning", "year": "2022" }, { "authors": "P.-S Huang; H Zhang; R Jiang; R Stanforth; J Welbl; J Rae; V Maini; D Yogatama; P Kohli", "journal": "", "ref_id": "b33", "title": "Reducing sentiment bias in language models via counterfactual evaluation", "year": "2019" }, { "authors": "D Hupkes; A Singh; K Korrel; G Kruszewski; E Bruni", "journal": "", "ref_id": "b34", "title": "Learning compositionally through 
attentive guidance", "year": "2018" }, { "authors": "D Hupkes; V Dankers; M Mul; E Bruni", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b35", "title": "Compositionality decomposed: How do neural networks generalise", "year": "2020" }, { "authors": "L Jiang; J D Hwang; C Bhagavatula; R Le Bras; J Liang; J Dodge; K Sakaguchi; M Forbes; J Borchardt; S Gabriel", "journal": "", "ref_id": "b36", "title": "Can machines learn morality? the delphi experiment", "year": "2021" }, { "authors": "A L Jones", "journal": "", "ref_id": "b37", "title": "Scaling scaling laws with board games", "year": "2021" }, { "authors": "T Kojima; S S Gu; M Reid; Y Matsuo; Y Iwasawa", "journal": "", "ref_id": "b38", "title": "Large language models are zero-shot reasoners", "year": "2022" }, { "authors": "B Lake; M Baroni", "journal": "PMLR", "ref_id": "b39", "title": "Generalization without systematicity: On the compositional skills of sequence-tosequence recurrent networks", "year": "2018" }, { "authors": "T Lee; M Yasunaga; C Meng; Y Mai; J S Park; A Gupta; Y Zhang; D Narayanan; H B Teufel; M Bellagente", "journal": "", "ref_id": "b40", "title": "Holistic evaluation of text-to-image models", "year": "2023" }, { "authors": "M A Lepori; T Serre; E Pavlick", "journal": "", "ref_id": "b41", "title": "Break it down: Evidence for structural compositionality in neural networks", "year": "2023" }, { "authors": "M Lewis; Q Yu; J Merullo; E Pavlick", "journal": "", "ref_id": "b42", "title": "Does clip bind concepts? probing compositionality in large image models", "year": "2022" }, { "authors": "K Li; A K Hopkins; D Bau; F Viégas; H Pfister; M Wattenberg", "journal": "", "ref_id": "b43", "title": "Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task", "year": "2023" }, { "authors": "Y Li; J Yosinski; J Clune; H Lipson; J Hopcroft", "journal": "", "ref_id": "b44", "title": "Convergent learning: Do different neural networks learn the same representations", "year": "2015" }, { "authors": "Y Li; M E Ildiz; D Papailiopoulos; S Oymak", "journal": "", "ref_id": "b45", "title": "Transformers as algorithms: Generalization and implicit model selection in in-context learning", "year": "2023" }, { "authors": "Y Li; K Sreenivasan; A Giannou; D Papailiopoulos; S Oymak", "journal": "", "ref_id": "b46", "title": "Dissecting chain-of-thought: A study on compositional in-context learning of mlps", "year": "2023" }, { "authors": "P Liang; R Bommasani; T Lee; D Tsipras; D Soylu; M Yasunaga; Y Zhang; D Narayanan; Y Wu; A Kumar", "journal": "", "ref_id": "b47", "title": "Holistic evaluation of language models", "year": "2022" }, { "authors": "S Lin; J Hilton; O Evans; Truthfulqa", "journal": "", "ref_id": "b48", "title": "Measuring how models mimic human falsehoods", "year": "2021" }, { "authors": "A Liška; G Kruszewski; M Baroni", "journal": "", "ref_id": "b49", "title": "Memorize or generalize? 
searching for a compositional rnn in a haystack", "year": "2018" }, { "authors": "B Liu; J T Ash; S Goel; A Krishnamurthy; C Zhang", "journal": "", "ref_id": "b50", "title": "Transformers learn shortcuts to automata", "year": "2022" }, { "authors": "H Liu; C Li; Q Wu; Y J Lee", "journal": "", "ref_id": "b51", "title": "Visual instruction tuning", "year": "2023" }, { "authors": "E S Lubana; E J Bigelow; R P Dick; D Krueger; H Tanaka", "journal": "PMLR", "ref_id": "b52", "title": "Mechanistic mode connectivity", "year": "2023" }, { "authors": "K Mcguffie; A Newhouse", "journal": "", "ref_id": "b53", "title": "The radicalization risks of gpt-3 and advanced neural language models", "year": "2020" }, { "authors": "W Merrill; A Sabharwal", "journal": "", "ref_id": "b54", "title": "The expresssive power of transformers with chain of thought", "year": "2023" }, { "authors": "A N Myers; H S Wilf", "journal": "SIAM review", "ref_id": "b55", "title": "Some new aspects of the coupon collector's problem", "year": "2006" }, { "authors": "N Nanda; L Chan; T Liberum; J Smith; J Steinhardt", "journal": "", "ref_id": "b56", "title": "Progress measures for grokking via mechanistic interpretability", "year": "2023" }, { "authors": "N Nanda; A Lee; M Wattenberg", "journal": "", "ref_id": "b57", "title": "Emergent linear representations in world models of self-supervised sequence models", "year": "2023" }, { "authors": "E Nelson; N Neel; O Catherine; H Tom; J Nicholas; M Ben; A Amanda; B Yuntao; C Anna; C Tom", "journal": "", "ref_id": "b58", "title": "A mathematical framework for transformer circuits", "year": "2021" }, { "authors": "M Nye; A J Andreassen; G Gur-Ari; H Michalewski; J Austin; D Bieber; D Dohan; A Lewkowycz; M Bosma; D Luan", "journal": "", "ref_id": "b59", "title": "Show your work: Scratchpads for intermediate computation with language models", "year": "2021" }, { "authors": "M Okawa; E S Lubana; R P Dick; H Tanaka", "journal": "", "ref_id": "b60", "title": "Compositional abilities emerge multiplicatively: Exploring diffusion models on a synthetic task", "year": "2023" }, { "authors": "C Olah; N Cammarata; L Schubert; G Goh; M Petrov; S Carter", "journal": "Distill", "ref_id": "b61", "title": "Zoom in: An introduction to circuits", "year": "2020" }, { "authors": "C Olsson; N Elhage; N Nanda; N Joseph; N Dassarma; T Henighan; B Mann; A Askell; Y Bai; A Chen; T Conerly; D Drain; D Ganguli; Z Hatfield-Dodds; D Hernandez; S Johnston; A Jones; J Kernion; L Lovitt; K Ndousse; D Amodei; T Brown; J Clark; J Kaplan; S Mccandlish; C Olah", "journal": "", "ref_id": "b62", "title": "In-context learning and induction heads", "year": "2022" }, { "authors": "S Ontanón; J Ainslie; V Cvicek; Z Fisher", "journal": "", "ref_id": "b63", "title": "Making transformers solve compositional tasks", "year": "2021" }, { "authors": "A Parrish; A Chen; N Nangia; V Padmakumar; J Phang; J Thompson; P M Htut; S R Bowman; Bbq", "journal": "", "ref_id": "b64", "title": "A hand-built bias benchmark for question answering", "year": "2021" }, { "authors": "O Press; L Wolf", "journal": "", "ref_id": "b65", "title": "Using the output embedding to improve language models", "year": "2016" }, { "authors": "Y Qin; S Liang; Y Ye; K Zhu; L Yan; Y Lu; Y Lin; X Cong; X Tang; B Qian", "journal": "", "ref_id": "b66", "title": "Toolllm: Facilitating large language models to master 16000+ real-world apis", "year": "2023" }, { "authors": "A Radford; K Narasimhan; T Salimans; I Sutskever", "journal": "", "ref_id": "b67", "title": "Improving 
language understanding by generative pre-training", "year": "2018" }, { "authors": "A Radford; J Wu; R Child; D Luan; D Amodei; I Sutskever", "journal": "OpenAI blog", "ref_id": "b68", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "J W Rae; S Borgeaud; T Cai; K Millican; J Hoffmann; F Song; J Aslanides; S Henderson; R Ring; S Young", "journal": "", "ref_id": "b69", "title": "Scaling language models: Methods, analysis & insights from training gopher", "year": "2021" }, { "authors": "Y Razeghi; I V Logan; R L Gardner; M Singh; S ", "journal": "", "ref_id": "b70", "title": "Impact of pretraining term frequencies on few-shot reasoning", "year": "2022" }, { "authors": "V Sanh; A Webson; C Raffel; S H Bach; L Sutawika; Z Alyafeai; A Chaffin; A Stiegler; T L Scao; A Raja", "journal": "", "ref_id": "b71", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2021" }, { "authors": "A Saparov; H He", "journal": "", "ref_id": "b72", "title": "Language models are greedy reasoners: A systematic formal analysis of chain-ofthought", "year": "2022" }, { "authors": "E Schulz; J Tenenbaum; D K Duvenaud; M Speekenbrink; S J Gershman", "journal": "Advances in neural information processing systems", "ref_id": "b73", "title": "Probing the compositionality of intuitive functions", "year": "2016" }, { "authors": "R Shah; V Varma; R Kumar; M Phuong; V Krakovna; J Uesato; Kenton ; Z ", "journal": "", "ref_id": "b74", "title": "Goal misgeneralization: Why correct specifications aren't enough for correct goals", "year": "2022" }, { "authors": "U Sharma; J Kaplan", "journal": "", "ref_id": "b75", "title": "A neural scaling law from the dimension of the data manifold", "year": "2020" }, { "authors": "E Sheng; K.-W Chang; P Natarajan; N Peng", "journal": "", "ref_id": "b76", "title": "The woman worked as a babysitter: On biases in language generation", "year": "2019" }, { "authors": "T Shevlane; S Farquhar; B Garfinkel; M Phuong; J Whittlestone; J Leung; D Kokotajlo; N Marchal; M Anderljung; N Kolt", "journal": "", "ref_id": "b77", "title": "Model evaluation for extreme risks", "year": "2023" }, { "authors": "A Srivastava; A Rastogi; A Rao; A A M Shoeb; A Abid; A Fisch; A R Brown; A Santoro; A Gupta; A Garriga-Alonso", "journal": "", "ref_id": "b78", "title": "Beyond the imitation game: Quantifying and extrapolating the capabilities of language models", "year": "2022" }, { "authors": "M Suzgun; N Scales; N Schärli; S Gehrmann; Y Tay; H W Chung; A Chowdhery; Q V Le; E H Chi; D Zhou", "journal": "", "ref_id": "b79", "title": "Challenging big-bench tasks and whether chain-of-thought can solve them", "year": "2022" }, { "authors": "A Tamkin; M Brundage; J Clark; D Ganguli", "journal": "", "ref_id": "b80", "title": "Understanding the capabilities, limitations, and societal impact of large language models", "year": "2021" }, { "authors": "Y Tay; J Wei; H W Chung; V Q Tran; D R So; S Shakeri; X Garcia; H S Zheng; J Rao; A Chowdhery", "journal": "", "ref_id": "b81", "title": "Transcending scaling laws with 0.1% extra compute", "year": "2022" }, { "authors": "I Tenney; D Das; E Pavlick", "journal": "", "ref_id": "b82", "title": "Bert rediscovers the classical nlp pipeline", "year": "2019" }, { "authors": "R Thoppilan; D De Freitas; J Hall; N Shazeer; A Kulshreshtha; H.-T Cheng; A Jin; T Bos; L Baker; Y Du", "journal": "", "ref_id": "b83", "title": "Lamda: Language models for dialog applications", "year": "2022" }, { "authors": "H Touvron; T Lavril; 
G Izacard; X Martinet; M.-A Lachaux; T Lacroix; B Rozière; N Goyal; E Hambro; F Azhar", "journal": "", "ref_id": "b84", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; L Kaiser; I Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b85", "title": "Attention is all you need", "year": "2017" }, { "authors": "Von Oswald; J Niklasson; E Randazzo; E Sacramento; J Mordvintsev; A Zhmoginov; A Vladymyrov; M ", "journal": "PMLR", "ref_id": "b86", "title": "Transformers learn in-context by gradient descent", "year": "2023" }, { "authors": "K Wang; A Variengien; A Conmy; B Shlegeris; J Steinhardt", "journal": "", "ref_id": "b87", "title": "Interpretability in the wild: a circuit for indirect object identification in gpt-2 small", "year": "2022" }, { "authors": "J Wei; M Bosma; V Y Zhao; K Guu; A W Yu; B Lester; N Du; A M Dai; Q V Le", "journal": "", "ref_id": "b88", "title": "Finetuned language models are zero-shot learners", "year": "2021" }, { "authors": "J Wei; Y Tay; R Bommasani; C Raffel; B Zoph; S Borgeaud; D Yogatama; M Bosma; D Zhou; D Metzler", "journal": "", "ref_id": "b89", "title": "Emergent abilities of large language models", "year": "2022" }, { "authors": "J Wei; X Wang; D Schuurmans; M Bosma; E Chi; Q Le; D Zhou", "journal": "", "ref_id": "b90", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "L Weidinger; J Mellor; M Rauh; C Griffin; J Uesato; P.-S Huang; M Cheng; M Glaese; B Balle; A Kasirzadeh", "journal": "", "ref_id": "b91", "title": "Ethical and social risks of harm from language models", "year": "2021" }, { "authors": "G Weiss; Y Goldberg; E Yahav", "journal": "PMLR", "ref_id": "b92", "title": "Thinking like transformers", "year": "2021" }, { "authors": "A Xu; E Pathak; E Wallace; S Gururangan; M Sap; D Klein", "journal": "", "ref_id": "b93", "title": "Detoxifying language models risks marginalizing minority voices", "year": "2021" }, { "authors": "J Xu; D Ju; M Li; Y.-L Boureau; J Weston; E Dinan", "journal": "", "ref_id": "b94", "title": "Recipes for safety in open-domain chatbots", "year": "2020" }, { "authors": "W Xu; A K Tang", "journal": "Journal of Applied Probability", "ref_id": "b95", "title": "A generalized coupon collector problem", "year": "2011" }, { "authors": "D Yu; S Kaur; A Gupta; J Brown-Cohen; A Goyal; S Arora", "journal": "", "ref_id": "b96", "title": "Skill-mix: A flexible and expandable family of evaluations for ai models", "year": "2023" }, { "authors": "T Yun; U Bhalla; E Pavlick; C Sun", "journal": "", "ref_id": "b97", "title": "Do vision-language pretrained models learn composable primitive concepts?", "year": "2022" }, { "authors": "H Zhou; A Bradley; E Littwin; N Razin; O Saremi; J Susskind; S Bengio; P Nakkiran", "journal": "", "ref_id": "b98", "title": "What algorithms can transformers learn? a study in length generalization", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 92.16, 564.85, 427.68, 32.89 ], "formula_id": "formula_0", "formula_text": "x = [x F1 , x F2 , x d1 ]. Then, a model M : X L f × X K d → X K d that takes x as input, is expected to produce the output F 2 • F 1 (x d1 ). We use [L] to denote the ordered set (1, 2, . . . , L)." }, { "formula_coordinates": [ 4, 131.73, 695.52, 313.63, 10.27 ], "formula_id": "formula_1", "formula_text": "F i ∈ F, where i ∈ [L], M ([x F1 , x F2 , • • • x F L , x d ]) = F L • • • • • F 2 • F 1 (x d ). I F (5) 1 F (5) 2 F (5) 3 F (5) 4 F (4) 1 F (4) 2 F (4) 3 F (4) 4 F (3) 1 F (3) 2 F (3) 3 F (3) 4 F (2) 1 F (2) 2 F (2) 3 F (2) 4 F (1) 1 F (1) 2 F (1) 3 F (1)" }, { "formula_coordinates": [ 5, 332.77, 105.69, 151.14, 128.48 ], "formula_id": "formula_2", "formula_text": "F (5) 2 F (5) 4 F (5) 2 F (4) 1 F (4) 3 F (3) 3 F (3) 2 F (3) 2 F (3) 1 F (2) 4 F (2) 2 F (2) 3 F (1) 4 F (1) 2 (x) (x) (x) (x) # of compositions 5 2 3 4 # of displacements F (5) 2 F (4) 3 F (2) 2 F (1) 4 I I I I F (5) 4 F (4) 3 F (3) 4 F (2) 2 F (1) 4 F (1)" }, { "formula_coordinates": [ 5, 371.84, 185.56, 57.82, 48.52 ], "formula_id": "formula_3", "formula_text": "F (4) 1 F (4) 1 F (5) 2 F (5) 4 F (3) 1 F (1) 1 (x) (x) (x) (x)" }, { "formula_coordinates": [ 5, 92.16, 487.16, 427.68, 22.27 ], "formula_id": "formula_4", "formula_text": "F = F (l1) • • • • • F (l2) • F (l L ) (." }, { "formula_coordinates": [ 6, 105.18, 97.21, 167.84, 77.3 ], "formula_id": "formula_5", "formula_text": "F (5) 3 F (4) 3 F (3) 2 F (2) 4 F (1) 1 (x) (b)." }, { "formula_coordinates": [ 6, 352.56, 89.12, 10.86, 15.35 ], "formula_id": "formula_6", "formula_text": "F (1" }, { "formula_coordinates": [ 11, 309.51, 94.59, 199.09, 152.77 ], "formula_id": "formula_7", "formula_text": "x f x d f 1 f 2 f 3 f 4 f 5 x f x d f 1 f 2 f 3 f 4 f 5 0." }, { "formula_coordinates": [ 19, 224.19, 482.6, 163.63, 30.2 ], "formula_id": "formula_8", "formula_text": "L(w) = - T -1 t=1 log p w (y = x t+1 | x 1:t ) ." }, { "formula_coordinates": [ 20, 282.64, 239.4, 135.66, 9.65 ], "formula_id": "formula_9", "formula_text": "F 1 • F 1 , F 2 • F 2 and F 1 • F 2 . In" }, { "formula_coordinates": [ 20, 92.16, 333.05, 427.85, 21.61 ], "formula_id": "formula_10", "formula_text": "I • I • F 3 • I • I or I • F 4 • • • • • I." }, { "formula_coordinates": [ 20, 320, 395.13, 142.59, 145.13 ], "formula_id": "formula_11", "formula_text": "x d 3 x d 9 x d 2 x d 8 x d 7 x d 3 x d 8 x d 3 x d 9 x d 3 x d 2 x d 7 x d F p (x d ) F p ∈ ℱ p = = x d F b (x d ) F b ∈ ℱ b = Set of Bijections ℱ b Set of Permutations ℱ p g g g g g g g : X d ↦ X d xd 1 →" }, { "formula_coordinates": [ 20, 112.86, 519.98, 140.4, 9.65 ], "formula_id": "formula_12", "formula_text": "[x F1 , x F2 , x d , F 1 (x d ), F 2 (F 1 (x d ))]" }, { "formula_coordinates": [ 20, 146.27, 579.75, 107, 9.65 ], "formula_id": "formula_13", "formula_text": "[x F1 , x F2 , x d , F 2 (F 1 (x g ))]" }, { "formula_coordinates": [ 28, 326.22, 275.08, 164.63, 9.65 ], "formula_id": "formula_14", "formula_text": "[F 1 (x d ), F 2 • F 1 (x d ), F 3 • F 2 • F 1 (x d )]." }, { "formula_coordinates": [ 28, 207.28, 351.79, 107.94, 9.65 ], "formula_id": "formula_15", "formula_text": "|X d | = d v and |X f | = d f ." 
}, { "formula_coordinates": [ 28, 196.68, 383.64, 218.65, 21.61 ], "formula_id": "formula_16", "formula_text": "Z = x F1 x F2 x F3 x F 1 (x d ) F 2 • F 1 (x d ) p 1 p 2 p 3 p 4 p 5 p 6 ," }, { "formula_coordinates": [ 28, 220.11, 581.34, 171.77, 11.72 ], "formula_id": "formula_17", "formula_text": "Attn Q,K (Z) = (KZ)(M ⊙ Z T QZ),and" }, { "formula_coordinates": [ 28, 91.8, 598.36, 275.87, 31.43 ], "formula_id": "formula_18", "formula_text": "MLP W1,W2 (Z) = W 2 ReLU(W 1 Z), where Q, K ∈ R d×d , W 1 ∈ R d×(d f dv) and W 2 ∈ R (d f dv)×d ." }, { "formula_coordinates": [ 28, 249.43, 650.72, 108.72, 52.45 ], "formula_id": "formula_19", "formula_text": "M =      1 1 1 • • • 1 0 1 1 • • • 1 . . . . . . . . . . . . . . . 0 0 0 • • • 1     " }, { "formula_coordinates": [ 29, 183.46, 129.18, 245.09, 9.65 ], "formula_id": "formula_20", "formula_text": "Tr Q,K,W1,W2 (Z) = MLP (Attn(Z) + Z) + Attn(Z) + Z) ." }, { "formula_coordinates": [ 29, 239.6, 213.84, 132.8, 9.65 ], "formula_id": "formula_21", "formula_text": "P (Y |Z) = Softmax(W e Tr(Z))." }, { "formula_coordinates": [ 29, 334.55, 247.71, 77.1, 9.65 ], "formula_id": "formula_22", "formula_text": "[x F1 , x F2 , x F3 , x d ]." }, { "formula_coordinates": [ 29, 102.53, 279.47, 385.3, 217.8 ], "formula_id": "formula_23", "formula_text": "P T P = I 3×3 I 3×3 I 3×3 I 3×3 , Q = 0 d×d 0 d×dp 0 dp×d I dp×dp , K =   0 dv×dv 0 d f ×dv 0 d×dp 0 d f ×dv I d f ×d f 0 d×dp 0 dv×d 0 d f ×d 0 dp×dp   , W 1 =                  1 T x d 1 1 T x d 1 1 T x d 1 • • • 1 T x d 1 1 T x d 2 1 T x d 2 1 T x d 2 • • • 1 T x d 2 . . . . . . . . . . . . 1 T x dv 1 T x dv 1 T x dv • • • 1 T x dv 0 T dv -1 T dv -1 T dv • • • -1 T dv -1 T dv 0 T dv -1 T dv • • • -1 T dv . . . . . . . . . . . . -1 T dv -1 T dv -1 T dv • • • 0 T dv 0 dp×dv 0 dp×dv 0 dp×dv • • • 0 dp×dv                  T d f ×dvcolumns" }, { "formula_coordinates": [ 29, 356.35, 321.62, 133.05, 182.09 ], "formula_id": "formula_24", "formula_text": "W 2 =                             F i1 (x d1 ) T -x T d1 -x T Fi 1 F i1 (x d2 ) T -x T d2 -x T Fi 1 . . . F i1 (x dv ) T -x T dv -x T Fi 1 F i2 (x d1 ) T -x T d1 -x T Fi 2 F i2 (x d2 ) T -x T d2 -x T Fi 2 . . . F i2 (x dv ) T -x T dv -x T Fi 2" }, { "formula_coordinates": [ 29, 390.15, 319.32, 119.32, 199.29 ], "formula_id": "formula_25", "formula_text": "F i T (x d1 ) T -x T d1 -x T Fi T F i T (x d2 ) T -x T d2 -x T Fi T . . . F T (x dv ) -x dv -x Fi T                             T ." }, { "formula_coordinates": [ 30, 423.27, 254.89, 77.55, 9.65 ], "formula_id": "formula_26", "formula_text": "[F 3 • F 2 • F 1 (x d )]." }, { "formula_coordinates": [ 30, 91.8, 305.46, 277.82, 64.37 ], "formula_id": "formula_27", "formula_text": "    , where Z ∈ R d×4 ." }, { "formula_coordinates": [ 30, 226.39, 462.46, 159.22, 9.65 ], "formula_id": "formula_28", "formula_text": "Tr(Z) = Block 3 (Block 2 (Block 1 (Z)))." 
}, { "formula_coordinates": [ 30, 112.7, 555.14, 386.61, 91.25 ], "formula_id": "formula_29", "formula_text": "Q 1 =     0 d×d 0 d× dp 0 d× dp 0 d× dp 0 dp×d I dp 0 dp× dp 0 dp× dp 0 dp×d 0 dp× dp 0 dp× dp 0 dp× dp 0 dp×d 0 dp× dp 0 dp× dp 0 dp× dp     , Q 2 =     0 d×d 0 d× dp 0 d× dp 0 d× dp 0 dp×d 0 dp× dp 0 dp× dp 0 dp× dp 0 dp×d 0 dp× dp I dp 0 dp× dp 0 dp×d 0 dp× dp 0 dp× dp 0 dp× dp     , Q 3 =    " }, { "formula_coordinates": [ 30, 232.25, 610.13, 234.3, 75.7 ], "formula_id": "formula_30", "formula_text": "I dp ,     , K 1 =   0 dv×dv 0 d f ×dv 0 d×dp 0 d f ×dv I d f 0 d×dp 0 dv×d 0 d f ×d 0 dp×dp   , K 2 = K 1 2 , K 3 = K 1 3 , P T 1 P 1 =     1 0 0 1 0 1 0 0 0 0 1 0 1 0 0 1     , P T 2 P 2 =     1 0 0 0 0 1 0 1 0 0 1 0 0 1 0 1     , P T 3 P 3 =     1 0 0 0 0 1 0 0 0 0 1 1 0 0 1 1     , W 11 =                  1 T x d 1 1 T x d 1 1 T x d 1 • • • 1 T x d 1 1 T x d 2 1 T x d 2 1 T x d 2 • • • 1 T x d 2 . . . . . . . . . . . . 1 T x dv 1 T x dv 1 T x dv • • • 1 T x dv 0 1×dv -1 1×dv -1 1×dv • • • -1 1×dv -1 1×dv 0 1×dv -1 1×dv • • • -1 1×dv . . . . . . . . . . . . -1 1×dv -1 1×dv -1 1×dv • • • 0 1×dv 0 dp×dv 0 dp×dv 0 dp×dv • • • 0 dp×dv                  T d f ×dvcolumns , W 12 =                             F i1 (x d1 ) T -x T d1 -x T Fi 1 F i1 (x d2 ) T -x T d2 -x T Fi 1 . . . F i1 (x dv ) T -x T dv -x T Fi 1 F i2 (x d1 ) T -x T d1 -x T Fi 2 F i2 (x d2 ) T -x T d2 -x T Fi 2 . . . F i2 (x dv ) T -x T dv -x T Fi 2" }, { "formula_coordinates": [ 31, 222.26, 160.9, 285.46, 213.52 ], "formula_id": "formula_31", "formula_text": "F i T (x d1 ) T -x T d1 -x T Fi T F i T (x d2 ) T -x T d2 -x T Fi T . . . F T (x dv ) -x dv -x Fi T .                             T W 21 = W 22 = W 23 ,and" }, { "formula_coordinates": [ 31, 344.17, 364.78, 82.88, 9.65 ], "formula_id": "formula_32", "formula_text": "W 31 = W 32 = W 33 ." }, { "formula_coordinates": [ 32, 92.16, 253.95, 102.3, 14.07 ], "formula_id": "formula_33", "formula_text": "F (l) i ∈ F (l) for i ∈ [1, L]," }, { "formula_coordinates": [ 32, 255, 394.87, 102.01, 22.31 ], "formula_id": "formula_34", "formula_text": "p(L, f ) = F ! 
F L F -1 L -1 ," }, { "formula_coordinates": [ 32, 225.14, 535.57, 161.72, 24.91 ], "formula_id": "formula_35", "formula_text": "P (N ≥ F H F + c log F ) ≤ π 2 6c 2 log 2 F ," }, { "formula_coordinates": [ 33, 123.55, 205.46, 359.65, 148.29 ], "formula_id": "formula_36", "formula_text": "Z T QZ =         x T F1 p T 1 x T F2 p T 2 x T F3 p T 3 x T d p T 4 F 1 (x d ) T p 5 F 2 • F 1 (x d ) T p T 6         0 d×d 0 d×dp 0 dp×d I dp×dp x F1 x F2 x F3 • • • F 2 • F 1 (x d ) p 1 p 2 p 3 • • • p 6 =         0 p T 1 0 p T 2 0 p T 3 0 p T 4 0 p T 5 0 p T 6         x F1 x F2 x F3 • • • F 2 • F 1 (x d ) p 1 p 2 p 3 • • • p 6 = P T P" }, { "formula_coordinates": [ 33, 159.55, 409.27, 287.14, 21.61 ], "formula_id": "formula_37", "formula_text": "M ⊙ (Z T QZ) = M ⊙ (P T P ) = M ⊙ I 3×3 I 3×3 I 3×3 I 3×3 = I 3×3 I 3×3 0 3×3 I 3×3" }, { "formula_coordinates": [ 33, 91.8, 461.19, 402.08, 177.39 ], "formula_id": "formula_38", "formula_text": "Attn(Z) = (KZ)(M ⊙ Z T QZ) = (KZ)(M ⊙ P P T ) =   0 dv×dv 0 d f ×dv 0 d×dp 0 d f ×dv I d f ×d f 0 d×dp 0 dv×d 0 d f ×d 0 dp×dp   x F1 x F2 x F3 • • • F 2 • F 1 (x d ) p 1 p 2 p 3 • • • p 6 I 3×3 I 3×3 0 3×3 I 3×3 = x F1 x F2 x F3 0 d 0 d 0 d 0 dp 0 dp 0 dp 0 dp 0 dp 0 dp I 3×3 I 3×3 0 3×3 I 3×3 = x F1 x F2 x F3 x F1 x F2 x F3 0 dp 0 dp 0 dp 0 dp 0 dp 0 dp which when added to Z yields Attn(Z) + Z = 2x F1 2x F2 2x F3 x d + x F1 F 1 (x d ) + x F2 F 2 • F 1 (x d ) + x F3 p 1 p 2 p 3 p 4 p 5 p 6 ," }, { "formula_coordinates": [ 34, 92.16, 126.69, 439.53, 205.78 ], "formula_id": "formula_39", "formula_text": "(Attn(Z) + Z) T W T 1 =         2x T F1 p T 1 2x T F2 p T 2 2x T F3 p T 3 x T d + x T F1 p T 4 F 1 (x d ) T + x T F2 p T 5 F 2 (F 1 (x d )) T + x T F3 p T 6                          1 T x d 1 1 T x d 1 1 T x d 1 • • • 1 T x d 1 1 T x d 2 1 T x d 2 1 T x d 2 • • • 1 T x d 2 . . . . . . . . . . . . 1 T x dv 1 T x dv 1 T x dv • • • 1 T x dv 0 T dv -1 T dv -1 T dv • • • -1 T dv -1 T dv 0 T dv -1 T dv • • • -1 T dv . . . . . . . . . . . . -1 T dv -1 T dv -1 T dv • • • 0 T dv 0 dp×dv 0 dp×dv 0 dp×dv • • • 0 dp×dv                  =         -2 T dv • • • • • • 0 T dv • • • • • • -2 T dv -2 T dv • • • 0 T dv • • • • • • • • • -2 T dv -2 T dv • • • • • • • • • 0 T dv • • • -2 T dv -1 T dv + 1 T x d • • • • • • 1 T x d • • • • • • -1 T dv + 1 T x d -1 T dv + 1 T F1(x d ) • • • 1 T F1(x d ) • • • • • • • • • -1 T dv + 1 T F1(x d ) -1 T dv + 1 T F2•F1(x d ) • • • • • • 1 T F2•F1(x d ) • • • -1 T dv + 1 T F2•F1(x d )        " }, { "formula_coordinates": [ 34, 464.93, 355.21, 56.65, 11.86 ], "formula_id": "formula_40", "formula_text": "F i = F ij i , i.e." 
}, { "formula_coordinates": [ 34, 223.6, 411.97, 197.15, 94.86 ], "formula_id": "formula_41", "formula_text": "T W T 1 ) T ) =            0 dv 0 dv 0 dv 0 dv 0 dv 0 dv 0 dv 0 dv 0 dv • • • • • • • • • 0 dv 0 dv 0 dv • • • 1 F1(x d ) • • • 0 dv 0 dv 0 dv 1 x d • • • • • • 0 dv 0 dv 0 dv • • • • • • 1 F2•F1(x d )" }, { "formula_coordinates": [ 34, 223.6, 428.72, 273.18, 106.14 ], "formula_id": "formula_42", "formula_text": "0 dv 0 dv 0 dv 0 dv 0 dv 0 dv            = 0 dvd f 0 dvd f 0 dvd f 1 (x d ,F1) 1 (F1(x d ),F2) 1 (F2•F1(x d ),F3)" }, { "formula_coordinates": [ 34, 95.86, 570.85, 415.63, 97.02 ], "formula_id": "formula_43", "formula_text": "W 2 ReLU(W 1 (Attn(Z) + Z)) = W 2 0 dvd f 0 dvd f 0 dvd f 1 (x d ,F1) 1 (F1(x d ),F2) 1 (F2•F1(x d ),F3) =          0 T d 0 T dp 0 T d 0 T dp 0 T d 0 T dp F 1 (x d ) T -x d -x F1 0 T dp F 2 • F 1 (x d ) -x F1(x d ) -x F2 0 T dp F 3 • F 2 • F 1 (x d ) -x F2•F1(x d ) -x F3" }, { "formula_coordinates": [ 35, 152.35, 131.61, 334.04, 102 ], "formula_id": "formula_44", "formula_text": "F 2 • F 1 (x d ) T -x T F1(x d ) -x T F2 0 T dp F 3 • F 2 • F 1 (x d ) T -x T F2•F1(x d ) -x T F3 0 T dp          T +         2x T F1 p T 1 2x T F2 p T 2 2x T F3 p T 3 x T d + x T F1 p T 4 x T F1(x d ) + x T F2 p T 5 x T F2•F1(x d ) + x T F3 p T 6         T = 2x F1 2x F2 2x F3 F 1 (x d ) F 2 • F 1 (x d ) F 3 • F 2 • F 1 (x d )" }, { "formula_coordinates": [ 35, 177.05, 318.45, 257.9, 9.65 ], "formula_id": "formula_45", "formula_text": "2x F1 2x F2 2x F3 F 1 (x d ) F 2 • F 1 (x d ) F 3 • F 2 • F 1 (x d )" }, { "formula_coordinates": [ 35, 92.16, 352.42, 427.68, 21.61 ], "formula_id": "formula_46", "formula_text": "[F 1 (x d ), F 2 • F 1 (x d ), F 3 • F 2 • F 1 (x d )]" }, { "formula_coordinates": [ 36, 177.38, 115.16, 256.53, 107.88 ], "formula_id": "formula_47", "formula_text": "Attn 1 (Z) + Z = (K 1 Z)(M ⊙ Z T Q 1 Z) + Z = (K 1 Z)(M ⊙ P T 1 P 1 ) + Z = x F1 x F2 x F3 0 0 dp 0 dp 0 dp 0 dp     1 0 0 1 0 1 0 0 0 0 1 0 0 0 0 1     + Z = 2x F1 2x F2 2x F3 x d + x F1 p 1 p 2 p 3 p 4" }, { "formula_coordinates": [ 36, 216.05, 289.59, 173.48, 21.61 ], "formula_id": "formula_48", "formula_text": "= 2x F1 2x F2 2x F3 F 1 (x d ) p 1 p 2 p 3 p 4 = Z B1 ." }, { "formula_coordinates": [ 36, 95.87, 511.5, 297.68, 35.8 ], "formula_id": "formula_49", "formula_text": "K 2 Z B1 = 1 2   0 dv×dv 0 d f ×dv 0 d×dp 0 d f ×dv I d f 0 d×dp 0 dv×d 0 d f ×d 0 dp×dp   2x F1 2x F2 2x F3 F 1 (x d )" }, { "formula_coordinates": [ 36, 167.25, 590.18, 236.28, 69.16 ], "formula_id": "formula_50", "formula_text": "Attn 2 (Z B1 ) + Z B1 = (K 2 Z B1 )(M ⊙ Z T B1 Q 2 Z B1 ) + Z B1 = (K 2 Z B1 )(M ⊙ P T 2 P 2 ) + Z B1 = x F1 x F2 x F3 0 0 0 0 0    " }, { "formula_coordinates": [ 36, 249.31, 623.07, 194.44, 74.98 ], "formula_id": "formula_51", "formula_text": "    + Z B1 = 3x F1 3x F2 3x F3 F 1 (x d ) + x F2 p 1 p 2 p 3 p 4 ." }, { "formula_coordinates": [ 37, 95.48, 222.44, 421.05, 22.31 ], "formula_id": "formula_52", "formula_text": "K 3 Z B2 = 1 3 x F1 x F2 x F3 0 d 0 dp 0 dp 0 dp 0 dp 3x F1 3x F2 3x F3 F 2 • F 1 (x d ) p 1 p 2 p 3 p 4 = x F1 x F2 x F3 0 d 0 dp 0 dp 0 dp 0 dp ." 
}, { "formula_coordinates": [ 37, 167.25, 276.31, 236.28, 69.16 ], "formula_id": "formula_53", "formula_text": "Attn 3 (Z B3 ) + Z B3 = (K 3 Z B2 )(M ⊙ Z T B2 Q 2 Z B2 ) + Z B2 = (K 3 Z B1 )(M ⊙ P T 3 P 3 ) + Z B2 = x F1 x F2 x F3 0 0 0 0 0    " }, { "formula_coordinates": [ 37, 249.31, 309.21, 194.45, 74.98 ], "formula_id": "formula_54", "formula_text": "    + Z B2 = 4x F1 4x F2 4x F3 F 2 • F 1 (x d ) + x F3 p 1 p 2 p 3 p 4 ." }, { "formula_coordinates": [ 37, 187.31, 433.41, 265.67, 37.55 ], "formula_id": "formula_55", "formula_text": "3 (Z B2 ) + Z B2 ) = 4x F1 4x F2 4x F3 F 3 • F 2 • F 1 (x d ) p 1 p 2 p 3 p 4 ." } ]
2023-11-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b37", "b42", "b17", "b42", "b42", "b11", "b5", "b37", "b57", "b12", "b45", "b7", "b27", "b58", "b13", "b37", "b42", "b0", "b1", "b9", "b32", "b32", "b9", "b38", "b20", "b29" ], "table_ref": [], "text": "It becomes increasingly important to transmit 3D content over band-limited channels. Consequently, there is a growing interest in algorithms for compressing related data modalities [38,43]. Compared to image and video compression, efforts to reduce the bandwidth footprint of 3D data modalities have gained less attention. Moreover, the nature of 3D data renders it a challenging problem. Typically, image or video data live on a well-defined regular grid. However, the structure, or geometry, of common 3D data representations such as Point Clouds (PCs) and meshes only exists on a lower-dimensional manifold embedded in the 3D world. Moreover, this is often accompanied by attributes that are only defined on the geometry itself.\nNotably, the MPEG group has recently renewed its call for standards for 3D compression identifying point clouds as the central modality [18,43]. To this end, geometry and attribute compression are identified as the central constituents. Geometry-based Point Cloud Compression (GPCC) and Video-based Point Cloud Compressions (VPCCs) have emerged as standards for compressing 3D PCs including attributes [43]. GPCC is based on octrees and Region-Adaptive Hierarchical Transforms (RAHT) [12] and VPCC maps geometry and attributes onto a 2D regular grid and applies state-of-the-art video compression algorithms. Subsequently, there has been a growing effort in developing methods for compressing either the geometry, attributes or both simultaneously [6,38].\nNeural Fields (NFs) have recently been popularized for a variety of data modalities including images, videos and 3D [58]. To this end, a signal is viewed as a scalar-or vectorvalued field on a coordinate space and parameterized by a neural network, typically a Multilayer Perceptron (MLP). Interestingly, there is a growing trend of applying NFs to compress various data modalities, e.g. images [13,46], videos [8,28,59] or medical data [14]. Hereby, the common modus operandi is to overfit an MLP to represent a signal, e.g. image/video, and, subsequently, compress its parameters using a combination of quantization and entropy coding. Our work proposes the first NF-based 3D compression algorithm. In contrast to other geometry compression methods [38,43], NFs have been demonstrated to represent 3D data regardless of whether it is available in form of PCs [1,2] or meshes [10,33]. NFs do not explicitly encode 3D data, but rather implicitly in form of Signed Distance Fields (SDFs) [33], Unsigned Distance Fields (UDFs) [10] or vector fields [39]. Therefore, one typically applies marching cubes [21,30] on top of distances and signs/normals obtained from the NF to extract the geometry.\nWe show that NF-based compression using SDFs leads to state-of-the-art geometry compression. As SDFs assume watertight shapes, a general compression algorithm requires UDFs. However, vanilla UDFs lead to inferior compression performance since the non-differentiable target requires increased model capacity. To mitigate this, we apply two impactful modifications to UDFs. Specifically, we apply a suitable activation function to the output of UDFs. Further, we regularize UDFs trained on PCs. 
Therefore, we tune the distribution from which training points are sampled and apply an ℓ1-penalty on the parameters. Lastly, we demonstrate that NFs are not only a promising approach for compressing the geometry of 3D data but also its attributes by viewing attributes as a vector-valued field on the geometry." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Modelling 3D Data Using Neural Fields", "publication_ref": [ "b8", "b31", "b32", "b29", "b9", "b20", "b29", "b48", "b44" ], "table_ref": [], "text": "NFs were initially introduced to 3D shape modelling in the form of occupancy fields [9,32] and SDFs [33]. 3D meshes are extracted from the resulting SDFs using marching cubes [30]. Further, Chibane et al. [10] proposed Neural Unsigned Distance fields to represent 3D shapes using UDFs and, thus, allow for modeling non-watertight shapes. They obtain shapes as PCs by projecting uniformly sampled points along the negative gradient direction of the resulting UDFs. Later, Guillard et al. introduced MeshUDF [21] building on Marching Cubes (MC) [30] which denotes a differentiable algorithm for converting UDFs into meshes. We instantiate our method with both SDFs and UDFs leading to a more specialized and a, respectively, more general version. More recently, Rella et al. proposed to parameterize the gradient field of UDFs with a neural network. Regarding the architecture of the MLP used for parameterizing NFs, Tancik et al. [49] solidify that positional encodings improve the ability of coordinate-based neural networks to learn high frequency content. Further, Sitzmann et al. [45] demonstrate that sinusoidal activation functions have a similar effect. In this work we utilize sinusoidal activation functions as well as positional encodings." }, { "figure_ref": [], "heading": "Compression Using Neural Fields", "publication_ref": [ "b12", "b45", "b44", "b48", "b49", "b41", "b13", "b10", "b16", "b26", "b7", "b24", "b27", "b30", "b39", "b58", "b4", "b22", "b28", "b47" ], "table_ref": [], "text": "Recently, there has been an increasing interest in compressing data using NFs due to promising results and their general applicability to any coordinate-based data. Dupont et al. [13] were the first to propose NFs for image compression. Subsequently, there was a plethora of work extending this seminal work. Strümpler et al. [46] improved image compression performance and encoding runtime by combining SIREN [45] with positional encodings [49] and applying meta-learned initializations [50]. Schwarz et al. [42] and Dupont et al. [14] further expand the idea of metalearned initializations for NF-based compression. Furthermore, various more recent works have improved upon NFbased image compression performance [11,17,27]. Besides images, NF-based compression has been extensively applied to videos [8,25,28,31,40,59]. Despite the recent interest in NF-based compression, its application to compressing 3D data modalities remains scarce. Notably, there has been an increasing effort to compress 3D scenes by compressing the parameters of Neural Radiance Fields (NeRF) [5,23,29,48]. However, this work directly compresses 3D data modalities (PCs/meshes) while NeRFcompression starts from 2D image observations." 
}, { "figure_ref": [], "heading": "3D Data Compression", "publication_ref": [ "b37", "b42", "b15", "b40", "b50", "b18", "b19", "b34", "b36", "b56", "b55", "b54", "b51", "b46", "b47", "b42", "b11", "b35", "b51", "b23", "b43", "b54", "b33" ], "table_ref": [], "text": "Typically, 3D compression is divided into geometry and attribute compression. We refer to Quach et al. [38] for a comprehensive survey.\nGeometry Compression. MPEG has identified PCsincluding attributes -as a key modality for transmitting 3D information [43]. Subsequently, it introduced GPCC and VPCC for compressing the geometry and attributes captured in 3D PCs. GPCC is a 3D native algorithm which represents PCs using an efficient octree representation for geometry compression. On the other hand, VPCC maps the geometry and attributes onto a 2D grid and, then, takes advantage of video compression algorithms. Moreover, Draco [16] allows compressing PCs and meshes. For mesh compression it relies on the edge-breaker algorithm [41]. Tang et al. [51] take a different approach by extracting the SDF from a 3D geometry and then compressing it.\nEarly works on learned geometry compression use a Rate-Distortion Autoencoder (RDAE) based on 3D convolutions [19,20,35,37]. Wang et al. [57] also apply 3D convolutions to PC compression and later introduce an improved multi-scale version based on sparse convolutions, i.e. Point Cloud Geometry Compression v2 (PCGCv2) [56]. PCGCv2 improves upon PC geometry compression using 3D convolutions in prior works. Thus, we use it as a learned baseline in Sec. 4.1. SparsePCGC [55] further improves upon PCGCv21 . Tang et al. [52] compress watertight shapes including color attributes using Truncated Signed Distance Fields (TSDFs). Hereby, the signs of the TSDF are compressed losslessly using a learned conditional entropy model, the UDF is encoded/decoded using 3D convolutions and texture maps are compressed using a custom trackingfree UV parameterization. In contrast to all prior work on learned 3D compression, we overfit a single MLP to parameterize a single signal. While this increases the encoding time, it also drastically renders our method less vulnerable to domain shifts. Moreover, in contrast to Tang et al. which focuses on SDFs, we also utilize UDFs to compress nonwatertight geometries. NGLOD [47] proposed to represent 3D data using feature grids with variable resolution in a parameter efficient manner. VQAD [48] further substantially improves upon NGLOD by quantizing these feature grids.\nAttribute Compression. GPCC [43] compresses attributes using Region-Adaptive Hierarchical Transforms (RAHT) [12], while VPCC maps attributes onto 2D images and applies video compression algorithms. Further, Quach et al. [36] propose a folding-based Neural Networks (NNs) for attribute compression. Tang et al. [52] introduce a blockbased UV parameterization and, then, applies video compression similar to VPCC. Isik et al. [24] demonstrate that vector-valued NFs are a promising tool for attribute compression. In contrast, this work tackles both geometry and attribute compression. Sheng et al. [44] and Wang et al. [55] compress point cloud attributes using a RDAE based on PointNet++ [34] and, respectively, 3D convolutions." }, { "figure_ref": [ "fig_0" ], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Generally, we fit a single NF to a 3D shape comprised of a PC/mesh and, optionally, another NF to the attributes (e.g. color) on the geometry. Then, we compress the parameters of the NF. 
Specifically, Sec. 3.1 describes how we model the geometry of 3D data -for PCs as well as meshes -using truncated Neural Distance Fields (NDFs). Further, we explain how meshes and, ultimately, PCs can be recovered from Distance Fields (DFs). Sec. 3.2 elaborates on additionally compressing attributes (e.g. color) of 3D data. Lastly, Sec. 3.3 describes our approach to compressing the parameters of NFs representing the underlying 3D data. Fig. 1 outlines our geometry and attribute compression pipeline." }, { "figure_ref": [], "heading": "Representing Geometries with Truncated Neural Distance Fields", "publication_ref": [ "b9", "b32", "b44", "b44", "b48", "b45", "b9", "b9", "b44", "b7", "b12", "b13", "b27", "b45", "b52", "b45", "b29", "b20", "b9" ], "table_ref": [], "text": "We represent 3D geometries implicitly using DFs. A DF is a scalar field DF : R^3 → R that, for a given 3D geometry, assigns every point x ∈ R^3 the distance d_S(x) ∈ R_≥0 to the closest point x_S ∈ R^3 on the surface S of the geometry. We refer to such scalar fields as UDFs and omit the dependence on x, d_S := d_S(x). If the underlying geometry is watertight, we can further define the signed distance d_S ∈ R, which is negative inside S and positive on the outside. These instances are termed SDFs. In both cases, the surface S is implicitly defined by the level set {x | d_S = 0} of the DF.
In Sec. 4 we demonstrate that using UDFs leads to strong compression performance while being generally applicable. However, when handling watertight shapes, SDFs yield further significant improvements.
Truncated Neural Distance Fields. We parameterize d_S using NNs NF_θG with parameters θ_G -in particular MLPs mapping coordinates to the corresponding values of the scalar field, similar to recent work on NFs [10,33,45]. Our goal is to learn compressed representations of 3D geometries in the form of the parameters θ_G and, thus, it is important to limit the number of parameters. To this end, we do not train NDFs to parameterize the entire DF d_S but rather a truncated version of it. Hence, we intuitively only store the information in the DF that is necessary to recover the 3D geometry. Such a truncated DF d_{S,T} is characterized by a maximal distance d* > 0 and defined as
d_{S,T} = d_S if |d_S| ≤ d*, and d_{S,T} = sgn(d_S) · d̃ ∈ {δ ∈ R : |δ| > d*} otherwise,
where sgn(d_S) returns the sign of d_S. We only require |d_{S,T}| to be larger than d* but do not fix its value. Thus, the NDFs can represent the 3D geometry with fewer parameters by focusing the model's capacity on the region closest to the surface.
Architecture. We use sinusoidal activation functions in the MLPs [45] combined with positional encodings [49]. This has been shown to improve the robustness of NFs to quantization [46]. Chibane et al. [10] originally proposed to parameterize UDFs using MLPs with a ReLU activation function for the output of the last layer to enforce NF_θG ≥ 0 ∀x ∈ R^3. In contrast, we apply abs(x) = |x|, which drastically improves performance in the regime of small models (see Sec. 4.1). This originates from the fact that, unlike ReLU, abs(x) allows the model to correctly represent a UDF using negative values prior to the last activation function. This again increases the flexibility of the model. When modeling SDFs, we apply the identity as the final activation function. More details are in the supplement.
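The architecture just described (Fourier-feature positional encodings feeding a small MLP with sinusoidal activations, with an abs output for UDFs and an identity output for SDFs) could be sketched roughly as follows. This is an illustrative reconstruction, not the authors' released code; the default width, the Fourier-feature scale and all module names are assumptions on my part.

```python
import math
import torch
import torch.nn as nn


class FourierFeatures(nn.Module):
    """Positional encoding: project 3D coordinates onto fixed random frequencies."""

    def __init__(self, num_features: int = 16, scale: float = 10.0):
        super().__init__()
        # Random projection matrix, fixed after initialization (not trained).
        self.register_buffer("B", torch.randn(3, num_features) * scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, 3)
        proj = 2.0 * math.pi * x @ self.B                 # (N, num_features)
        return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)


class Sine(nn.Module):
    """Sinusoidal activation as in SIREN; w0 = 30 mirrors the initialization scale in the text."""

    def __init__(self, w0: float = 30.0):
        super().__init__()
        self.w0 = w0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sin(self.w0 * x)


class NeuralDistanceField(nn.Module):
    """MLP with two hidden layers mapping encoded coordinates to a truncated distance value."""

    def __init__(self, width: int = 32, num_fourier: int = 16, signed: bool = False):
        super().__init__()
        self.encode = FourierFeatures(num_fourier)
        self.net = nn.Sequential(
            nn.Linear(2 * num_fourier, width), Sine(),
            nn.Linear(width, width), Sine(),
            nn.Linear(width, 1),
        )
        self.signed = signed  # SDF variant: identity output; UDF variant: abs output.

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        d = self.net(self.encode(x)).squeeze(-1)
        return d if self.signed else d.abs()
```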
Optimization. We train on a dataset of point-distance pairs. Following prior work [10,45], we sample points from a mixture of three distributions -uniformly in the unit cube, uniformly from the level set, and uniformly from the level set with additive Gaussian noise of standard deviation σ. This encourages learning accurate distances close to the surface. When training on PCs, we restrict ourselves to approximately uniformly distributed PCs. Non-uniform PCs can be sub/super-sampled accordingly. In contrast to prior work on compressing other data modalities using NFs [8,13,14,28,46], implicitly representing 3D geometries -in particular in the form of PCs -is susceptible to overfitting, as extracting a mesh from DFs using MC requires the level set to form a 2D manifold. An NF trained on a limited number of points may collapse to a solution where the level set rather resembles a mixture of delta peaks. We counteract overfitting using two methods. First, we find that σ is an important parameter for the tradeoff between reconstruction quality and generalization (see Sec. 4.1); for the distribution of natural shapes, the values σ = 0.01 (SDFs) and σ = 0.025 (UDFs) work well across datasets. Secondly, we penalize the ℓ1-norm of θ. This further has the benefit of sparsifying the parameters θ [53] and, consequently, rendering them more compressible [46]. Overall, we train NFs to predict the above truncated UDFs/SDFs using the following loss function (with d̂ = NF_θG(x)):
L_G(θ_G) = L_D(θ_G) + λ_ℓ1 ||θ_G||_1    (1)
where L_D(θ_G) = E[(d̂ - sgn(d_S) · min(|d_S|, d*))^2] if |d_S| ≤ d* or |d̂| ≤ d*, and 0 otherwise.
Extracting Geometries from Distance Fields. Our compressed representation implicitly encodes the 3D surface. For comparison and visualization purposes, we need to convert it to an explicit representation, namely a PC or mesh, as part of our decoding step. Obtaining a uniformly sampled PC directly from a DF is non-trivial. Hence, we initially convert the DFs into meshes in both PC compression and mesh compression scenarios. In the case of SDFs, we apply MC [30] to obtain a mesh of the 3D geometry. Further, we extract meshes from UDFs using the recently proposed differentiable MC variant MeshUDF [21]. Note that Chibane et al. [10] originally extracted points by projecting uniform samples along the gradient direction of UDFs. However, this leads to undesirable clustering and holes on geometries containing varying curvature. When compressing PCs, we further sample points uniformly from the extracted meshes. Notably, this is the primary reason for the inability of our compression algorithm to perform lossless compression of PCs -even in the limit of very large MLPs. However, it achieves state-of-the-art performance in the regime of strong compression ratios across various datasets on 3D compression -using PCs/meshes with/without attributes (see Sec. 4). Further, sampling PCs from the shape's surface fundamentally limits the reconstruction quality in terms of Chamfer Distance (CD). However, unlike previous methods that approximately memorize the original PC directly, our method learns the underlying geometry." 
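A minimal sketch of the training setup described in this section, under my reading of the text: the 20%/40%/40% point-sampling mixture mentioned in the experiments and the truncated-distance objective of Eq. (1) with the ℓ1 penalty. The exact masking and batching details are assumptions, not the authors' implementation.

```python
import torch


def sample_training_points(surface_points: torch.Tensor, n: int, sigma: float = 0.025):
    """20% uniform in [-1, 1]^3, 40% on the surface, 40% surface points with Gaussian noise."""
    n_uni, n_surf = n // 5, (2 * n) // 5
    idx = torch.randint(len(surface_points), (2 * n_surf,))
    on_surface = surface_points[idx[:n_surf]]
    near_surface = surface_points[idx[n_surf:]] + sigma * torch.randn(n_surf, 3)
    uniform = 2.0 * torch.rand(n_uni, 3) - 1.0
    return torch.cat([uniform, on_surface, near_surface], dim=0)


def truncated_df_loss(model, x, d_gt, d_star=0.1, lambda_l1=1e-8, signed=True):
    """L_G = masked squared error on truncated distances + lambda_l1 * ||theta||_1 (Eq. 1)."""
    d_pred = model(x)
    d_tgt = torch.clamp(d_gt.abs(), max=d_star)
    if signed:
        d_tgt = torch.sign(d_gt) * d_tgt          # keep the sign of the SDF target
    # Supervise only where the target or the prediction lies inside the truncation band.
    mask = ((d_gt.abs() <= d_star) | (d_pred.detach().abs() <= d_star)).float()
    l_d = (mask * (d_pred - d_tgt) ** 2).sum() / mask.sum().clamp(min=1.0)
    l_1 = sum(p.abs().sum() for p in model.parameters())
    return l_d + lambda_l1 * l_1
```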
}, { "figure_ref": [], "heading": "Representing 3D Attributes with Neural Fields", "publication_ref": [ "b37" ], "table_ref": [], "text": "Besides the geometry of 3D data, we further compress its attributes (e.g. color) using NFs. To this end, we follow the high-level approach of other attribute compression methods and compress the attributes given the geometry [38]. Thus, after training an MLP to represent the geometry of a particular 3D shape, we train a separate NF with parameters θ_A to correctly predict attributes on the approximated surface Ŝ of the geometry, x ∈ {x | NF_θG(x) = 0}. Therefore, for a given point x on Ŝ we minimize the ℓ2-distance to the attribute c_NN(x, S) of the nearest neighbour on the true surface S:
L_A(θ_A) = E_{x ∈ Ŝ}[(NF_θA(x) - c_NN(x, S))^2] + λ_ℓ1 ||θ_A||_1,
where λ_ℓ1 represents the strength of the regularization of θ_A and c_NN(x, S) = c(argmin_{x′ ∈ S} ||x - x′||_2), with c(•) extracting the attribute at a surface point. Alternatively, one may also optimize a single NF to jointly represent a geometry and its attributes. However, then the NF has to represent attributes in regions x ∉ Ŝ, which wastes capacity. The supplement contains an empirical verification." }, { "figure_ref": [], "heading": "Compressing Neural Fields", "publication_ref": [ "b2" ], "table_ref": [], "text": "In the proposed compression algorithm, θ_G, and optionally θ_A, represent the 3D data. Therefore, it is important to further compress these using NN compression techniques. We achieve this by first quantizing θ_G/A, retraining the quantized MLP to regain the lost performance and, lastly, entropy coding the resulting quantized values. Subsequently, we describe each step in detail.
Quantization. We perform scalar quantization of θ_G/A using a global bitwidth b, which corresponds to 2^b possible values. We use a separate uniformly-spaced quantization grid for each layer of the MLP. The layer-wise step size s_l is defined by the ℓ∞-norm of the parameters θ^l_G/A of layer l and b,
s_l = ||θ^l_G/A||_∞ / (2^b - 1),
and has to be stored to recover the quantized values. Note that the quantization grid is centered around 0 -where θ_G/A peaks -to improve the gain of lossless compression using entropy coding.
Quantization-Aware Retraining. We perform a few epochs of quantization-aware retraining with a much smaller learning rate. We compute gradients during quantization-aware training using the straight-through estimator [3]. We also experimented with solely training NFs using quantization-aware optimization. However, this drastically decreased convergence speed and, thus, increased the encoding time without improving performance.
Entropy Coding. Finally, we further losslessly compress the quantized parameters θ̂^l_G/A using a near-optimal general-purpose entropy coding algorithm²." 
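The per-layer scalar quantization and the straight-through estimator used for quantization-aware retraining could look roughly like this. The step-size convention (the layer's ℓ∞ norm divided by 2^b - 1) follows my reading of the extracted formula above and should be checked against the original paper; function names are illustrative.

```python
import torch


def quantize_layer(theta: torch.Tensor, bits: int = 8):
    """Uniform, zero-centred scalar quantization of one layer's parameters."""
    step = theta.abs().max() / (2 ** bits - 1)   # assumed reading of the step-size formula
    codes = torch.round(theta / step)            # integer code words, later entropy coded
    return codes, step


def dequantize_layer(codes: torch.Tensor, step: torch.Tensor) -> torch.Tensor:
    return codes * step


class QuantizeSTE(torch.autograd.Function):
    """Fake-quantize in the forward pass; pass the gradient straight through in the backward pass."""

    @staticmethod
    def forward(ctx, theta, bits):
        step = theta.abs().max() / (2 ** bits - 1)
        return torch.round(theta / step) * step

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None  # identity gradient w.r.t. theta, no gradient for bits
```

During retraining, each layer's weight tensor would be passed through QuantizeSTE.apply(w, 8) before the forward pass, so the loss is computed on quantized weights while gradients still update the full-precision copies; the rounded codes are then losslessly entropy coded.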
}, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b53", "b3", "b9", "b14", "b21", "b45", "b15", "b42", "b42", "b55", "b47", "b25", "b44", "b53", "b3", "b14" ], "table_ref": [], "text": "Sec. 4.1 depicts experiments on geometry compression -both for PCs and meshes. Moreover, Sec. 4.1 analyses the impact of the components of our compression algorithm. Sec. 4.2 investigates the performance on 3D geometry and attribute compression. We exclusively consider color attributes.
Datasets. We conduct experiments on three datasets. Firstly, we evaluate geometry compression -PC as well as mesh compression -on a set of shapes from the Stanford shape repository [54] and a subset of the MGN dataset [4], which was also used by Chibane et al. [10]. The former are high quality meshes consisting of both watertight and non-watertight shapes. The latter are lower quality meshes of clothing. Moreover, we conduct experiments on PCs extracted from 8i Voxelized Full Bodies (8iVFB) [15]. 8iVFB consists of four sequences of high quality colored PCs. Each PC contains between 700,000 and 1,000,000 points. We use the first frame of each sequence in our experiments. We refer to the supplement for visualizations of each dataset.
Data Preprocessing. We center each 3D shape around the origin. Then, we scale it by dividing by the magnitude of the point (PC), resp. vertex (mesh), with the largest distance to the origin. For PCs, we compute the ground truth distance d_S using the nearest neighbour in the PC. For meshes, we use a CUDA implementation to convert them to SDFs [22], which we further adapt to generate UDFs. We train on points from a mixture of three distributions: 20% are sampled uniformly x ∼ [-1, 1]^3, 40% are sampled uniformly from the surface S, and the remaining 40% are sampled uniformly from S and perturbed by additive Gaussian noise N(0; σ). For SDFs we set σ = 0.01 and for UDFs σ = 0.025 if not stated otherwise. We sample 100,000 points from S in our experiments on geometry compression on the Stanford shape repository and the MGN dataset, and we use all points in the ground truth PC on 8iVFB. Color attributes are translated and scaled to the interval [-1, 1].
Evaluation and Metrics. We evaluate the reconstruction quality of geometry compression using the CD. The CD is calculated between the ground truth PC and the reconstructed PC when handling PCs. For mesh compression, we report the CD between PCs uniformly sampled from the ground truth and reconstructed mesh. If not stated otherwise, we use 100,000 points on the Stanford shape repository and the MGN dataset, and all available points on 8iVFB. We evaluate the quality of reconstructed attributes using a metric based on the Peak Signal-to-Noise-Ratio (PSNR). Therefore, we compute the PSNR between the attribute of each point in the ground truth PC and its nearest neighbour in the reconstructed PC, and vice versa. The final metric is then the average between both PSNRs. Subsequently, we simply refer to this metric as PSNR. Following Strümpler et al. [46], we traverse the Rate-Distortion curve by varying the width ∈ {16, 24, 32, 48, 64, 96} of the MLP.
Baselines. We compare NF-based 3D compression with the non-learned baselines Draco [16], GPCC [43] and VPCC [43]. VPCC, which is based on video compression, is the non-learned state-of-the-art on compressing 3D PCs including attributes. We compare our method with the learned neural baseline PCGCv2 [56], which is the state-of-the-art RDAE based on 3D convolutions, and VQAD [48], which builds quantized hierarchical feature grids. None of the baselines supports all data modalities/tasks used in our experiments. Geometry compression on meshes is only supported by Draco. On geometry compression using PCs, we compare with all baselines. Lastly, joint geometry and attribute compression is only supported by GPCC and VPCC, which we evaluate on 8iVFB. Note that Draco supports normal but not color attribute compression. When sampling from meshes, we also report the theoretical minimum, i.e. the expected distance between independently sampled point sets.
Optimization. We train all NFs using a batch size of 10,000 for 500 epochs using a learning rate of 10^-4 and the Adam optimizer [26]. We use λ_ℓ1 = 10^-8 and d* = 0.1. Each MLP contains 2 hidden layers. We follow Sitzmann et al. [45] and use the factor 30 as initialization scale. We use 16 Fourier features as positional encodings on geometry compression and 8 on attribute compression. NFs are quantized using a bitwidth b = 8 and quantization-aware retraining is performed for 50 epochs using a learning rate of 10^-7. Each NF is trained on a single V100 or A100.
Figure 2. Rate-distortion plot for PC compression on the Stanford shape repository [54] (a/b), on the MGN dataset [4] (c) and 8iVFB [15] (d), depicting the average CD and number of kilobytes for NFs based on UDFs/SDFs, PCGCv2, VQAD, Draco, GPCC and VPCC. PCGCv2/VQAD are learned neural baselines, and Draco, GPCC and VPCC are non-learned standards. On the Stanford shape repository we report performance on a subset only containing watertight shapes (a) and all shapes (b). We do not evaluate the performance of VQAD and NFs using SDFs on the MGN dataset since it is exclusively comprised of non-watertight shapes. There is no theo. min. in (d) as we operate directly on PCs." }, { "figure_ref": [ "fig_3", "fig_4" ], "heading": "Geometry Compression", "publication_ref": [ "b6", "b40", "b9", "b14" ], "table_ref": [], "text": "We investigate NF-based 3D geometry compression and compare it with the baselines. We evaluate our method on PC and mesh compression and verify design choices.
Point Clouds. We evaluate PC compression on the Stanford shape repository, the MGN dataset and 8iVFB. Fig. 2 (a) & (b) depict the result on the Stanford shape repository, where we show results on the watertight subset (a) and all shapes (b), and Fig. 2 (c) & (d) contain the results on the MGN dataset and 8iVFB. Further, Fig. 3 shows qualitative results of reconstructed PCs on the Stanford shape repository. We observe that for watertight shapes on the Stanford shape repository, SDFs outperform the baselines for all levels of compression. On 8iVFB, NF-based compression is only outperformed by PCGCv2, whose performance drops steeply on other datasets. PCGCv2 was trained on ShapeNet [7], which contains artificial shapes with fewer details than the real scans in the Stanford shape repository and the MGN dataset. Despite its strong performance on 8iVFB, we hypothesize that PCGCv2 is very sensitive to the shift in the distribution of high-frequency content in the geometry. As expected, the performance of SDFs deteriorates when adding non-watertight shapes since the SDF is not well defined in this case. Further, UDFs also outperform the baselines for stronger compression ratios. Similarly, on the MGN dataset, where we do not evaluate SDFs as all shapes are non-watertight, and on 8iVFB, UDFs perform well for strong compression ratios and are only outperformed by VPCC on weaker compression ratios and by PCGCv2 on 8iVFB.
Meshes. Fig. 4 (a) & (b) contain quantitative mesh compression results on the Stanford shape repository and the MGN dataset. Moreover, we refer to the supplement for qualitative results of reconstructed meshes. We only evaluate UDFs since both datasets contain non-watertight shapes. We find that NF-based compression outperforms Draco on more complex meshes (Stanford shape repository), while Draco can outperform UDFs on simpler meshes (MGN dataset). This is reasonable since the meshes in the MGN dataset contain large planar regions, which benefits the edge-breaker algorithm [41] used by Draco, as it is easier to represent such regions with only a few triangles.
Architecture Choices. We investigate the impact of using truncated DFs and applying the abs activation function to the output of UDFs. Fig. 5 (a) depicts the result. We observe that both truncation and the abs activation function are essential for UDFs with strong compression performance. Note that Chibane et al. [10] used ReLU as the final activation function.
We refer to the supplement for a demonstration that ReLU performs similarly to a linear activation function.
Regularization. We demonstrate the impact of regularization in Fig. 5 (b) & (c). Adding Gaussian noise σ ∼ N(0; 0.025) and applying an ℓ1-penalty to the parameters θ_G increasingly improves performance for larger NFs. This is expected, as larger NNs require more regularization to prevent overfitting, which is a problem for NF-based compression of 3D geometries -in contrast to other data modalities.
UDF Parameter Initialization. Interestingly, we observe that the CD of NF-based compression using UDFs has a large variance. In fact, the primary source of this randomness is the parameter initialization, which we find by optionally fixing the random seed of the dataset and parameter initialization. For this result we refer the reader to the supplement. Moreover, qualitatively we find in Fig. 5 (d) that UDFs which converge to an SDF prior to the abs activation function yield a lower CD.
Runtime. We compare the encoding and decoding runtime of NF-based compression with the baselines (see Tab. 1). The encoding runtime of our approach is slower compared to the baselines since it needs to fit an NN to each instance. Notably, this can be improved using meta-learned initializations [46]. Interestingly, compared to VPCC, which is the only baseline that is competitive in terms of compression performance on weaker compression ratios, NF-based compression is competitive if decoding is performed on a GPU. This renders NF-based compression practical when a 3D shape needs to be decoded many more times than encoded." }, { "figure_ref": [], "heading": "Attribute Compression", "publication_ref": [], "table_ref": [], "text": "Furthermore, we evaluate NF-based compression on PCs containing color attributes on 8iVFB and compare it with GPCC and VPCC. Fig. 4 (c) & (d) show quantitative and Fig. 6 qualitative results. NF-based compression using SDFs outperforms both baselines on geometry compression. UDFs outperform VPCC for strong compression ratios. On attribute compression, SDFs and UDFs outperform GPCC by a large margin, but VPCC only for stronger compression ratios. We show in the supplement that jointly compressing geometry and attributes leads to worse performance. " 
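For reference, the two evaluation metrics used in the experiments (Chamfer Distance between point clouds and the symmetric nearest-neighbour PSNR on colour attributes) can be computed brute-force as below. Conventions differ between papers (squared vs. unsquared distances, the peak value), so treat this as an illustration rather than the exact evaluation protocol.

```python
import torch


def chamfer_distance(p: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer Distance between point clouds p (N, 3) and q (M, 3)."""
    d = torch.cdist(p, q)                         # (N, M) pairwise Euclidean distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()


def attribute_psnr(p, colors_p, q, colors_q, peak: float = 1.0) -> torch.Tensor:
    """Average of the two directed nearest-neighbour PSNRs on point colours."""
    d = torch.cdist(p, q)
    mse_pq = ((colors_p - colors_q[d.argmin(dim=1)]) ** 2).mean()  # p -> nearest point in q
    mse_qp = ((colors_q - colors_p[d.argmin(dim=0)]) ** 2).mean()  # q -> nearest point in p
    psnr = lambda mse: 10.0 * torch.log10(peak ** 2 / mse)
    return 0.5 * (psnr(mse_pq) + psnr(mse_qp))
```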
}, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b27", "b13", "b45", "b45", "b44", "b0", "b1" ], "table_ref": [], "text": "We proposed an algorithm for compressing the geometry as well as attributes of 3D data using NFs. We introduced two variants -one specialized to watertight shapes based on SDFs and another more general approach based on UDFs (see Sec. 3). For watertight shapes, SDFs demonstrate strong geometry compression performance across all compression ratios (see Sec. 4.1). UDFs perform well on geometry compression -in particular for stronger compression ratios. However, VPCC can outperform UDFs on weaker compression ratios. Notably, the learned neural baseline PCGCv2 shows strong performance on 8iVFB, but suffers from large performance drops on the Stanford shape repository and the MGN dataset. This highlights another strength of NF-based neural compression -it does not exhibit the sensitivity to domain shifts of other neural compression algorithms. On attribute compression (see Sec. 4.2), SDFs as well as UDFs outperform both baselines for stronger compression ratios, while VPCC performs better when less compression is required.
Interestingly, we observed in Sec. 4.1 that the decoding runtime of NF-based compression is competitive on a GPU. This is in line with recent findings on NF-based video compression [28]. However, the encoding runtime remains one order of magnitude larger than the next slowest method (VPCC). This gap can potentially be closed when using meta-learned initializations, which have been found to improve NF-based image compression [14,46] and speed up convergence by a factor of 10 [46]. Furthermore, we found that the performance of UDFs strongly depends on the parameter initialization (see Sec. 4.1). When using the abs activation function, UDFs are flexible regarding the values they predict prior to it. On watertight shapes, UDFs can converge to a function predicting the same sign inside and outside, or one that predicts different signs. In the presence of a sign flip, UDFs perform better. Thus, a promising direction for improving the more general compression method based on UDFs lies in novel initialization methods beyond the one provided in Sitzmann et al. [45]. Notably, one line of work aims at learning SDFs from raw point clouds by initializing NFs such that they approximately resemble an SDF of an r-ball after initialization [1,2]. However, none of these methods work when using positional encodings, which are necessary for strong performance." } ]
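As a usage illustration of the decoding step described in Sec. 3.1, a neural SDF can be evaluated on a regular grid and meshed with marching cubes; the UDF variant instead relies on MeshUDF, which is not shown here. The grid resolution, the use of scikit-image and the absence of chunking are simplifications on my part, not the authors' decoding pipeline.

```python
import torch
from skimage import measure  # pip install scikit-image


@torch.no_grad()
def decode_sdf_to_mesh(model, resolution: int = 128, bound: float = 1.0):
    """Evaluate a neural SDF on a dense grid and extract the zero level set with marching cubes."""
    axis = torch.linspace(-bound, bound, resolution)
    grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1)   # (R, R, R, 3)
    sdf = model(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)  # chunk in practice
    spacing = (2.0 * bound / (resolution - 1),) * 3
    verts, faces, _, _ = measure.marching_cubes(sdf.cpu().numpy(), level=0.0, spacing=spacing)
    return verts - bound, faces  # shift vertices back into [-bound, bound]^3
```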
Neural Fields (NFs) have gained momentum as a tool for compressing various data modalities -e.g. images and videos. This work leverages previous advances and proposes a novel NF-based compression algorithm for 3D data. We derive two versions of our approach -one tailored to watertight shapes based on Signed Distance Fields (SDFs) and, more generally, one for arbitrary non-watertight shapes using Unsigned Distance Fields (UDFs). We demonstrate that our method excels at geometry compression on 3D point clouds as well as meshes. Moreover, we show that, due to the NF formulation, it is straightforward to extend our compression algorithm to compress both the geometry and attributes (e.g. color) of 3D data.
3D Compression Using Neural Fields
[ { "figure_caption": "Figure 1 .1Figure 1. Overview of NF-based 3D compression. Geometry compression is in the upper part (see Sec.3.1), while the lower part visualizes attribute compression (optional) (see Sec. 3.2). Encoding is shown on the left side. It contains two parts: a) geometry representation using NFs (see Sec. 3.1) and, resp., attribute representation (see Sec.3.2) and b) the NF parameter compression (see Sec.3.3).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 1. Overview of NF-based 3D compression. Geometry compression is in the upper part (see Sec.3.1), while the lower part visualizes attribute compression (optional) (see Sec. 3.2). Encoding is shown on the left side. It contains two parts: a) geometry representation using NFs (see Sec. 3.1) and, resp., attribute representation (see Sec.3.2) and b) the NF parameter compression (see Sec.3.3).", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. We depict qualitative results on \"Armadillo\" of the Stanford shape repository for Draco (a), PCGCv2 (b), GPCC (c), VPCC (d), Vector Quantized Autodecoder (e), UDFs/SDFs (f/g) and the ground truth (h).", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Rate-distortion plot for mesh compression on the Stanford shape repository[54] (a) and the MGN dataset[4] (b), and for geometry and attribute compression (c/d) on 8iVFB[15]. On mesh compression, we report the average CD and kilobytes for NFs based on UDFs/SDFs and Draco. We do not evaluate the performance of NFs using SDFs due to non-watertight shapes. In (c)/(d), we report the average CD/PSNR and kilobytes for NFs-based on UDFs/SDFs, GPCC and VPCC. There is no theo. min. in (c) as we use the PCs.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .Figure 6 .56Figure 5. Impact of using truncated (Trunc.) DFs and the abs activation function (a), regularization effect of the standard deviation σ added to points during training (b) on the Stanford shape repository[54]. (c) depicts the impact of the ℓ1 penalty on 8iVFB. We plot Draco, VPCC and GPCC in the background for reference (grey). We observe that UDFs demonstrate the strongest performance when combining truncated unsigned distance fields with the abs activation function (a), when adding noise of standard deviation σ = 0.025 to the raw PC (b) and when applying an ℓ1-penalty to its parameters. We further visualize a 1D-cut through the center of the shape Armadillo of the dataset Stanford shape repository for two independently trained UDFs with a hidden dimension of 16 (d). We observe that the UDF which converges to a SDF (red) prior to the abs activation function yields lower CD.", "figure_data": "", "figure_id": "fig_5", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "We report the average encoding (top) and decoding (bottom) runtime in [s] on the Stanford shape repository of Draco, PCGCv2, GPCC, VPCC, VQAD and our approach based on UDFs. Draco, GPCC and VPCC are evaluated on a CPU, while NF-based compression, VQAD and PCGCv2 are evaluated on a single V100.", "figure_data": "Draco PCGCv2 GPCC VPCC VQAD OURS0.060.500.2646.11725000.040.960.042.33.31.6", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Janis Postels; Yannick Strümpler; Klara Reichard; Luc Van Gool; Federico Tombari
[ { "authors": "Matan Atzmon; Yaron Lipman", "journal": "", "ref_id": "b0", "title": "Sal: Sign agnostic learning of shapes from raw data", "year": "2020" }, { "authors": "Matan Atzmon; Yaron Lipman", "journal": "", "ref_id": "b1", "title": "Sald: Sign agnostic learning with derivatives", "year": "2021" }, { "authors": "Yoshua Bengio; Nicholas Léonard; Aaron Courville", "journal": "", "ref_id": "b2", "title": "Estimating or propagating gradients through stochastic neurons for conditional computation", "year": "2013" }, { "authors": "Bharat Lal Bhatnagar; Garvita Tiwari; Christian Theobalt; Gerard Pons-Moll", "journal": "IEEE", "ref_id": "b3", "title": "Multi-garment net: Learning to dress 3d people from images", "year": "2019" }, { "authors": "Thomas Bird; Johannes Ballé; Saurabh Singh; Philip A Chou", "journal": "IEEE", "ref_id": "b4", "title": "3d scene compression through entropy penalized neural representation functions", "year": "2021" }, { "authors": "Chao Cao; Marius Preda; Titus Zaharia", "journal": "", "ref_id": "b5", "title": "3d point cloud compression: A survey", "year": "2019" }, { "authors": "X Angel; Thomas Chang; Leonidas Funkhouser; Pat Guibas; Qixing Hanrahan; Zimo Huang; Silvio Li; Manolis Savarese; Shuran Savva; Hao Song; Jianxiong Su; Li Xiao; Fisher Yi; Yu", "journal": "", "ref_id": "b6", "title": "ShapeNet: An Information-Rich 3D Model Repository", "year": "2015" }, { "authors": "Bo Hao Chen; Hanyu He; Yixuan Wang; Ren; Nam Ser; Abhinav Lim; Shrivastava", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b7", "title": "Nerv: Neural representations for videos", "year": "2021" }, { "authors": "Zhiqin Chen; Hao Zhang", "journal": "", "ref_id": "b8", "title": "Learning implicit fields for generative shape modeling", "year": "2019" }, { "authors": "Julian Chibane; Gerard Pons-Moll", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b9", "title": "Neural unsigned distance fields for implicit function learning", "year": "2020" }, { "authors": "Muhammet Bharath Bhushan Damodaran; Franck Balcilar; Pierre Galpin; Hellier", "journal": "", "ref_id": "b10", "title": "Rqat-inr: Improved implicit neural image compression", "year": "2023" }, { "authors": "Ricardo L ; De Queiroz; Philip A Chou", "journal": "IEEE Transactions on Image Processing", "ref_id": "b11", "title": "Compression of 3d point clouds using a region-adaptive hierarchical transform", "year": "2016" }, { "authors": "Emilien Dupont; Adam Goliński; Milad Alizadeh; Yee Whye Teh; Arnaud Doucet", "journal": "", "ref_id": "b12", "title": "Coin: Compression with implicit neural representations", "year": "2021" }, { "authors": "Emilien Dupont; Hrushikesh Loya; Milad Alizadeh; Adam Golinski; Y Whye Teh; Arnaud Doucet", "journal": "Transactions on Machine Learning Research", "ref_id": "b13", "title": "Coin++: Neural compression across modalities", "year": "2022" }, { "authors": "Bob Eugene D'eon; Taos Harrison; Philip A Myers; Chou", "journal": "", "ref_id": "b14", "title": "8i voxelized full bodies-a voxelized point cloud dataset", "year": "2017" }, { "authors": "Frank Galligan; Michael Hemmer; Ondrej Stava; Fan Zhang; Jamieson Brettle", "journal": "", "ref_id": "b15", "title": "Google/draco: a library for compressing and decompressing 3d geometric meshes and point clouds", "year": "2018" }, { "authors": "Harry Gao; Weijie Gan; Zhixin Sun; Ulugbek S Kamilov", "journal": "", "ref_id": "b16", "title": "Sinco: A novel structural regularizer for image compression using implicit 
neural representations", "year": "2022" }, { "authors": " Graphics", "journal": "", "ref_id": "b17", "title": "Call for proposals for point cloud compression v2", "year": "1676" }, { "authors": "Nuno Mm André Fr Guarda; Fernando Rodrigues; Pereira", "journal": "IEEE", "ref_id": "b18", "title": "Deep learning-based point cloud coding: A behavior and performance study", "year": "2019" }, { "authors": "Nuno Mm André Fr Guarda; Fernando Rodrigues; Pereira", "journal": "IEEE", "ref_id": "b19", "title": "Deep learning-based point cloud geometry coding: Rd control through implicit and explicit quantization", "year": "2020" }, { "authors": "Federico Benoit Guillard; Pascal Stella; Fua", "journal": "Springer", "ref_id": "b20", "title": "Meshudf: Fast and differentiable meshing of unsigned distance field networks", "year": "2022" }, { "authors": "Zekun Hao; Hadar Averbuch-Elor; Noah Snavely; Serge Belongie", "journal": "", "ref_id": "b21", "title": "Dualsdf: Semantic shape manipulation using a two-level representation", "year": "2020" }, { "authors": "Berivan Isik", "journal": "", "ref_id": "b22", "title": "Neural 3d scene compression via model compression", "year": "2021" }, { "authors": "Berivan Isik; Philip Chou; Sung ; Jin Hwang; Nicholas Johnston; George Toderici", "journal": "Frontiers in Signal Processing", "ref_id": "b23", "title": "Lvac: Learned volumetric attribute compression for point clouds using coordinate based networks", "year": "2021" }, { "authors": "Subin Kim; Sihyun Yu; Jaeho Lee; Jinwoo Shin", "journal": "", "ref_id": "b24", "title": "Scalable neural video representations with learnable positional features", "year": "" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b25", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Théo Ladune; Pierrick Philippe; Félix Henry; Gordon Clare", "journal": "", "ref_id": "b26", "title": "Cool-chic: Coordinate-based low complexity hierarchical image codec", "year": "2022" }, { "authors": "Chan Joo; Daniel Lee; Jong Hwan Rho; Eunbyung Ko; Park", "journal": "", "ref_id": "b27", "title": "Ffnerv: Flow-guided frame-wise neural representations for videos", "year": "2022" }, { "authors": "Lingzhi Li; Zhen Shen; Zhongshu Wang; Li Shen; Liefeng Bo", "journal": "", "ref_id": "b28", "title": "Compressing volumetric radiance fields to 1 mb", "year": "2023" }, { "authors": "E William; Harvey E Lorensen; Cline", "journal": "ACM siggraph computer graphics", "ref_id": "b29", "title": "Marching cubes: A high resolution 3d surface construction algorithm", "year": "1987" }, { "authors": "Sharath Shishira R Maiya; Max Girish; Hanyu Ehrlich; Kwot Sin Wang; Patrick Lee; Pengxiang Poirson; Chen Wu; Abhinav Wang; Shrivastava", "journal": "", "ref_id": "b30", "title": "Nirvana: Neural implicit representations of videos with adaptive networks and autoregressive patch-wise modeling", "year": "2022" }, { "authors": "Lars Mescheder; Michael Oechsle; Michael Niemeyer; Sebastian Nowozin; Andreas Geiger", "journal": "", "ref_id": "b31", "title": "Occupancy networks: Learning 3d reconstruction in function space", "year": "2019" }, { "authors": "Jeong Joon Park; Peter Florence; Julian Straub; Richard Newcombe; Steven Lovegrove", "journal": "", "ref_id": "b32", "title": "Deepsdf: Learning continuous signed distance functions for shape representation", "year": "2019" }, { "authors": "Charles Ruizhongtai; Qi ; Li Yi; Hao Su; Leonidas J Guibas", "journal": "Advances in neural information processing systems", "ref_id": 
"b33", "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "year": "2017" }, { "authors": "Maurice Quach; Giuseppe Valenzise; Frederic Dufaux", "journal": "IEEE", "ref_id": "b34", "title": "Learning convolutional transforms for lossy point cloud geometry compression", "year": "2019" }, { "authors": "Maurice Quach; Giuseppe Valenzise; Frederic Dufaux", "journal": "IEEE", "ref_id": "b35", "title": "Folding-based compression of point cloud attributes", "year": "2020" }, { "authors": "Maurice Quach; Giuseppe Valenzise; Frederic Dufaux", "journal": "IEEE", "ref_id": "b36", "title": "Improved deep point cloud geometry compression", "year": "2020" }, { "authors": "Maurice Quach; Jiahao Pang; Dong Tian; Giuseppe Valenzise; Frédéric Dufaux", "journal": "Frontiers in Signal Processing", "ref_id": "b37", "title": "Survey on deep learning-based point cloud compression", "year": "2022" }, { "authors": "Edoardo Mello Rella; Ajad Chhatkuli; Ender Konukoglu; Luc Van Gool", "journal": "", "ref_id": "b38", "title": "Neural vector fields for surface representation and inference", "year": "2022" }, { "authors": "Daniel Rho; Junwoo Cho; Jong Hwan Ko; Eunbyung Park", "journal": "", "ref_id": "b39", "title": "Neural residual flow fields for efficient video representations", "year": "2022" }, { "authors": "Jarek Rossignac", "journal": "IEEE transactions on visualization and computer graphics", "ref_id": "b40", "title": "Edgebreaker: Connectivity compression for triangle meshes", "year": "1999" }, { "authors": "Jonathan Schwarz; Yee Whye Teh", "journal": "Transactions of Machine Learning Research", "ref_id": "b41", "title": "Meta-learning sparse compression networks", "year": "2022" }, { "authors": "Sebastian Schwarz; Marius Preda; Vittorio Baroncini; Madhukar Budagavi; Pablo Cesar; Philip A Chou; Robert A Cohen; Maja Krivokuća; Sébastien Lasserre; Zhu Li", "journal": "IEEE Journal on Emerging and Selected Topics in Circuits and Systems", "ref_id": "b42", "title": "Emerging mpeg standards for point cloud compression", "year": "2018" }, { "authors": "Xihua Sheng; Li Li; Dong Liu; Zhiwei Xiong; Zhu Li; Feng Wu", "journal": "IEEE Transactions on Multimedia", "ref_id": "b43", "title": "Deep-pcac: An end-to-end deep lossy compression framework for point cloud attributes", "year": "2021" }, { "authors": "Julien Vincent Sitzmann; Alexander Martel; David Bergman; Gordon Lindell; Wetzstein", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b44", "title": "Implicit neural representations with periodic activation functions", "year": "2020" }, { "authors": "Yannick Strümpler; Janis Postels; Ren Yang; Luc Van Gool; Federico Tombari", "journal": "Springer", "ref_id": "b45", "title": "Implicit neural representations for image compression", "year": "2022" }, { "authors": "Towaki Takikawa; Joey Litalien; Kangxue Yin; Karsten Kreis; Charles Loop; Derek Nowrouzezahrai; Alec Jacobson; Morgan Mcguire; Sanja Fidler", "journal": "", "ref_id": "b46", "title": "Neural geometric level of detail: Real-time rendering with implicit 3d shapes", "year": "2021" }, { "authors": "Towaki Takikawa; Alex Evans; Jonathan Tremblay; Thomas Müller; Morgan Mcguire; Alec Jacobson; Sanja Fidler", "journal": "", "ref_id": "b47", "title": "Variable bitrate neural fields", "year": "2022" }, { "authors": "Matthew Tancik; Pratul Srinivasan; Ben Mildenhall; Sara Fridovich-Keil; Nithin Raghavan; Utkarsh Singhal; Ravi Ramamoorthi; Jonathan Barron; Ren Ng", "journal": "Advances in Neural Information 
Processing Systems", "ref_id": "b48", "title": "Fourier features let networks learn high frequency functions in low dimensional domains", "year": "2020" }, { "authors": "Matthew Tancik; Ben Mildenhall; Terrance Wang; Divi Schmidt; P Pratul; Jonathan T Srinivasan; Ren Barron; Ng", "journal": "", "ref_id": "b49", "title": "Learned initializations for optimizing coordinate-based neural representations", "year": "2021" }, { "authors": "Danhang Tang; Mingsong Dou; Peter Lincoln; Philip Davidson; Kaiwen Guo; Jonathan Taylor; Sean Fanello; Cem Keskin; Adarsh Kowdle; Sofien Bouaziz", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b50", "title": "Real-time compression and streaming of 4d performances", "year": "2018" }, { "authors": "Danhang Tang; Saurabh Singh; Philip A Chou; Christian Hane; Mingsong Dou; Sean Fanello; Jonathan Taylor; Philip Davidson; G Onur; Yinda Guleryuz; Zhang", "journal": "", "ref_id": "b51", "title": "Deep implicit volume compression", "year": "2020" }, { "authors": "Robert Tibshirani", "journal": "Journal of the Royal Statistical Society: Series B (Methodological)", "ref_id": "b52", "title": "Regression shrinkage and selection via the lasso", "year": "1996" }, { "authors": "Greg Turk; Marc Levoy", "journal": "", "ref_id": "b53", "title": "Zippered polygon meshes from range images", "year": "1994" }, { "authors": "Jianqiang Wang; Zhan Ma", "journal": "IEEE", "ref_id": "b54", "title": "Sparse tensor-based point cloud attribute compression", "year": "2022" }, { "authors": "Jianqiang Wang; Dandan Ding; Zhu Li; Zhan Ma", "journal": "IEEE", "ref_id": "b55", "title": "Multiscale point cloud geometry compression", "year": "2021" }, { "authors": "Jianqiang Wang; Hao Zhu; Haojie Liu; Zhan Ma", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b56", "title": "Lossy point cloud geometry compression via end-to-end learning", "year": "2021" }, { "authors": "Yiheng Xie; Towaki Takikawa; Shunsuke Saito; Or Litany; Shiqin Yan; Numair Khan; Federico Tombari; James Tompkin; Vincent Sitzmann; Srinath Sridhar", "journal": "Computer Graphics Forum", "ref_id": "b57", "title": "Neural fields in visual computing and beyond", "year": "2022" }, { "authors": "Yunfan Zhang; Ties Van Rozendaal; Johann Brehmer; Markus Nagel; Taco Cohen", "journal": "", "ref_id": "b58", "title": "Implicit neural video compression", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 50.64, 617.66, 233.52, 25.84 ], "formula_id": "formula_0", "formula_text": "d S,T = d S if |d S,T | ≤ d * sgn(d S )d ∈ {δ ∈ R : |δ| > d * } else." }, { "formula_coordinates": [ 3, 360.24, 564.44, 185.6, 10.9 ], "formula_id": "formula_1", "formula_text": "L G (θ G ) = L D (θ G ) + λ ℓ1 ||θ G || 1(1)" }, { "formula_coordinates": [ 3, 308.88, 581.61, 236.24, 36.94 ], "formula_id": "formula_2", "formula_text": "L D (θ G ) = E d -sgn(d S ) min(|d S |, d * ) 2 if |d S | ≤ d * or | d| ≤ d * and 0 otherwise." }, { "formula_coordinates": [ 4, 308.88, 466.41, 236.34, 53.74 ], "formula_id": "formula_3", "formula_text": "L A (θ A ) = E x∈ Ŝ (N F θA (x) -c N N (x, S)) 2 + λ ℓ1 ||θ A || 1 λ ℓ1 represents the strength of the regularization of θ A and c N N (x, S) = c(argmin x ′ ∈S x-x ′" }, { "formula_coordinates": [ 5, 135.12, 122.69, 64.46, 29.08 ], "formula_id": "formula_4", "formula_text": "s l = θ l G/A ∞ 2 b -1" } ]
10.18653/v1/W18-0906
2023-11-21
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b50", "b40", "b61", "b49", "b60", "b15", "b9", "b46", "b16", "b18", "b47", "b19", "b13", "b66", "b67", "b14", "b11", "b20", "b25", "b68", "b3", "b27", "b2", "b52", "b8", "b42", "b55", "b29", "b36", "b22", "b69", "b21", "b54", "b70", "b18", "b17", "b12", "b58", "b33" ], "table_ref": [], "text": "Many words in the lexicon are polysemous in that the same word form can express multiple distinct yet related senses: for instance, some English verbs describing our interactions with physical objects such as get, grasp can also denote the acquisition or distribution of abstract knowledge (e.g. to grasp/get someone's idea); as a result, human speakers are able to extend the meaning of other interaction verbs like steal to form metaphorical expressions such as \"to steal information\". On the other hand, although recent work suggests that distributed semantic models such as word embeddings and contextualized language models can be applied Given two conceptually related semantic domains (e.g. ITEM and INFORMATION) and usages of polysemous words describing both domains (e.g. the verbs get, grasp, sell that can take both ITEM class and INFORMATION class nouns as objects), we wish to extend the meaning of another word (e.g. steal with its literal sense only) from denoting one of the two domains to denoting both.\nto disambiguate related word senses (Reisinger and Mooney, 2010;Mikolov et al., 2013;Wiedemann et al., 2019;Reif et al., 2019) and recognize regular relations between lexical items (Boleda et al., 2012a;Vulić et al., 2020;Garí Soler and Apidianaki, 2021), few has investigated whether machines can also productively leverage the detected regularity to generate and understand novel language use in a human-like way. Linguists and cognitive scientists have suggested that the extensional processes of many polysemous words from conventional to novel senses are governed by the same set of generative lexical rules (Copestake and Briscoe, 1995;Pustejovsky, 1998;Gentner, 1983;Gentner et al., 2001;Pustejovsky and Rumshisky, 2010) and are therefore intrinsically related to each other -that is, word meaning extensions exhibit systematicity, as suggested by both theoretical studies of human cognition (Gentner and Toupin, 1986;Fodor and Pylyshyn, 1988) and empirical investigations of word meaning change (Xu and Kemp, 2015;Xu et al., 2017;Fugikawa et al., 2023). Here we show that neural language models often fail to generate plausible novel word meaning that bears predictable system-atic relations with existing senses, a pattern that is consistent with their poor systematicity in NLP (Ettinger et al., 2018;Goodwin et al., 2020;Keysers et al., 2020;Yanaka et al., 2020) and similar failures observed in other domains of machine learning (Bentivogli et al., 2016;Lake and Baroni, 2018;Bahdanau et al., 2018). The lack of systematicity in word meaning extension also explains recent findings that language models tend to struggle at processing under-represented figurative expressions including metaphor (Stowe et al., 2022), simile (Chakrabarty et al., 2022) and slang (Ni and Wang, 2017;Sun et al., 2022).\nA recent line of work has proposed to predict word meaning extension based on the cognitive theory of chaining (Lakoff, 1987;Malt et al., 1999), where novel meaning is linked to existing ones due to their proximity in semantic space (Habibi et al., 2020;Yu and Xu, 2021;Grewal and Xu, 2021;Sun et al., 2021;Yu and Xu, 2023). 
However, existing chaining models prefer extensions across literally similar domains with high overlap in semantic features, while ignoring the relational similarity between word senses that is essential to understanding conceptual and linguistic metaphors (Gentner et al., 2001;Gentner and Bowdle, 2008). As a result, chaining models often fail to predict many figurative word senses that share few similar semantic features with the literal meaning.\nWe propose a novel task called systematic word meta-sense extension (SWORME) to evaluate a language model's ability to predict regular types of word meaning extension in naturalistic context. As illustrated in Figure 1, given two semantic domains that are conceptually related via general cognitive processes such as analogy, we wish to simulate the scenario where a person, after learning usages of polysemous words describing both domains, can leverage the regular relation between them to extend the meaning of a new target word from one domain to the other. Inspired by research in analogical inference (Falkenhainer et al., 1989;Turney, 2006;Levy et al., 2015), we introduce a new model that infers novel word meta-sense based on the relational similarity between systematically alternating word meta-senses, which predicts both incrementally and radically novel usages for over 7,300 polysemous English words." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Regular polysemy and meaning extension", "publication_ref": [ "b46", "b9", "b30", "b7", "b56", "b59" ], "table_ref": [], "text": "Several lexical semantics and cognitive linguistic theories have been proposed to explain word meaning extension using symbolic rules operating on the semantic structures of lexical entries, including the Generative Lexicon theory by Pustejovsky (1998), the semi-productive sense extension framework by Copestake and Briscoe (1995), and the conceptual metaphor theory by Lakoff and Johnson (2008). Inspired by the ontological view of word meaning variation in Generative Lexicon, some pioneering studies on regular polysemy grouped word senses into broader classes of semantic categories based on WordNet (Buitelaar, 1998;Tomuro, 2001) or linguistic corpus statistics (Boleda et al., 2012b), so that regular polysemy can be defined as a set of words showing the same variation between two (or more) categories (Utt and Padó, 2011). Our framework adopts a similar definition of regular polysemy but instead tackles the problem from a generative perspective." }, { "figure_ref": [], "heading": "Systematicity in NLP", "publication_ref": [ "b13", "b38", "b3", "b27", "b2", "b11", "b20", "b25", "b68", "b34" ], "table_ref": [], "text": "It has been argued for a long time that neural networks are not cognitively plausible models of natural language because they fail to make systematic generalizations (Fodor and Pylyshyn, 1988;Marcus, 1998), and there has been an extensive line of empirical work to evaluate and improve the systematicity of neural networks (Bentivogli et al., 2016;Lake and Baroni, 2018;Bahdanau et al., 2018). Existing NLP studies on systematicity mostly focus on investigating whether words have consistent contributions to the meaning representations of their composed expressions (Ettinger et al., 2018;Goodwin et al., 2020;Keysers et al., 2020;Yanaka et al., 2020).
However, there also exists a wide range of non-compositional, idiosyncratic expressions that can still confuse state-of-the-art large language models like GPT-3 (Li et al., 2022). We shall demonstrate that while many figurative expressions are non-compositional at word-level, their meaning can be modeled as the composition of literal word senses and regular types of semantic relation." }, { "figure_ref": [], "heading": "Figurative language processing", "publication_ref": [ "b51", "b31", "b0", "b53", "b4", "b37", "b8", "b24" ], "table_ref": [], "text": "Most previous work on figurative language focuses on constructing datasets and training models for identifying metaphors in text (Stowe and Palmer, 2018;Leong et al., 2018;Aghazadeh et al., 2022). Several studies built metaphor interpretation systems by first identifying metaphorical usages and then translating them into their literal word senses recorded in WordNet (Su et al., 2017;Bizzoni and Lappin, 2018;Mao et al., 2018). Other work has focused on interpreting figurative language in narratives in context (Chakrabarty et al., 2022;Jhamtani et al., 2021) and observed that many models show very large drops in performance compared to contexts without figurative language." }, { "figure_ref": [], "heading": "Computational framework", "publication_ref": [], "table_ref": [], "text": "In this section, we first introduce the concept of word meta-sense, and formulate regular polysemy as systematic types of meta-sense alternation. Next, we introduce the process of partitioning a polysemous word type into multiple hypothetical tokens signifying its different meta-senses to operationalize the scenario of meaning extension toward novel domains. We then define SWORME as a task of inferring partitioned token pairs denoting systematically related meta-senses to substitute each other in naturalistic context. We finally introduce methods of learning systematicity in meta-sense extension." }, { "figure_ref": [], "heading": "Meta-sense and systematic alternation", "publication_ref": [ "b1", "b43", "b59", "b41" ], "table_ref": [], "text": "It has been suggested that regular polysemy can be indicated by multiple words sharing the same distribution over denoted semantic domains (Apresjan, 1974;Nunberg, 1979). We define a meta-sense as a group of word senses that share certain high-level semantic features, and a pair of meta-senses is called a meta-alternation if there exists a word form that has senses from both meta-sense categories, and we call such a word a lexical instantiation of the meta-alternation. Following the frequency-based definition of systematic polysemy in Utt and Padó (2011), we consider a meta-alternation as systematic if there is a large set of words instantiating the same meta-alternation, and a systematic word meta-sense extension (SWORME) is the case where a word w with existing senses only under meta-sense m is used to express a new sense from m ′ which together with m forms a systematic alternation (m, m ′ ). For example, the two meta-senses ANIMAL and FOOD together form a systematic meta-alternation with metonymic lexical instantiations such as chicken and lamb that denote both animal names and their meat.\nWe use the CoreLex ontology made by Buitelaar (1998) as our meta-sense inventory for English words. CoreLex builds on WordNet (Miller, 1995) and defines a layer of abstraction above WordNet synsets consisting of 39 basic meta-senses, with each meta-sense having a namesake anchor synset in WordNet.
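In code terms, attaching an arbitrary WordNet synset to its nearest anchor synset (the mapping described in the next paragraph) might look like the following sketch using NLTK; the anchor list here is a small, assumed subset of the 39 CoreLex anchors, not the official inventory.

```python
from nltk.corpus import wordnet as wn

# Assumed, truncated subset of CoreLex anchor synsets (illustrative only).
ANCHORS = {
    "anm": wn.synset("animal.n.01"),
    "fod": wn.synset("food.n.01"),
    "psy": wn.synset("psychological_feature.n.01"),
    "loc": wn.synset("location.n.01"),
}

def meta_sense(synset):
    # Choose the anchor with the highest path similarity, i.e. the anchor
    # closest to the input synset on the WordNet taxonomy tree.
    return max(ANCHORS, key=lambda k: synset.path_similarity(ANCHORS[k]) or 0.0)

print(meta_sense(wn.synset("dog.n.01")))  # expected to land on the "anm" anchor
```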
We follow the method introduced in Boleda et al. (2012a) to map each WordNet synset s to a meta-sense whose anchor synset is closest to s on the taxonomy tree, and we can therefore assign a meta-sense label for each usage of a word in a sense-annotated corpus. Since CoreLex only covers noun synsets, we extend meta-sense categorization to verbs and adjectives by assigning each usage of a verb or adjective the same meta-sense label as its syntactic noun object -for instance, both the verb grasp and the adjective big can then have two meta-senses ITEM and INFORMATION, with the former meta-sense being signified in phrases like \"to grasp an item\" and \"a big item\", and the latter being reflected by expressions such as \"to grasp an idea\" and \"a big idea\"." }, { "figure_ref": [ "fig_1" ], "heading": "Meaning-based word type partitioning", "publication_ref": [], "table_ref": [], "text": "We wish to investigate whether language models can flexibly extend word meaning across a systematic meta-alternation (m, m ′ ). We operationalize this idea by training a language model from scratch on a text corpus in which some lexical instantiations w of (m, m ′ ) are partitioned into two new hypothetical tokens: a token t(w, m) replacing all mentions of w in a sense-annotated corpus that exhibit the meta-sense m, and another token t(w, m ′ ) replaces w for sentences in the corpus signifying the meta-sense m ′ , as illustrated in Figure 2(a)-(c). The resulting language model can therefore compute valid meaning representations for usages of w with meta-sense m ′ using the partitioned token t(w, m ′ ) without knowing that w can actually express m ′ ." }, { "figure_ref": [ "fig_1" ], "heading": "SWORME as token substitution", "publication_ref": [], "table_ref": [], "text": "Let (m, m ′ ) be a systematic meta-alternation with a lexical instantiation w, and let U (t(w, m)), U (t(w, m ′ )) be two sets of usage sentences with w replaced by its partitioned tokens t(w, m), t(w, m ′ ) respectively. As illustrated in Figure 2(e), given a usage sentence u ∈ U (t(w, m ′ )), we say that a model extends the meaning of t(w, m) to m ′ under context u if the model infers that t(w, m) is a good substitution to paraphrase t(w, m ′ ) in u. In particular, let T be a list of candidate paraphrase tokens containing t(w, m), we would ask the language model to first compute the contextualized embedding h(t, u) of each t ∈ T in context u (with t(w, m) replaced by t), and choose the best paraphrase token t * that maximizes the semantic similarity between the contextualized embeddings of t and t(w, m ′ ) in u:\nt * = argmin t∈T ||h(t, u) -h(t(w, m ′ ), u)|| 2 (1)\nThe meaning extension of t(w, m) to m ′ is successful if and only if t * = t(w, m)."
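To make the decision rule in Eq. (1) concrete, here is a minimal PyTorch-style sketch; `contextual_embedding` stands in for whatever encoder produces h(t, u) (e.g. the model's last hidden state at the token position) and is an assumed helper, not part of any released code.

```python
import torch

def best_substitute(candidates, target_token, usage, contextual_embedding):
    """Return t* = argmin_{t in T} ||h(t, u) - h(t(w, m'), u)||_2  (Eq. 1).

    candidates: list of partitioned tokens T (containing t(w, m)),
    target_token: the token t(w, m') appearing in the usage sentence u,
    usage: the sentence u,
    contextual_embedding: assumed helper mapping (token, sentence) -> tensor h(t, u).
    """
    h_target = contextual_embedding(target_token, usage)
    dists = []
    for t in candidates:
        u_t = usage.replace(target_token, t)  # substitute the candidate into the context
        dists.append(torch.norm(contextual_embedding(t, u_t) - h_target, p=2))
    return candidates[int(torch.argmin(torch.stack(dists)))]
```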
}, { "figure_ref": [], "heading": "Learning systematic meta-sense extensions", "publication_ref": [ "b29", "b16", "b58", "b40", "b44", "b48", "b22", "b45", "b54", "b70" ], "table_ref": [], "text": "We hypothesize that the language model embedding space optimized on standard pretraining objectives such as masked language modeling may not well capture the regularity underlying metaalternations, and we next propose two methods to incorporate knowledge of systematic meta-sense extension into language models. Our methods are based on the cognitive theory of chaining (Lakoff, 1987) which states that word meaning extends to novel yet semantically similar meta-senses, and we consider two chaining models with different operationalizations of semantic similarity. Analogical chaining. We define a word metasense prototype h(w, m) as the mean contextualized embedding of all mentions of w exhibit-ing meta-sense m in a reference corpus, and z(w, m, m ′ ) = h(w, m) -h(w, m ′ ) be the offset between the prototypes of w's two meta-senses. Let W (m, m ′ ) be the whole set of lexical instantiations of meta-alternation (m, m ′ ), the analogical chaining model draws inspirations from parallelogram models of human and machine analogical inference (Gentner, 1983;Turney, 2006;Mikolov et al., 2013;Peterson et al., 2020) and assumes that the relational representations between the meta-sense prototypes of any two (w 1 , w 2 ) ∈ W (m, m ′ ), operationalized as the offset embeddings z(w 1 , m, m ′ ) = h(w 1 , m) -h(w 1 , m ′ ) and z(w 2 , m, m ′ ) = h(w 2 , m) -h(w 2 , m ′ ), should be similar. We could therefore train a language model to align z(w 1 , m, m ′ ), z(w 2 , m, m ′ ) for a subset of lexical instantiations of each meta-alternation, and then test whether the model can generalize the learned relational regularity to unseen lexical items in the same meta-alternation category. In particular, at each trial, we sample a systematic alternation (m, m ′ ) and a pair of its lexical instantiations (w 1 , w 2 ), and train the language model to minimize the following loss function:\nL analog = - (m,m ′ ,w 1 ,w 2 ) d(w 1 , w 2 , m, m ′ ) (2) d(w 1 , w 2 , m, m ′ ) = ||z(w 1 , m, m ′ ) -z(w 2 , m, m ′ )|| 2 (3)\nAssociative chaining. The associative model follows recent computational implementations of semantic chaining (Ramiro et al., 2018;Habibi et al., 2020;Pinto Jr and Xu, 2021) and predicts that the token t(w, m) with an existing meta-sense m can be extended to express a new meta-sense m ′ if they share similar semantic feature valuesi.e. the semantic distance between their prototypes z(w, m, m ′ ) = h(w, m) -h(w, m ′ ) is small. We use the formulation of prototype-based chaining in (Sun et al., 2021;Yu and Xu, 2023) and train language models on a contrastive learning objective: in each step, we sample a meta-sense triplet M trip = (m, m + , m -), so that (m, m + ) together form a meta-alternation while (m, m -) is not a systematic alternation. 
We then sample a lexical instantiation w of (m, m + ) and another word w ′ with meta-sense m -, and train the language model to minimize the following loss function:\nL assoc = -Σ M trip Σ w,w ′ l(w, w ′ ) (4)\nl(w, w ′ ) = ||h(w, m) -h(w, m + )|| 2 -||h(w, m) -h(w ′ , m -)|| 2 (5)" }, { "figure_ref": [], "heading": "Data", "publication_ref": [ "b70", "b39" ], "table_ref": [ "tab_0" ], "text": "We construct our SWORME usage dataset based on the sense-annotated text corpus made by (Yu and Xu, 2023), which consists of 1.47M sentences taken from the Wikitext-103 corpus (Merity et al., 2016) and contains usages of over 7,500 English polysemous words labeled with their associated WordNet synset IDs. We obtain the CoreLex meta-sense label for each polysemous word usage via the mapping method introduced in section 3.1. For each word, we only keep usages of its top-2 most frequent meta-senses in the corpus, so that there is no overlap between the lexical instantiation sets of any two meta-alternation classes. To decide a set of systematic meta-alternations, we then take all meta-sense pairs (m, m ′ ) with at least 50 lexical instantiations of more than 10 usage examples under each meta-sense (i.e. with at least 20 mentions in total). This gives us a total of 50 meta-sense alternation pairs that cover a variety of widely studied types of regular meaning alternation including logical metonymy, weak metaphor and strong metaphor. For each systematic meta-alternation, we take the top-100 lexical instantiations with the highest numbers of usage examples in the corpus. This pipeline finally yields approximately 880,000 usage sentences for 7,346 English words (3,155 nouns, 2,576 verbs and 1,615 adjectives). See Table 1 for sample entries of the resulting dataset.\n5 Results on SWORME" }, { "figure_ref": [], "heading": "Experimental setup", "publication_ref": [], "table_ref": [], "text": "We split the collection of lexical instantiations W (m, m ′ ) of each meta-alternation (m, m ′ ) into two subsets W train (m, m ′ ), W test (m, m ′ ), and evaluate transformer-based language models on the task of SWORME via three steps: 1) in the pretraining step, the model is trained from scratch via the masked language modeling (MLM) objective on usage sentences of each w ∈ W (m, m ′ ), where the model takes batches of sampled usage sentences with 15% of randomly chosen tokens masked out, and updates its parameter weights to maximize the probability of infilling the correct missing tokens. We replace each w ∈ W test (m, m ′ ) with its partitioned tokens, and increase the vocabulary size of the language model by adding rows to its first embedding layer and its language model head layer accordingly (see the sketch below). For words with multiple tokens, we would replace all of its constituent tokens with a single new token added into the tokenizer vocabulary. We keep the original word form for each w ∈ W train (m, m ′ ) so that the model learns that (m, m ′ ) can be expressed together by some word forms suggesting systematic relations. 2) in the SWORME learning step, the language model is further fine-tuned on one of the two chaining objectives L analog or L assoc over usage sentences of each w ∈ W train (m, m ′ ) in its original word form; 3) in the evaluation step, we test the language model on the lexical substitution task over usage sentences of w ∈ W test (m, m ′ ) with w replaced by its partitioned tokens.
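The vocabulary-extension part of step 1) above can be sketched with the HuggingFace transformers API roughly as follows; the partitioned token names are placeholders and the snippet is illustrative rather than the authors' code.

```python
from transformers import BertTokenizerFast, BertForMaskedLM, BertConfig

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM(BertConfig())  # randomly initialized, trained from scratch

# Hypothetical partitioned tokens for a held-out word, e.g. t(w, m) and t(w, m').
partitioned = ["arrive_at@LOCATION", "arrive_at@PSY_STATE"]
tokenizer.add_tokens(partitioned)      # each new token is encoded as a single unit

# Add matching rows to the input embedding layer and the LM head.
model.resize_token_embeddings(len(tokenizer))
```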
In particular, at each evaluation trial, we present the model with a usage sentence of a hypothetical token t(w, m ′ ), and a list of 100 candidate tokens consisting of a ground-truth substitution t(w, m) and 99 negative alternatives randomly sampled from the set of hypothetical tokens partitioned from other words w ′ ∈ W test (m, m ′ ) 5 . We use mean precision to measure model performance, which is the percentage of cases where the model predicts t(w, m) as the most likely substitution among 100 candidates, so a random baseline would yield a 1% predictive accuracy.\nWe expect a systematic model of SWORME to generalize the meaning of a token t(w, m) to express a new meta-sense m ′ after learning from a small set of examples indicating the regularity between (m, m ′ ). We therefore change the proportion of unpartitioned training words per meta-alternation α = |W train (m,m ′ )| / |W train (m,m ′ ) + W test (m,m ′ )| from 0 to 0.8 with a step size of 0.2, and learn 5 independent SWORME models to examine how their performance changes as the linguistic evidence of systematic meta-sense alternation increases. Further details of experimental setups can be found in Appendix A." }, { "figure_ref": [], "heading": "Models of SWORME", "publication_ref": [ "b10" ], "table_ref": [], "text": "We take a randomly initialized transformer encoder with the same architecture as BERT-base-uncased by Devlin et al. (2019) as our main language model, based on which we implement three models of SWORME: 1) a SWORME-analogy model pretrained on MLM and fine-tuned on SWORME using the analogical chaining objective, 2) a SWORME-associate model pretrained on MLM and fine-tuned using the associative chaining objective, and 3) a SWORME-full model that is fine-tuned on both chaining objectives after being pretrained via MLM. We also include a baseline model BERT-MLM baseline that is only pretrained on MLM but is not fine-tuned on chaining. 5 We experimented with several alternative sampling methods of negative source tokens, such as taking the top-100 partitioned tokens with most similar static embeddings to the target token, but did not observe significant performance change. " }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Results", "publication_ref": [ "b65" ], "table_ref": [ "tab_2" ], "text": "Figure 3 shows model precision with various values of α over 5 independent runs. We observe that all BERT-based models achieve significantly above-chance accuracy and perform better as they are exposed to more lexical instantiations per meta-alternation during pretraining. In particular, even in the case where a pair of systematically related meta-senses are never expressed together by any word form in training data (i.e. α = 0), BERT can still predict that words denoting one of the two semantic categories can be extended to express the other, suggesting that the language model has captured some intrinsic conceptual relatedness between semantic domains during MLM pretraining. Moreover, the superior performance of the analogical chaining models over their associative chaining counterparts suggests that the analogical or relational similarity between semantic domains is more useful than their overall featural proximity for systematic word meaning extensions. We further examine model sensitivity to the conceptual relatedness between existing and extended meta-senses.
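For reference, the Wu-Palmer measure used to quantify this relatedness (defined in the next paragraph) is available directly in NLTK's WordNet interface; the two synsets below are illustrative stand-ins for a pair of meta-sense anchor synsets.

```python
from nltk.corpus import wordnet as wn

# Wu-Palmer similarity between two (assumed) meta-sense anchor synsets,
# e.g. a concrete and a more distant domain; values lie in (0, 1].
animal = wn.synset("animal.n.01")
food = wn.synset("food.n.01")
print(animal.wup_similarity(food))
```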
We quantify the degree of conceptual relatedness as the mean Wu-Palmer similarity (Wu and Palmer, 1994) between the anchored WordNet synsets of two meta-senses, and we then compute the mean model precision of predicting substituted partitioned tokens from each meta-sense alternation pair (averaged over both extensional directions), as shown in Figure 4 for three experiment setups with increasing amounts of training words per meta-alternation (α = [0, 0.2, 0.8]). We found that all models generally make better predictions on meta-alternations that are conceptually more contiguous (e.g., metonymy), and perform less well on examples where the novel meta-sense is conceptually very different from the existing one (e.g., strong metaphor). Moreover, the analogical chaining model exhibits less sensitivity to semantic proximity and generally does better at predicting radical meta-sense extensions than its associative chaining counterpart. Table 2 shows the top-3 meta-alternation classes on which analogical chaining improves model performance most significantly over associative chaining. We found that all these meta-alternations are typical examples of \"metaphorical\" extensions consisting of a concrete meta-sense and a semantically very different abstract meta-sense. These results again suggest that the literal similarity between conventional and novel meaning is insufficient to account for various types of lexical creativity." }, { "figure_ref": [], "heading": "Application to figurative language understanding", "publication_ref": [ "b52", "b32", "b63", "b57", "b23" ], "table_ref": [ "tab_3", "tab_4" ], "text": "We finally demonstrate that learning SWORME can benefit transformer language models on the task of figurative language understanding (FLU).\nData. We evaluate models on two publicly available datasets of natural language inference (NLI) with figurative expressions: the IMPLI dataset by Stowe et al. (2022) contains 25,860 figurative-literal expression pairs, where each literal expression can be either entailed or non-entailed by its paired figurative expression that comes from one of the two classes: metaphors or idioms. The Fig-QA dataset by Liu et al. (2022) consists of 10,256 Winograd-style questions (Levesque et al., 2012), where a model is asked to identify a literal entailment among two candidates for a pair of superficially similar figurative expressions with opposite meaning. The questions in Fig-QA can be categorized into four classes based on the type of knowledge required to answer them: objective knowledge (Obj), visual metaphors (Vis), social understanding (Soc), and cultural metaphors (Cul).\nModels. We test three off-the-shelf pretrained transformer language models on FLU: 1) BERT-base-uncased (with 0.11B parameters, pretrained on 40 GB of text) implemented by HuggingFace (Wolf et al., 2019), 2) GPT2-XL (with 1.5B parameters, pretrained on 800GB of text) implemented also by HuggingFace, and 3) LLaMA (with 7B parameters, pretrained on 1TB of text) implemented by Meta (Touvron et al., 2023). Before FLU evaluation, each language model is fine-tuned on the training set of SWORME with α = 0.8 using either the associative or the analogical chaining objective (usage sentences containing the other 20% word types are left out as the validation set to decide model convergence).
For auto-regressive models (GPT2-XL and LLaMA), the contextualized embeddings of a target word are computed only using its prefix context in each sentence. After SWORME training, each model is fine-tuned on the official training sets of the two FLU datasets, where we add linear classification layers on top of each language model that takes contextualized embeddings of the last [CLS] token of each concatenated premise-hypothesis sentence pair and outputs a binary entailment/non-entailment label. The classification layers and the underlying encoders are then trained together to minimize the standard cross entropy loss between model predicted and true entailment labels. We perform full model fine-tuning for BERT-base-uncased and apply parameter-efficient fine-tuning via LoRA (Hu et al., 2021) for GPT2-XL and LLaMA. We also include a baseline version for each language model that is not fine-tuned on SWORME.\nResults. Table 3 summarizes model classification accuracy on the official evaluation sets of the two FLU datasets. We found that language models fine-tuned on SWORME through analogical chaining yield the best overall classification accuracy, as well as on most sub-categories of figurative language use. Fine-tuning via associative chaining, on the other hand, is much less helpful or can sometimes even be harmful for FLU. We hypothesize that associative chaining pushes usage embeddings of related meta-senses too close to each other, so that some important sentence-level semantic features in the sentence embedding degenerate. These results together suggest that learning relational similarity between systematic word meta-senses can serve as a simple yet effective method to drive language models toward human-level understanding of figurative language.\nTable 4 shows model predictions on sample FLU questions. We found that many idiomatic expressions in IMPLI can also be interpreted as systematic meaning extensions from more \"literal\" meta-senses of common polysemous words (e.g. \"storm\" referring to \"difficult situation\", which signifies a systematic extension from (hostile) NATURAL PHENOMENON to (poor) COGNITIVE STATE), so learning analogical chaining helps the model better distinguish such usages against the adversarial hypothesis with high lexical overlap. We also observe that even the largest LLaMA-7B model still makes errors on metaphorical expressions whose interpretations are obvious to humans (e.g. broad imagination), while learning SWORME through analogical chaining helps correct many of these mistakes. Meanwhile, analogical chaining helps little on understanding ironic expressions such as \"as joyful as a funeral\", which can also be considered as a systematic semantic extension toward the opposite word meaning. Future work can explore how antonymic meaning change can be incorporated into the SWORME framework." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We have presented a framework of systematic word meta-sense extension (SWORME) that allows lexical items to express new semantic domains in a productive yet predictable way. Our results show that feature-based associative similarity only predicts incrementally novel meaning, while analogical similarity provides a general account for both gradual and radical types of word meaning extension. We also show that learning analogical chaining-based meta-sense extension improves transformer language model performance on figurative natural language inference." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b67", "b62" ], "table_ref": [], "text": "Our work has some limitations.
For instance, in the current SWORME framework we train models to predict extensions across systematically alternating meta-sense pairs in both directions, while research in lexical semantic change suggests that such extension sometimes only happens uni-directionally (Xu et al., 2017;Winter and Srinivasan, 2022) -for example, it is quite natural to extend word meaning from the ANIMAL domain to the MEAT domain (e.g. to raise chicken → grilled chicken) but much less plausible for the opposite direction (e.g. grilled beef → to raise beef). A more realistic approach would be to sort all meta-senses of a word chronologically by their historical time of emergence, and only ask the model to predict the newer meta-sense based on the older one. However, we found it infeasible to determine accurate timestamps of the meta-senses or their associated WordNet senses at a comprehensive scale, and we believe that learning to make some unattested types of meta-sense extension would be beneficial for language models to understand idiosyncratic word uses that are usually under-represented in training corpora." }, { "figure_ref": [], "heading": "A Details of SWORME experiments", "publication_ref": [ "b64", "b26" ], "table_ref": [], "text": "We use the BERT-base-uncased configuration provided by HuggingFace (Wolf et al., 2020) to initialize all BERT-based SWORME models (the BERT-MLM baseline and two chaining-based SWORME models).\nDuring MLM pretraining, we randomly mask 15% of tokens in each sentence, and train each model on predicting the masked tokens. We add all partitioned tokens as special tokens into the vocabulary of the BERT tokenizer, so that each pseudo-token is encoded as a whole in the input sequence. Learning is performed using the Adam optimizer (Kingma and Ba, 2015), with a learning rate of 5e-5 and a batch size of 128, for 50 epochs (after which all models achieved their highest evaluation accuracy).\nDuring SWORME training, we kept 10% of usage sentences in the SWORME training set for validation, and fine-tune the associative and analogical chaining models on the remaining 90% of sentences via their corresponding objective functions in Eq. (4) and Eq. (2), respectively, using Adam, with a batch size of 32 and a learning rate of 2e-5. The associative chaining model is trained for 8 epochs and the analogical chaining model is trained for 24 epochs. All experiments are run on machines with an NVIDIA Tesla A100 GPU." }, { "figure_ref": [], "heading": "B CoreLex meta-sense and systematic meta-alternations", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "See Table 5 and Table 6." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The author would like to thank Yang Xu, Gemma Boleda and anonymous OpenReview reviewers for their helpful suggestions on the manuscript." } ]
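As a compact companion to the training details in Appendix A, the two chaining objectives of Section 3.4 (Eqs. 2-5) can be sketched in PyTorch as below. The prototype embeddings h(w, m) are assumed to be precomputed mean contextualized embeddings, and the sketch treats the summed distances as the quantities to minimize, matching the stated goal of aligning offsets and pulling systematically related meta-senses together; it is an illustration, not the authors' implementation.

```python
import torch

def analogical_loss(h_w1_m, h_w1_mp, h_w2_m, h_w2_mp):
    # Offsets z(w, m, m') = h(w, m) - h(w, m') for two instantiations of the
    # same meta-alternation; aligning them captures the parallelogram analogy
    # behind Eqs. (2)-(3).
    z1 = h_w1_m - h_w1_mp
    z2 = h_w2_m - h_w2_mp
    return torch.norm(z1 - z2, p=2)

def associative_loss(h_w_m, h_w_mplus, h_wprime_mminus):
    # Triplet form of Eqs. (4)-(5): pull the prototypes of a systematic
    # alternation (m, m+) together and push an unrelated meta-sense m-
    # (taken from another word w') away.
    pos = torch.norm(h_w_m - h_w_mplus, p=2)
    neg = torch.norm(h_w_m - h_wprime_mminus, p=2)
    return pos - neg
```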
The meaning of polysemous words often varies in a highly productive yet predictable way. Generalizing the regularity between conventional senses to derive novel word meaning is crucial for automated processing of non-literal language uses such as figurative expressions. We introduce a novel task called systematic word meta-sense extension (SWORME) to test and improve language models' ability to extend word meaning to denote new semantic domains (also called meta-senses) that bear regular semantic relations with existing senses. We found that language models prefer incremental lexical semantic change toward conceptually similar meta-senses such as logical metonymy, and are much worse at predicting highly nonliteral meaning extensions such as metaphors. We propose a novel analogy-based method of word meaning extension, and show that it effectively improves language model systematicity in making both gradual and radical types of meta-sense extension. We further demonstrate that learning systematic meta-sense extensions benefits language models on multiple benchmarks of figurative language understanding.
Systematic word meta-sense extension
[ { "figure_caption": "Figure 1 :1Figure1: Illustration of systematic word meta-sense extension. Given two conceptually related semantic domains (e.g. ITEM and INFORMATION) and usages of polysemous words describing both domains (e.g. the verbs get, grasp, sell that can take both ITEM class and INFORMATION class nouns as objects), we wish to extend the meaning of another word (e.g. steal with its literal sense only) from denoting one of the two domains to denoting both.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Illustration of the SWORME framework. Given a sense-annotated text corpus, we first decide a set of systematic meta-alternations (e.g. the INFORMATION/ITEM and the LOCATION/PSYCHOLOGICAL-STATE alternations in (b)) with sufficient lexical instantiations denoting both meta-senss (e.g. arrive at with both m = LOCATION type objects such as school and m ′ = PSYCHOLOGICAL-STATE type objects such as conclusion).We then partition each lexical instantiation by replacing it with two hypothetical tokens -e.g. the nonce words t(w, m) = galumph and t(w, m ′ ) = chortle in (c) replace mentions of arrive at exhibiting the LOCATION and the PSYCHOLOGICAL-STATE meta-senses respectively, and their systematic relation is indicated by their matching background shape figures. A language model is then pretrained from scratch on the replaced corpus and is then evaluated on the token substitution task, where the model is asked to choose the correct partitioned token galumph in (e) to paraphrase its \"sibling\" token chortle.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Average model precision on SWORME with increasing amount of of training evidence for each metasense alternation. Error bars show the standard deviations over five independent runs.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Meta-sense semantic similarity vs. mean predictive accuracy of models trained on SWORME via associative and analogical chaining objectives under zero-shot (α = 0), few-shot (α = 0.2) and many-shot (α = 0.8) setups. When α = 0 all models are equivalent to BERT-MLM so only one set of data points are plotted. Pearson correlations ρ between accuracy and semantic similarity are shown in legends (p < 10 -35 for all cases).", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig-QA dataset by Liu et al. (2022) consists of 10,256", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Sample entries of the SWORME dataset. 
Target words (lexical instantiations of meta-alternations) in usage sentences are shown in bold italic, and noun objects that decide meta-sense labels of verb and adjective lexical instantiations are underlined.", "figure_data": "WordPOSUsageCoreLex meta-senseSystematic meta-sense alternationchickennounThe Scots had a tradition of deep frying chicken in fat, unlike their English counterparts who baked or boiled chicken.FOODANIMAL -FOODarrive (at) verbthen a rising and expanding parcel of air will arrive at the new altitude at a lower temperature than the surrounding airDEFINITE QUANTITYLOCATION -DEFINITE QUANTITYcoldadjectiveAlthough he shows a cold attitude, she realizes she can't help but love him.PSYCH.FEATURESUBSTANCE -PSYCH.FEATURE", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Model classification accuracy on two figurative language understanding datasets.", "figure_data": "ModelIMPLIFig-QAMetaphors Idioms AllObjVisSocCul AllBERT-base80.1569.72 71.18 86.50 89.49 82.11 86.32 86.05+ assoc.chaining78.6072.33 73.29 86.41 90.19 80.87 79.19 85.51+ analog.chaining85.0474.98 76.52 86.70 96.24 80.08 86.76 87.84GPT2-XL77.5661.45 61.99 73.72 72.97 72.23 76.10 73.90+ assoc.chaining77.3164.72 65.05 72.18 74.01 71.16 75.34 73.82+ analog.chaining79.9666.20 68.48 73.55 78.96 71.12 80.60 77.03LLaMA-7B87.8584.93 85.21 86.99 90.94 87.02 85.17 89.10+ assoc.chaining88.9580.01 80.97 83.51 83.27 85.50 80.44 83.39+ analog.chaining91.6287.90 88.11 89.73 93.29 86.64 84.08 89.74DatasetPremiseHypothesisTrue LabelModel predicted entailment probabilityIMPLIHow have you weathered the storm?How have you calmed the storm?non-entailmentBERT: 0.76 (✗) BERT+analog.chain.: 0.30 (✓)IMPLITime to come out from under a cloud and enjoy yourself.Time to come out from under a roof and enjoy yourself.non-entailmentGPT2: 0.68 (✗) GPT2+analog.chain.: 0.41 (✓)Fig-QAHis imagination is as broad as the sky.He has a vivid imagination.entailmentLLaMA: 0.39 (✗) LLaMA+analog.chain.: 0.53 (✓)Fig-QAThe place was as joyful as a funeral.The place was joyful.non-entailmentLLaMA: 0.57 (✗) LLaMA+analog.chain.: 0.55 (✗)", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Example FLU questions and model outputs. Entailment labels and model predicted entailment probabilities are marked in blue, and non-entailment labels/probabilities are marked in red.", "figure_data": "using its prefix context in each sentence. AfterSWORME training, each model is fine-tuned onthe official training sets of the two FLU datasets,where we add linear classification layers on topof each language model that takes contextualizedembeddings of the last [CLS] token of each con-catenated premise-hypothesis sentence pair andoutputs a binary entailment/non-entailment label.", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "and Table6. 
CoreLex's meta-senses (names in lowercase) with their corresponding WordNet anchor synsets (names in uppercase).", "figure_data": "abs ABSTRACTIONentENTITYlocLOCATIONprt PARTactACTevtEVENTlogGEO.LOCATIONpsy PSYCHOL.FEATUREagtAGENTfodFOODmea MEASUREqud DEFINITE QUANTITYanm ANIMALfrm FORMmic MICROORGANISM qui INDEFINITE QUANTITYartARTIFACTgrbBIOLOG.GROUPnatNATURAL BODYrelRELATIONatrATTRIBUTEgrpGROUPINGphm PHENOMENONspc SPACEcelCELLgrsSOCIAL GROUPpho PHYSICAL OBJECT sta STATEchm CHEMICALhum HUMANpltPLANTsub SUBSTANCEcom COMMUNICATION lfrLIVING BEINGpos POSSESSIONtme TIMEcon CONSEQUENCElme LINEAR MEASURE proPROCESS", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Top-50 systematic CoreLex meta alternations with highest corpus frequency.", "figure_data": "", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" } ]
Lei Yu
[ { "authors": "Ehsan Aghazadeh; Mohsen Fayyaz; Yadollah Yaghoobzadeh", "journal": "", "ref_id": "b0", "title": "Metaphors in pre-trained language models: Probing and generalization across datasets and languages", "year": "2022" }, { "authors": " Ju D Apresjan", "journal": "Linguistics", "ref_id": "b1", "title": "Regular polysemy", "year": "1974" }, { "authors": "Dzmitry Bahdanau; Shikhar Murty; Michael Noukhovitch; Thien Huu Nguyen; Harm De Vries; Aaron Courville", "journal": "", "ref_id": "b2", "title": "Systematic generalization: What is required and can it be learned?", "year": "2018" }, { "authors": "Luisa Bentivogli; Raffaella Bernardi; Marco Marelli; Stefano Menini; Marco Baroni; Roberto Zamparelli", "journal": "Language Resources and Evaluation", "ref_id": "b3", "title": "Sick through the semeval glasses. lesson learned from the evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment", "year": "2016" }, { "authors": "Yuri Bizzoni; Shalom Lappin", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Predicting human metaphor paraphrase judgments with deep neural networks", "year": "2018" }, { "authors": "Gemma Boleda; Sebastian Padó; Jason Utt", "journal": "", "ref_id": "b5", "title": "Regular polysemy: A distributional model", "year": "2012" }, { "authors": "Gemma Boleda; Sabine Schulte Im Walde; Toni Badia", "journal": "Computational Linguistics", "ref_id": "b6", "title": "Modeling regular polysemy: A study on the semantic classification of catalan adjectives", "year": "2012" }, { "authors": "Paul Buitelaar", "journal": "", "ref_id": "b7", "title": "Corelex: An ontology of systematic polysemous classes", "year": "1998" }, { "authors": "Tuhin Chakrabarty; Arkadiy Saakyan; Debanjan Ghosh; Smaranda Muresan", "journal": "", "ref_id": "b8", "title": "Flute: Figurative language understanding through textual explanations", "year": "2022" }, { "authors": "Ann Copestake; Ted Briscoe", "journal": "Journal of semantics", "ref_id": "b9", "title": "Semi-productive polysemy and sense extension", "year": "1995" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Allyson Ettinger; Ahmed Elgohary; Colin Phillips; Philip Resnik", "journal": "", "ref_id": "b11", "title": "Assessing composition in sentence vector representations", "year": "2018" }, { "authors": "Brian Falkenhainer; Kenneth D Forbus; Dedre Gentner", "journal": "Artificial intelligence", "ref_id": "b12", "title": "The structure-mapping engine: Algorithm and examples", "year": "1989" }, { "authors": "Jerry A Fodor; Zenon W Pylyshyn", "journal": "Cognition", "ref_id": "b13", "title": "Connectionism and cognitive architecture: A critical analysis", "year": "1988" }, { "authors": "Olivia Fugikawa; Oliver Hayman; Raymond Liu; Lei Yu; Thomas Brochhagen; Yang Xu", "journal": "Frontiers in Communication", "ref_id": "b14", "title": "A computational analysis of crosslinguistic regularity in semantic change", "year": "2023" }, { "authors": "Aina Garí; Soler ; Marianna Apidianaki", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b15", "title": "Let's play mono-poly: Bert can reveal words' polysemy level and partitionability into senses", "year": "2021" }, { "authors": "Dedre Gentner", 
"journal": "Cognitive science", "ref_id": "b16", "title": "Structure-mapping: A theoretical framework for analogy", "year": "1983" }, { "authors": "Dedre Gentner; Brian Bowdle", "journal": "The Cambridge handbook of metaphor and thought", "ref_id": "b17", "title": "Metaphor as structure-mapping", "year": "2008" }, { "authors": "Dedre Gentner; Brian Bowdle; Phillip Wolff; Consuelo Boronat", "journal": "", "ref_id": "b18", "title": "Metaphor is like analogy. The analogical mind: Perspectives from cognitive science", "year": "2001" }, { "authors": "Dedre Gentner; Cecile Toupin", "journal": "Cognitive science", "ref_id": "b19", "title": "Systematicity and surface similarity in the development of analogy", "year": "1986" }, { "authors": "Emily Goodwin; Koustuv Sinha; Timothy O' Donnell", "journal": "", "ref_id": "b20", "title": "Probing linguistic systematicity", "year": "2020" }, { "authors": "Karan Grewal; Yang Xu", "journal": "Computational approaches to semantic change", "ref_id": "b21", "title": "Chaining algorithms and historical adjective extension", "year": "2021" }, { "authors": "Ahmad Amir; Charles Habibi; Yang Kemp; Xu", "journal": "Cognition", "ref_id": "b22", "title": "Chaining and the growth of linguistic categories", "year": "2020" }, { "authors": "J Edward; Phillip Hu; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b23", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "Harsh Jhamtani; Varun Gangal; Eduard Hovy; Taylor Berg-Kirkpatrick", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Investigating robustness of dialog models to popular figurative language constructs", "year": "2021" }, { "authors": "Daniel Keysers; Nathanael Schärli; Nathan Scales; Hylke Buisman; Daniel Furrer; Sergii Kashubin; Nikola Momchev; Danila Sinopalnikov; Lukasz Stafiniak; Tibor Tihon", "journal": "", "ref_id": "b25", "title": "Measuring compositional generalization: A comprehensive method on realistic data", "year": "2020" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b26", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "Brenden Lake; Marco Baroni", "journal": "", "ref_id": "b27", "title": "Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks", "year": "2018" }, { "authors": " Pmlr", "journal": "", "ref_id": "b28", "title": "", "year": "" }, { "authors": "George Lakoff", "journal": "University of Chicago press", "ref_id": "b29", "title": "Women, fire, and dangerous things: What categories reveal about the mind", "year": "1987" }, { "authors": "George Lakoff; Mark Johnson", "journal": "University of Chicago press", "ref_id": "b30", "title": "Metaphors we live by", "year": "2008" }, { "authors": "Chee Wee; ( Ben; ) Leong; Beata Beigman Klebanov; Ekaterina Shutova", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "A report on the 2018 VUA metaphor detection shared task", "year": "2018" }, { "authors": "Hector Levesque; Ernest Davis; Leora Morgenstern", "journal": "", "ref_id": "b32", "title": "The winograd schema challenge", "year": "2012" }, { "authors": "Omer Levy; Yoav Goldberg; Ido Dagan", "journal": "Transactions of the association for computational linguistics", "ref_id": "b33", "title": "Improving distributional similarity with lessons learned from word embeddings", "year": "2015" }, { "authors": "Siyan Li; Riley 
Carlson; Christopher Potts", "journal": "", "ref_id": "b34", "title": "Systematicity in gpt-3's interpretation of novel english noun compounds", "year": "2022" }, { "authors": "Emmy Liu; Chenxuan Cui; Kenneth Zheng; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Testing the ability of language models to interpret figurative language", "year": "2022" }, { "authors": "Steven A Barbara C Malt; Silvia Sloman; Meiyi Gennari; Yuan Shi; Wang", "journal": "Journal of Memory and Language", "ref_id": "b36", "title": "Knowing versus naming: Similarity and the linguistic categorization of artifacts", "year": "1999" }, { "authors": "Rui Mao; Chenghua Lin; Frank Guerin", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Word embedding and WordNet based metaphor identification and interpretation", "year": "2018" }, { "authors": "Marcus Gary", "journal": "Cognitive psychology", "ref_id": "b38", "title": "Rethinking eliminative connectionism", "year": "1998" }, { "authors": "Stephen Merity; Caiming Xiong; James Bradbury; Richard Socher", "journal": "", "ref_id": "b39", "title": "Pointer sentinel mixture models", "year": "2016" }, { "authors": "Tomas Mikolov; Kai Chen; Greg Corrado; Jeffrey Dean", "journal": "", "ref_id": "b40", "title": "Efficient estimation of word representations in vector space", "year": "2013" }, { "authors": "George A Miller", "journal": "Communications of the ACM", "ref_id": "b41", "title": "Wordnet: a lexical database for english", "year": "1995" }, { "authors": "Ke Ni; William Yang; Wang ", "journal": "", "ref_id": "b42", "title": "Learning to explain non-standard english words and phrases", "year": "2017" }, { "authors": "Geoffrey Nunberg", "journal": "Linguistics and philosophy", "ref_id": "b43", "title": "The non-uniqueness of semantic solutions: Polysemy", "year": "1979" }, { "authors": "Dawn Joshua C Peterson; Thomas L Chen; Griffiths", "journal": "Cognition", "ref_id": "b44", "title": "Parallelograms revisited: Exploring the limitations of vector space models for simple analogies", "year": "2020" }, { "authors": "Renato Ferreira; Pinto ; Yang Xu", "journal": "Cognition", "ref_id": "b45", "title": "A computational theory of child overextension", "year": "2021" }, { "authors": "James Pustejovsky", "journal": "MIT press", "ref_id": "b46", "title": "The generative lexicon", "year": "1998" }, { "authors": "James Pustejovsky; Anna Rumshisky", "journal": "", "ref_id": "b47", "title": "Mechanisms of sense extension in verbs", "year": "2010" }, { "authors": "Christian Ramiro; Mahesh Srinivasan; Barbara C Malt; Yang Xu", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b48", "title": "Algorithms in the historical emergence of word senses", "year": "2018" }, { "authors": "Emily Reif; Ann Yuan; Martin Wattenberg; Fernanda B Viegas; Andy Coenen; Adam Pearce; Been Kim", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b49", "title": "Visualizing and measuring the geometry of bert", "year": "2019" }, { "authors": "Joseph Reisinger; Raymond J Mooney", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "Multiprototype vector-space models of word meaning", "year": "2010" }, { "authors": "Kevin Stowe; Martha Palmer", "journal": "Association for Computational Linguistics", "ref_id": "b51", "title": "Leveraging syntactic constructions for metaphor identification", "year": "2018" }, { "authors": "Kevin Stowe; Prasetya Utama; Iryna 
Gurevych", "journal": "", "ref_id": "b52", "title": "Impli: Investigating nli models' performance on figurative language", "year": "2022" }, { "authors": "Chang Su; Shuman Huang; Yijiang Chen", "journal": "Neurocomputing", "ref_id": "b53", "title": "Automatic detection and interpretation of nominal metaphor based on the theory of meaning", "year": "2017" }, { "authors": "Zhewei Sun; Richard Zemel; Yang Xu", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b54", "title": "A computational framework for slang generation", "year": "2021" }, { "authors": "Zhewei Sun; Richard Zemel; Yang Xu", "journal": "", "ref_id": "b55", "title": "Semantically informed slang interpretation", "year": "2022" }, { "authors": "Noriko Tomuro", "journal": "", "ref_id": "b56", "title": "Tree-cut and a lexicon based on systematic polysemy", "year": "2001" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b57", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "D Peter; Turney", "journal": "Computational Linguistics", "ref_id": "b58", "title": "Similarity of semantic relations", "year": "2006" }, { "authors": "Jason Utt; Sebastian Padó", "journal": "", "ref_id": "b59", "title": "Ontology-based distinction between polysemy and homonymy", "year": "2011" }, { "authors": "Ivan Vulić; Maria Edoardo; Robert Ponti; Goran Litschko; Anna Glavaš; Korhonen", "journal": "Association for Computational Linguistics", "ref_id": "b60", "title": "Probing pretrained language models for lexical semantics", "year": "2020" }, { "authors": "Gregor Wiedemann; Steffen Remus; Avi Chawla; Chris Biemann", "journal": "", "ref_id": "b61", "title": "Does bert make any sense? interpretable word sense disambiguation with contextualized embeddings", "year": "2019" }, { "authors": "Bodo Winter; Mahesh Srinivasan", "journal": "Metaphor and Symbol", "ref_id": "b62", "title": "Why is semantic change asymmetric? 
the role of concreteness and word frequency and metaphor and metonymy", "year": "2022" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz", "journal": "", "ref_id": "b63", "title": "Huggingface's transformers: State-ofthe-art natural language processing", "year": "2019" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz", "journal": "", "ref_id": "b64", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Zhibiao Wu; Martha Palmer", "journal": "", "ref_id": "b65", "title": "Verbs semantics and lexical selection", "year": "1994" }, { "authors": "Yang Xu; Charles Kemp", "journal": "", "ref_id": "b66", "title": "A computational evaluation of two laws of semantic change", "year": "2015" }, { "authors": "Yang Xu; Barbara C Malt; Mahesh Srinivasan", "journal": "Cognitive psychology", "ref_id": "b67", "title": "Evolution of word meanings through metaphorical mapping: Systematicity over the past millennium", "year": "2017" }, { "authors": "Hitomi Yanaka; Koji Mineshima; Daisuke Bekki; Kentaro Inui", "journal": "", "ref_id": "b68", "title": "Do neural models learn systematicity of monotonicity inference in natural language", "year": "2020" }, { "authors": "Lei Yu; Yang Xu", "journal": "", "ref_id": "b69", "title": "Predicting emergent linguistic compositions through time: Syntactic frame extension via multimodal chaining", "year": "2021" }, { "authors": "Lei Yu; Yang Xu", "journal": "Association for Computational Linguistics", "ref_id": "b70", "title": "Word sense extension", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 77.77, 480.35, 212.1, 14.35 ], "formula_id": "formula_0", "formula_text": "t * = argmin t∈T ||h(t, u) -h(t(w, m ′ ), u)|| 2 (1)" }, { "formula_coordinates": [ 4, 313.42, 644.54, 211.72, 54.87 ], "formula_id": "formula_1", "formula_text": "L analog = - (m,m ′ ,w 1 ,w 2 ) d(w 1 , w 2 , m, m ′ ) (2) d(w 1 , w 2 , m, m ′ ) = ||z(w 1 , m, m ′ ) -z(w 2 , m, m ′ )|| 2 (3)" }, { "formula_coordinates": [ 5, 70.87, 416.92, 219, 51.64 ], "formula_id": "formula_2", "formula_text": "L assoc = - M trip w,w ′ l(w, w ′ ) (4) l(w, w ′ ) = ||h(w, m) -h(w, m + )|| 2 -||h(w, m) -h(w ′ , m -)|| 2" }, { "formula_coordinates": [ 6, 70.87, 373.11, 156.27, 14.63 ], "formula_id": "formula_3", "formula_text": "alternation α = |W train (m,m ′ )|" } ]
10.1117/12.230388
2023-11-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b4", "b5", "b6", "b7", "b2", "b8", "b9" ], "table_ref": [], "text": "Computer vision based depth estimation has many applications such as augmented and virtual reality (AR and VR) [1], autonomous robotics [2], background subtraction, and changing the focus of an image after it was taken [3][4] [5]. Techniques such as structure from motion, structure from shading, shape from structured light, shape from defocus blur, depth from focus, multi-view stereo and Time-of-Flight (ToF) sensors can be used to estimate the depth of a scene [6,7]. Active methods such as structured light and ToF sensors need specialized hardware and are power hungry. Stereo techniques measure depth by relying on multiple cameras to take several pictures of the scene. Techniques such as structure-from-motion and depth-from-focus [8] require several images of a static scene to estimate it's structure. Also, the assumption about static scene does not hold when the scene is changing over time. Furthermore, structure from motion can only recover the depth of a scene up to a scale and cannot measure the absolute depth.\nSingle image defocus blur based depth estimation is a fairly under-explored topic in the literature [3] which utilizes the phenomena that certain objects in a photo appear more blurred than the others depending on the distance to those objects from the camera. Therefore, measuring the amount of defocus blur at a point of an image can provide a way to recover the depth to the respective point in the real 3D world. As we will show in Section 3, this method is effective for close range depth measurements (typically under 2 to 3 meters). This makes defocus blur-based depth estimation techniques ideal for measuring depth under many situations including in microscopic scenes [9,10] and measuring depth to hands and nearby objects for a wearable camera.\nSingle image depth from defocus blur methods are not robust to changes of cameras. As we will show in our experiments, the performance of existing methods degrades significantly when they are trained on images taken from one camera and evaluated on images taken from another camera (even when they both image the same scene). This is due to the fact that different cameras will produce defocus blurs with different characteristics.\nIn this paper we describe a novel technique to estimate depth from defocus blur in a camera-independent manner. We exploit the optical physics equations that describe the relationships between various camera parameters and the amount of defocus blur. Our method can be used to train a deep learning model in a supervised manner on a dataset containing defocus blurred images taken from a single or multiple camera/s and respective ground truth depth maps. This trained model can be used to predict depth using images taken with a wide range of other cameras with a slight modification to the model (depending on the particular camera parameters of the new camera) and without the need for retraining. We also describe a novel method to estimate the camera parameters of a given camera with an easy to use calibration process. 
This will be particularly useful when the parameters for a certain camera cannot be obtained (certain manufacturers do not provide all the parameters in the data-sheets and/or the values are only provided as approximations).\nOur main contributions are as follows:\n• We show that depth from defocus technique can measure depth more accurately than the state-of-the-art techniques. • We show that existing depth from defocus methods are not robust to changes of cameras the images are acquired with. • This paper is the first to device a relationship between defocus blur and the blur created due to pixel binning.\n• We present a novel depth from defocus blur method which is robust to images taken from a wide range of cameras, given camera parameters that describe a particular camera. • We present a novel calibration technique to estimate the camera parameters based on several images taken from a given camera. • Our methods have less estimation error than the stateof-the-art when performing depth from blur in a cameraindependent manner. The error reduction is around 3cm under the DDFF12 dataset, 7cm under the NYU depth v2 dataset and around 5cm for the synthetic dataset we created.\n2 Related Work" }, { "figure_ref": [], "heading": "Depth from RGB images", "publication_ref": [ "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b17", "b18", "b5", "b19" ], "table_ref": [], "text": "Estimating depth maps from images can use various characteristics of images such as semantics, stereo matching, blur or differences in blur over a stack of images. [11][12][13]. Although stereo matching based and blur based depth measurements are seen as completely separate methods, Schechner and Kiryati [14] showed that both of them can be understood under the same mathematical formulation. A depth map can be estimated for the given image based on the domain knowledge on the structure of the objects in the image embedded in the estimation model [15][16][17][18][19]. Methods such as ZoeDepth [18] and VPD [19] have pushed the state-of-the art to be very accurate in measuring depth. However, a problem with these methods is that the estimated depth is only an approximation based on the structure of the objects. This makes these models sensitive to domain changes [6]. Also, techniques that can recover 3D structure from RGB images such as structure from motion can only estimate relative depths in a given scene [20]." }, { "figure_ref": [], "heading": "Depth from defocus blur", "publication_ref": [ "b5", "b20", "b21", "b2", "b7", "b22", "b3", "b23", "b3", "b5", "b24", "b25", "b26", "b3", "b24", "b27", "b5", "b28", "b29", "b5", "b26", "b30", "b8", "b31", "b32", "b33", "b23" ], "table_ref": [], "text": "The amount of defocus blur can be used to measure the depth of a scene from images. Since these methods rely more on blur which is a local feature of the image to estimate the depth, they are more robust to domain changes [6]. Shape/depth from focus methods aim to measure depth to a scene given a stack of images of different focus levels. A measure of the sharpness of each pixel of the images over the stack is calculated. The depth of a point is taken as the focus distance with the sharpest pixel. Various methods such as the Laplacian or sum-modified-Laplacian, gray-level variance and gradient magnitude squared were traditionally used to measure the sharpness [21,22]. Modern methods utilize deep learning to automatically learn the sharpness measure from focal stacks [3,8,23]. 
But deep learning based techniques require a large amount of data to train [4].\nDepth from focus methods that use a focal stack of the same scene has several drawbacks. First they assume the scene is static during the time needed to acquire several images with different focus (focal stack). Second, an accurate registration of the images in the focal stack is needed due to focal breathing (slight change of the filed-of-view of the camera due to changes of focal distance) or small movement of the camera and/or the scene [24]. Therefore, more investigation on depth estimation with a single image is necessary. Depth from defocus/blur rely on measuring the exact blur on a single image to estimate the depth and cannot use the relative variation of sharpness/blurriness of a focal stack. Due to this, depth from blur can be used to estimate the depth from a single blurred image [4,6,[25][26][27]. Certain works are also concerned about removing the blur at the same time as estimating depth [4,25,28].\nEstimating depth from the amount of the blur of a single image is ill-posed. This is due to having two possible depth values for a given blur [6]. Researchers have take two different paths to solve this. One solution is hardware based. One example for this is changing the shape of the aperture (coded aperture) of the camera to a shape that can help avoid the ambiguity. Ikoma et al. [29] used deep learning to learn the optimal shape for an aperture and came up with a prototype camera to measure depth from blur. Another example is to use a lightfield camera which takes many pictures with closely spaced micro lenses placed inside the camera [30]. The second approach is to use the domain knowledge (e.g. the shape and sizes of objects in the scene) of the scene to remove the ambiguity. Our research falls into this category. Gur and Wolf created a model which can generate the depth map of a scene given the blurred image and the All-in-Focus (AiF) image [6]. They also makes certain assumptions about the shape of the blur circle. Usage of both AiF image and blurred images in making prediction makes this model less useful in certain situations because both of these images are not usually available from regular cameras. Many methods in the literature first estimate the blur of a given blurred image and secondly estimate the depth from the blur. Physical consistency between the estimated blur and depth has been used as a form of domain knowledge by Zhang et al. [27]. Lu et al. create two separate models to estimate the blur and the amount of focus (sharpness) of a given blurred image. They claim that this method provides better estimates of depth due to the capability of estimating both blur and the sharpness of an image [31]. But since sharpness is just the inverse of blur, a question remains that by estimating blur aren't we also estimating the (inverse of) sharpness. Ban et al. [9] extend depth from blur to microscopic images. Certain works focus just on blur estimation from a blurred image. Tai and Brown use hand crafted features of an image to estimate the blur map [32] while Zhuo and Sim assume that the edges in the images are step edges [33]. Cun et al. estimated the blur of a given image to separate the blurred and focused areas from a blurred image [34]. While all of the above methods assume that the blur is a single parameter (e.g. Gaussian or disk shape) Liu et al. expand our understanding by introducing a two parameter model. This model is also helpful in removing errors due to pattern edges [24]." 
}, { "figure_ref": [], "heading": "Camera dependency of depth from blur", "publication_ref": [ "b25", "b5", "b5", "b3" ], "table_ref": [], "text": "Certain characteristics of blur depend on the camera that is being used to acquire the images. The blurred image of a point has the same shape (but scaled) as the lens aperture. For example, if the aperture is circular, the blur of a point is also circular theoretically. But in practice this is a Gaussian due to diffraction [26]. In this research we assume all the apertures are circular in shape.\nThe size of the blur of a point depends on many other parameters of the camera. The f-number, focal length, pixel size of the image sensor (if the camera is digital), camera output scale and focal distance all affect the size of the blur [6] as shown in section 3. Depth from defocus blur techniques estimate the blur of a given image as an intermediate step when estimating the depth. This makes these models sensitive to the variations due to camera parameters. We show evidence supporting this in our evaluation section. But no papers in the literature address this problem. Gur and Wolf [6] use camera parameters in their model to recreate a blurred image. It was not used directly to predict depth and they do not test their model under different cameras. Although Maximov et.al [4] evaluate their model on several simulated datasets generated with different camera parameters they do not explicitly address or propose a solution to this problem." }, { "figure_ref": [], "heading": "Approach", "publication_ref": [], "table_ref": [], "text": "This section starts with a theoretical introduction to estimating depth from defocus blur and establishes the challenges faced by this technique and our solution." }, { "figure_ref": [ "fig_0" ], "heading": "Theory and Techniques", "publication_ref": [ "b34" ], "table_ref": [], "text": "When imaging a scene with a camera, the points that are not in focus appear blurred and the points that are perfectly in focused appear sharp in the image. This phenomenon is called defocus blurring. To illustrate this, in the left side of the Figure 1, the point 𝑃2 that is in focus appears as a point in the image plane of the camera. A point 𝑃1 that is not in focus appears as a blur in the image plane where the pixel intensity is the highest at the center and gradually falls of as we move away. This can be modelled with a 2D Gaussian function as denoted in equation 1 with 𝜎 as the standard deviation, 𝑥 and 𝑦 are image coordinates.\n𝐺 (𝑥, 𝑦) = 1 2𝜋𝜎 𝑒 -1 2 𝑥 2 +𝑦 2 𝜎 2(1)\n𝜎 depends on the distance to the point 𝑃1 from the camera center and several other camera dependent factors as shown in equation 2. \n|𝑠 1 -𝑠 2 | 𝑠 2 • 1 (𝑠 1 -𝑓 ) • 𝑓 2 𝑁 • 1 𝑝 • 𝑜𝑢𝑡 𝑝𝑖𝑥 𝑠𝑒𝑛𝑠𝑜𝑟 𝑝𝑖𝑥 = 𝑘 𝑟 • 𝜎 (2)\nIn equation 2, 𝑠 2 is the depth (distnace to the )𝑓 is the focal length of the camera, 𝑁 is the f-number, 𝑝 is the pixel width, 𝑜𝑢𝑡 𝑝𝑖𝑥 is the number of pixels in the final image, 𝑠𝑒𝑛𝑠𝑜𝑟 𝑝𝑖𝑥 is the number of pixels in the image sensor, 𝑠 1 is the focus distance, 𝑘 𝑟 is a constant that depends on the camera [35]. Many cameras allow user to change 𝑠 1 thereby focusing the camera at different distances. we define a camera dependent parameter 𝑘 𝑐𝑎𝑚 as shown in equation 3.\n|𝑠 1 -𝑠 2 | 𝑠 2 • 𝑘 𝑐𝑎𝑚 = 𝜎(3)\nwhere\n𝑘 𝑐𝑎𝑚 = 1 (𝑠 1 -𝑓 ) • 𝑓 2 𝑁 • 1 𝑝 • 𝑜𝑢𝑡 𝑝𝑖𝑥 𝑠𝑒𝑛𝑠𝑜𝑟 𝑝𝑖𝑥 • 1 𝑘 𝑟 𝐺 (𝑥, 𝑦)\nis the response of the camera system to a point target and is called the point spread function (PSF). 
We can obtain the defocus blurred image 𝐵(𝑥, 𝑦) by convolving the the perfectly focused image 𝐹 (𝑥, 𝑦) with PSF 𝐺 (𝑥, 𝑦) as show in equation 4." }, { "figure_ref": [ "fig_0" ], "heading": "𝐵(𝑥", "publication_ref": [ "b35" ], "table_ref": [ "tab_0" ], "text": ", 𝑦) = 𝐺 (𝑥, 𝑦) * 𝐹 (𝑥, 𝑦)(4)\nEquation 4 explains the blur solely due to defocus blurring. An additional blurring can occur due to various other reasons such as filtering in the camera hardware (e.g. to reduce noise), pixel binning, color filter mosaics, analog/digital image processing, analog to digital conversion, etc. [36]. This additional blurring can also be modelled as a convolution with another Gaussian function 𝑄 (𝑥, 𝑦) having a standard deviation 𝛾 which we assume to be constant for a given camera stetting. The final image can be obtained by\n𝐼 (𝑥, 𝑦) = 𝑄 (𝑥, 𝑦) * 𝐺 (𝑥, 𝑦) * 𝐹 (𝑥, 𝑦)(5)\nAll we can observe is the final image 𝐼 . We show that the combined blurring (from defocus and due to other reasons described above) can also be modelled with a Gaussian PSF \n𝜎 = √︁ 𝜆 2 -𝛾 2(6)\nSubstituting equation 6 into equation 3 we can obtain equation 7.\n|𝑠 1 -𝑠 2 | 𝑠 2 • 𝑘 𝑐𝑎𝑚 = √︁ 𝜆 2 -𝛾 2 (7)\nThe right side of the Figure 1 shows the variation of 𝜎 with different distances (𝑠 2 ) and under different cameras. For a given camera (hence for a given 𝑘 𝑐𝑎𝑚 ), we can estimate the 𝜎 from a given image and then estimate 𝑠 2 . For certain sections of the curve (e.g. curve of 𝑘 𝑐𝑎𝑚 = 17.3 at the shown value of 𝜎), estimating 𝑠 2 is ambiguous since there will be two 𝑠 2 values for a given 𝜎. This limitation can be mitigated by using a learning based model to estimate 𝑠 2 . Another observation is that the value of 𝜎 depends on 𝑘 𝑐𝑎𝑚 . This poses the main problem that we are addressing in this paper. If a model was trained to estimate depth using data from a camera with one 𝑘 𝑐𝑎𝑚 , this model will fail to predict the depth accurately for images taken with a camera having a different 𝑘 𝑐𝑎𝑚 . Furthermore, the sensitivity of 𝜎 to the distance diminishes as the distance increases. Hence the effectiveness of the defocus blur based depth measurement techniques will also lessen with increasing distance. This limits the effectiveness of depth from defocus blur techniques to close range; as a rule of thumb, to distances less than 2m.\nTable 1 shows some camera models and their 𝑘 𝑐𝑎𝑚 values based on the particular lens/settings used. Please refer to the Appendix for a more detailed calculation and for 𝑘 𝑐𝑎𝑚 values for more cameras/settings. When we train our model, we calculate two types of losses; blur estimation loss and the depth estimation loss. Blur estimation loss (𝐿 𝑏 ) is calculated at the prediction of 𝜆. Ground truth 𝐿 𝑏 can be obtained with equation 2 with known camera parameters at the training time. Depth prediction loss (𝐿 𝑑 ) is calculated comparing the predicted and ground truth depth maps. The final loss is obtained by,\n𝐿 𝑡𝑜𝑡𝑎𝑙 = 𝐿 𝑑 + 𝑏_𝑤𝑒𝑖𝑔ℎ𝑡 • 𝐿 𝑏 (8\n)\nwhere 𝑏_𝑤𝑒𝑖𝑔ℎ𝑡 is a parameter used to scale 𝐿 𝑏 ." }, { "figure_ref": [ "fig_2", "fig_2", "fig_2", "fig_2" ], "heading": "Defocus Blur Calibration", "publication_ref": [ "b36" ], "table_ref": [], "text": "Assume we train our model with images from a certain camera and need to use this already trained model to estimate depth using images from another camera with a different 𝑘 𝑐𝑎𝑚 and 𝛾. In this section we present our novel method that can be used to estimate these parameters for a given camera. We call this method the \"Defocus Blur Calibration\". 
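A minimal PyTorch-style sketch of the pipeline and of the training objective in equation 8 is given below. The two networks are single-convolution stand-ins, not the actual architecture, and all names are hypothetical; the clamp constant is added for numerical stability and is not part of equation 6.

```python
# Minimal sketch (hypothetical stand-in networks): the blur network predicts lambda,
# equation 6 removes the camera's intrinsic blur gamma, the result is divided by k_cam so
# the depth network sees the camera-independent |s1 - s2| / s2, and the total loss follows
# equation 8: L_total = L_d + b_weight * L_b.
import torch
import torch.nn as nn

class CameraIndependentDfD(nn.Module):
    def __init__(self):
        super().__init__()
        self.blur_net = nn.Conv2d(3, 1, 3, padding=1)   # stand-in for the blur CNN
        self.depth_net = nn.Conv2d(1, 1, 3, padding=1)  # stand-in for the depth CNN

    def forward(self, image, kcam, gamma):
        lam = self.blur_net(image)                                        # predicted lambda
        sigma = torch.sqrt(torch.clamp(lam ** 2 - gamma ** 2, min=1e-6))  # equation 6
        return self.depth_net(sigma / kcam), lam                          # depth map, lambda

if __name__ == "__main__":
    model, mse, b_weight = CameraIndependentDfD(), nn.MSELoss(), 0.1
    image = torch.rand(2, 3, 64, 64)
    gt_depth, gt_blur = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
    depth, lam = model(image, kcam=17.3, gamma=1.0)
    loss = mse(depth, gt_depth) + b_weight * mse(lam, gt_blur)            # equation 8
    loss.backward()
    print(float(loss))
```

Because the division by 𝑘_𝑐𝑎𝑚 happens outside the learned weights, switching cameras at test time only changes the scalars passed to the forward call, not the trained model.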
Note that defocus blur calibration is different from but requires the conventional camera calibration where camera intrinsics and distortion coefficients are estimated. The steps for defocus blur calibration are as follows.\n1. Fix the focal distance of the camera at 𝑠 1 (we used 𝑠 1 = 2𝑚 in our experiments) and calibrate the camera (in a conventional sense) with a calibration pattern [37]. We have used an asymmetric circular pattern as can be seen in Figure 3. Maintain a rough distance of around 𝑠 1 2 from the camera to the calibration pattern. After this calibration, we can estimate the distance to a given point on the calibration pattern that is visible in a given image. 2. Capture two images of a circular calibration pattern (preferably the same pattern that was used in step 1) while maintaining a distance of 𝑠 1 2 (we used 1𝑚) from the camera to the pattern. The first image is obtained with the camera focused on the pattern (𝑠 1 = 1𝑚) and the second image is obtained while maintaining 𝑠 1 = 2𝑚. Since the first image is focused on the calibration pattern, the circles on the pattern will appear sharp as shown in the upper part of Figure 3. The second image will look blurred as shown in the bottom half of the Figure 3. According to Figure 3, the images of circle edges on the focused images have a steeper slope (A Gaussian with a lower std). The slight blurring in these images are solely due to pixel binning. The edges on the blurred images have a more gradual slope. Also the slope becomes even more gradual as 𝑘 𝑐𝑎𝑚 is increased." }, { "figure_ref": [ "fig_2", "fig_1" ], "heading": "Estimate the std of the Gaussian function of the circle", "publication_ref": [], "table_ref": [], "text": "edges from the focused image. We horizontally slice the image of the circle as seen in Figure 3 and obtain the distribution of pixel intensities. These are flat-top Gaussian functions. The flat top nature is due to the intensity being constant inside the circle. It falls gradually at the edges of the circles. We scale these intensity values into the range from zero to one. We consider all the values less than a threshold (we used 0.95) as belonging to the falling edges. We then integrate the resulting distribution (one dimensional Gaussian). According to equation 9, we can estimate 𝛾 after obtaining the integral 𝐽 for a constant 𝑦.\n𝐽 = ∫ ∞ -∞ 𝐺 (𝑥)𝑑𝑥 = ∫ ∞ -∞ 𝑒 -1 2 𝑥 2 +𝑦 2 𝛾 2 𝑑𝑥 = 𝛾 √ 2𝜋(9)\n4. Estimate the std of the Gaussian of the falling edges (𝜆) of the circles from defocus blurred images similarly to step 3. Note the the blurring of the defocused images are due to both defocus blurring and pixel binning. We can estimate the std of the Gaussian of the falling edges due to defocus blurring with equation 6. 5. Estimate the distance to each circle center from camera using the defocus blurred images using the camera intrinsic matrix generated with calibration in step1. This is a well-established procedure that is available in most of the computer vision libraries. We can write a separate version of equation 7 for each circle in the calibration pattern. With 𝜆 and 𝛾 already estimated, we can estimate the 𝑘 𝑐𝑎𝑚 for the given camera using Here we have assumed that the distance from the camera to each circle center is approximately equal to the distance to the edges of the circle. This can be justified because the distance to the circles from the camera (around 1m) is much larger than the diameter of the circles (around 4cm in our case). 6. 
To improve the accuracy of the estimate, we can repeat steps 2 to 5 several times. See the evaluation section for further details on the experiments. We can estimate the 𝑘 𝑐𝑎𝑚 for a given camera with the above steps. The estimated 𝑘 𝑐𝑎𝑚 can be used as shown in Figure 2 to predict depth with the images taken from this new camera." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b3", "b3", "b22", "b19", "b37", "b3", "b39" ], "table_ref": [ "tab_1" ], "text": "Defocusnet dataset. We use the synthetic dataset generated by Maximov et al. [4] to train one of our models. This dataset was created with a virtual camera having several 𝐾 𝑐𝑎𝑚 values of 0.15, 0.33, 0.78, 1.59 and 2.41. This dataset has 500 focal stacks, each with 5 images with different focal distances. Synthetic Blender dataset. We create a new synthetic dataset by expanding the defocusnet dataset [4]. This new dataset has various textures (to make them realistic) mapped to the 3D objects that were not present in the original dataset. We use several simulated cameras with 𝐾 𝑐𝑎𝑚 of 0.08, 0.15, 0.23 and 0.33. This dataset has a focal distance of 1.5m and contains 400 defocus blurred images. We use the script provided by Maximov et al. (modified) to create our dataset. The images we generate in this dataset are 256 x 256 pixels.
Both the Synthetic Blender and the defocusnet datasets also have a perfectly focused image for each defocus blurred image. DDFF12 dataset. We also use the DDFF12 dataset provided by Hazirbas et al. [23] which contains 720 images created with a lightfield camera. We use the two real world datasets (DDFF12 and the NYU dataset described next) so that we can show that our models can work under real world images and deal with the domain gap between real and synthetic images [20].
Table 2. Performance on Blender dataset (MSE):
Method / 𝐾 𝑐𝑎𝑚 0.08 0.14 0.23 0.33
in-focus 0.099 0.081 0.082 0.100
No 𝐾 𝑐𝑎𝑚 0.062 0.050 0.056 0.085
GT 𝐾 𝑐𝑎𝑚 0.045 0.037 0.052 0.061
After pre-processing the images as mentioned by Hazirbas et al., we obtained the blurred images which are focused at various distances. NYU depth v2 dataset. The NYU depth v2 dataset [38] contains 1449 pairs of aligned RGB and depth image pairs. Following previous papers [4] [39], we create the training and testing splits. We create artificially defocus blurred images from this dataset using the method described by Carvalho et al. [40]. We have fixed certain drawbacks in their Matlab script in order to produce more realistic defocus blurred images as further discussed in the appendix. We used 𝐾 𝑐𝑎𝑚 values for training and testing as shown in Table 3. Images of 480 x 480 pixels were used for training and 480 x 640 were used for testing." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b40", "b41" ], "table_ref": [], "text": "We use PyTorch [41] to implement the neural networks. We use the Adam optimizer [42] with 𝛽 1 = 0.9 and 𝛽 2 = 0.999 and a learning rate of 10 -4 . Mean Squared Error was used as the loss function for both the blur and the depth to train all the models. We evaluate our depth predictions with the metrics absolute relative error (REL), mean-squared error (MSE), Root-Mean-Squared error (RMSE) and average log10 error. We also report threshold accuracy 𝛿 𝑛 , which is the percentage of pixels that satisfy the condition 𝑚𝑎𝑥 (𝑑 𝑖 / d𝑖 , d𝑖 /𝑑 𝑖 ) < 1.25 𝑛 , where 𝑑 𝑖 and d𝑖 are the ground truth and predicted depth at pixel 𝑖. 
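The metrics listed above are the standard monocular-depth metrics; a generic NumPy sketch (not the authors' evaluation code) is shown below for concreteness, assuming dense predicted and ground-truth depth maps in metres.

```python
# Generic sketch of the depth metrics listed above: REL, MSE, RMSE, log10, and delta_n.
import numpy as np

def depth_metrics(pred, gt, eps=1e-6):
    pred, gt = np.asarray(pred, float).ravel(), np.asarray(gt, float).ravel()
    valid = gt > eps                                      # ignore missing ground-truth pixels
    pred, gt = np.clip(pred[valid], eps, None), gt[valid]
    ratio = np.maximum(pred / gt, gt / pred)
    return {
        "REL":    float(np.mean(np.abs(pred - gt) / gt)),
        "MSE":    float(np.mean((pred - gt) ** 2)),
        "RMSE":   float(np.sqrt(np.mean((pred - gt) ** 2))),
        "log10":  float(np.mean(np.abs(np.log10(pred) - np.log10(gt)))),
        "delta1": float(np.mean(ratio < 1.25)),
        "delta2": float(np.mean(ratio < 1.25 ** 2)),
        "delta3": float(np.mean(ratio < 1.25 ** 3)),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.uniform(0.5, 2.0, size=(480, 640))           # synthetic depth map in metres
    pred = gt * rng.normal(1.0, 0.05, size=gt.shape)      # noisy stand-in "prediction"
    print(depth_metrics(pred, gt))
```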
We train our models on the defocusnet dataset for 400 epochs and a batch size of 20. Our NYU depth models were trained for 800 epochs with a batch size of 8." }, { "figure_ref": [ "fig_1" ], "heading": "Performance", "publication_ref": [ "b3", "b3", "b18", "b22", "b3" ], "table_ref": [ "tab_1", "tab_2", "tab_2" ], "text": "Table 2 shows the performance of the model trained on the defocusnet [4] dataset and evaluated on our Blender dataset with different 𝑘 𝑐𝑎𝑚 values of simulated cameras. All three methods (in-focus, No 𝐾 𝑐𝑎𝑚 and GT 𝐾 𝑐𝑎𝑚 ) use the same deep learning architecture to predict depth. The only exception is that the GT 𝐾 𝑐𝑎𝑚 model performs the 𝐾 𝑐𝑎𝑚 correction as shown in Figure 2. We use the 𝐾 𝑐𝑎𝑚 values that were used to generate the data and these can be called the Ground Truth 𝐾 𝑐𝑎𝑚 values (GT 𝐾 𝑐𝑎𝑚 ). The In-focus model was both trained and tested on perfectly focused images. The No 𝐾 𝑐𝑎𝑚 model does not consider the effect of 𝐾 𝑐𝑎𝑚 during either training or testing (similar to the defocusnet [4] model). This means the No 𝐾 𝑐𝑎𝑚 model does not divide the output of the blur estimation model by 𝐾 𝑐𝑎𝑚 as shown in Figure 2, whereas the GT 𝐾 𝑐𝑎𝑚 model applies this correction during both training and testing.
Table 3 shows the performance on the defocus blurred NYU depth dataset. Here we use a single trained model to evaluate the performance under various settings. The model was trained on data refocused with a 𝐾 𝑐𝑎𝑚 of 8.79 and 35.61 and tested on the rest under the distance range of 0 to 2 m. The VPD model was trained and tested on in-focus images with no defocus blurring. Our GT and est 𝐾 𝑐𝑎𝑚 methods outperform the state-of-the-art depth estimation model (VPD) on the NYU depth v2 dataset [19] by around 0.04 in RMSE. This converts to a reduction of error of around 4cm in the depth estimation. This proves again the importance of defocus blurring in depth estimation. We evaluate our models under three methods which depend on the nature of the 𝐾 𝑐𝑎𝑚 values used. In the method column, "GT 𝐾 𝑐𝑎𝑚 " means we have used the 𝐾 𝑐𝑎𝑚 values that were used to defocus blur the particular dataset, which can be considered as Ground Truth 𝐾 𝑐𝑎𝑚 values. "est 𝐾 𝑐𝑎𝑚 " represents the 𝐾 𝑐𝑎𝑚 values that were estimated with the defocus calibration method described in section 3.3. No 𝐾 𝑐𝑎𝑚 models do not consider the effect of 𝐾 𝑐𝑎𝑚 . We describe further details of the estimation process in the subsequent sections.
Table 4 shows the performance of our model under the DDFF12 dataset [23]. All the models were first trained on the defocusnet dataset [4] (the same model we used to evaluate the Blender dataset as shown in Table 2). The In-focus model was trained and tested on well focused images. The No 𝐾 𝑐𝑎𝑚 and est 𝐾 𝑐𝑎𝑚 models were trained and tested on defocus blurred images. Since we do not have ground truth 𝐾 𝑐𝑎𝑚 for the DDFF12 dataset, we performed a linear search for the 𝐾 𝑐𝑎𝑚 which predicts the best depth using the ground truth depth maps provided in the training set. The results in Table 4 are the performance of our model under the test set using the 𝐾 𝑐𝑎𝑚 value found above. Both the No 𝐾 𝑐𝑎𝑚 and est 𝐾 𝑐𝑎𝑚 models perform better than the in-focus model for depth prediction. Also, using the appropriate 𝐾 𝑐𝑎𝑚 to transfer the model to the new domain of images significantly improves the performance compared to the No 𝐾 𝑐𝑎𝑚 model, which does not perform a correction that depends on the camera." }, { "figure_ref": [ "fig_2", "fig_3", "fig_3", "fig_3", "fig_3" ], "heading": "Defocus Blur Calibration Performance", "publication_ref": [], "table_ref": [], "text": "We expand the discussion on defocus blur calibration in this section. These experiments were performed on the refocused NYU depth v2 dataset. We have created refocused data with 𝐾 𝑐𝑎𝑚 values of 1.39, 5.61, 8.79, 12.69, 22.67, 25.61. 
Note that we have used the additional 𝐾 𝑐𝑎𝑚 values (1.39 and 5.61) that were not used to evaluate the performance of depth estimation in Table 2. We obtain several photos of the asymmetric circular pattern shown in Figure 3 with the Microsoft Kinect camera and refocus them with the above mentioned 𝐾 𝑐𝑎𝑚 values. We used from 19 to 20 different image pairs (an in-focus image and a defocus-blurred image) for each 𝐾 𝑐𝑎𝑚 value. Then we perform the defocus blur calibration procedure described in section 3.2. Estimated 𝐾 𝑐𝑎𝑚 values vs. the actual values (ground truth 𝐾 𝑐𝑎𝑚 ) are shown in Figure 4. The relationship between the ground truth and estimated 𝐾 𝑐𝑎𝑚 values is very linear, as expected. We estimate one 𝐾 𝑐𝑎𝑚 value per circle from an in-focus and defocus blurred image pair. Since there are 44 circles in the pattern, for 20 image pairs we obtained 880 estimated 𝐾 𝑐𝑎𝑚 values. The results in Figure 4 were obtained after removing outliers and calculating the median from the estimated 𝐾 𝑐𝑎𝑚 values. We show box plots with interquartile range, median, minimum and maximum values of these estimations along with the ground truth 𝐾 𝑐𝑎𝑚 values. 4.4.1 Sensitivity of depth estimation performance to 𝐾 𝑐𝑎𝑚 . In this section we explore how the variation in estimated 𝐾 𝑐𝑎𝑚 values affects the depth estimation performance. As can be seen in Figure 5, we use a range of numbers centered on the actual 𝐾 𝑐𝑎𝑚 values for the respective datasets and obtain the RMSE error of depth estimation. It can be seen that the error response of the model to the variation of the 𝐾 𝑐𝑎𝑚 used has a clear minimum. The error increases if the value used in the place of 𝐾 𝑐𝑎𝑚 deviates from the actual value. For example, the error of the response of 𝐾 𝑐𝑎𝑚 =22.67 increases by around 16% if the 𝐾 𝑐𝑎𝑚 used deviates positively from the GT value by 18%. Figure 8 shows some examples of predicted depth maps when the model was provided with an unseen virtually blurred image from a camera with 𝐾 𝑐𝑎𝑚 = 22.67. Agreeing with Figure 5, the predictions get distorted faster when the 𝐾 𝑐𝑎𝑚 used is lower than the ground truth 𝐾 𝑐𝑎𝑚 and distort more slowly when it is higher." }, { "figure_ref": [], "heading": "Effect of the blur weight", "publication_ref": [], "table_ref": [], "text": "We change the scaling parameter 𝑏_𝑤𝑒𝑖𝑔ℎ𝑡 from equation 8 while training several models on data from the defocus blurred NYU depth v2 dataset with 𝐾 𝑐𝑎𝑚 values of 8.79 and 35.61. The performance on the two evaluation datasets (with 𝐾 𝑐𝑎𝑚 values of 12.69 and 22.67) is shown in Figure 7." }, { "figure_ref": [ "fig_1" ], "heading": "Effect of the Field of View", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "The field of view of a camera can be defined in several ways. One way is to define it as the size of an object at a given distance from the camera that would completely fill the image sensor. In Figure 6, 𝑠 is the length of the sensor, f is the focal length of the lens, 𝑑 is the distance to the object and 𝑤 is the length of the object that completely fills the sensor at that distance. It can be seen that 𝑤 is inversely proportional to 𝑓 . Cameras with a smaller focal length have a larger Field of View and vice versa. In all the experiments that we performed including the NYU dataset, we have assumed that the cameras have a fixed FOV even when the 𝑘 𝑐𝑎𝑚 (and therefore 𝑓 ) changes. While this is helpful to analyze the performance of blur based depth estimation methods, it is important to investigate the effect of the FOV change on the performance of the models. 
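A small numerical sketch of this field-of-view relation is given below. The exact formula is not quoted in the document, so the usual similar-triangles approximation 𝑤 ≈ 𝑠 · 𝑑 / 𝑓 is assumed here, along with a 36 mm sensor length; only the inverse proportionality between 𝑤 and 𝑓 is the point being illustrated.

```python
# Sketch of the field-of-view relation sketched in Figure 6 under the assumed
# similar-triangles approximation w ~= s * d / f: halving the focal length doubles the
# object size that fills the sensor, i.e. w is inversely proportional to f.
def object_size_filling_sensor(sensor_len_mm: float, distance_m: float, focal_mm: float) -> float:
    """Approximate size (m) of an object at distance_m that spans the whole sensor."""
    return sensor_len_mm / focal_mm * distance_m

if __name__ == "__main__":
    for f in (20.0, 30.0, 50.0):   # focal lengths (mm) used for the NYU experiments
        w = object_size_filling_sensor(sensor_len_mm=36.0, distance_m=1.0, focal_mm=f)
        print(f"f = {f:4.1f} mm -> w = {w:.2f} m at 1 m")
    # The ratio between the 30 mm and 50 mm views, 30/50 = 0.6, corresponds to the
    # image-scaling factor used for the resized NYU images in this work.
```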
We created a dataset by scaling down the images in the NYU depth dataset by a factor of 0.6 and then refocusing with a 𝑘 𝑐𝑎𝑚 (respective 𝑓 is 30mm) of 12.69. The model has been trained with data having 𝑘 𝑐𝑎𝑚 s of 8.79 and 35.61 (they had focal lengths (𝑓 ) of 20mm and 50mm). We scaled down the images of 𝑓 = 30𝑚𝑚 with respect to 𝑓 = 50𝑚𝑚 which is 0.6. Note that the images have the same amount of blur as the images of original size; only the size of the objects visible have changed. From Table 5, it can be seen that the performance drops significantly by more than three folds when we perform the resizing. The reason for this can be understood from Figure 2. The depth estimation section receives two inputs. One is the blur and the second is the image features in the form of skip connections. Although we account for the change of blur through division by the respective 𝑘 𝑐𝑎𝑚 , we do not modify image features to reflect the change of FOV. This is a limitation of our work and needs to be addressed in the future. We show that estimating depth from defocus blur is significantly superior to conventional semantic based depth prediction provided that the camera is suitable for it. But this technique is sensitive to the camera. Our novel approach performs a simple correction to an already trained depth prediction model using camera parameters of a given camera. We show that this correction can alleviate the sensitivity of the model to the camera. Our novel defocus blur calibration technique can estimate the camera parameters using several images taken by a given camera. We show that our approach beats the state-of-the art for several datasets. Finally we show some limitations of our work and suggest future improvements.\nMethods such as ours which utilize a camera can come with certain privacy and security concerns. Our technique specifically can be used by enemy drones to measure distance to a target person in close quarters and cause significant harm which raises several ethical concerns. As researchers it is important to address these concerns alongside the algorithmic improvements to the state-of-the-art." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgment: This work is supported by the award 70NANB21H029 from the U.S. Department of Commerce, National Institute of Standards and Technology (NIST)." } ]
Monocular depth estimation is an important step in many downstream tasks in machine vision. We address the topic of estimating monocular depth from defocus blur which can yield more accurate results than the semantic based depth estimation methods. The existing monocular depth from defocus techniques are sensitive to the particular camera that the images are taken from. We show how several camera-related parameters affect the defocus blur using optical physics equations and how they make the defocus blur depend on these parameters. The simple correction procedure we propose can alleviate this problem which does not require any retraining of the original model. We created a synthetic dataset which can be used to test the camera independent performance of depth from defocus blur models. We evaluate our model on both synthetic and real datasets (DDFF12 and NYU depth V2) obtained with different cameras and show that our methods are significantly more robust to the changes of cameras.
Camera-Independent Single Image Depth Estimation from Defocus Blur
[ { "figure_caption": "Figure 1 .1Figure 1. Left:Image formation in a simple camera system. Right:Blur vs. distance", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Our Model 3.2 Our solution The operation or our model is shown in Figure 2. Both Blur Estimation and Depth estimation sections are CNN based neural networks inspired by the defocusnet [4]. Given a defocus blurred image, the blur estimation model estimates the PSF standard deviation 𝜆 at each pixel of the image. Then we calculate the standard deviation of the PSF solely due to defocus blurring 𝜎 according to equation 6. Next we divide the obtained 𝜎 with 𝑘 𝑐𝑎𝑚 to obtain |𝑠 2 -𝑠 1 | 𝑠 2 which does", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Defocus Blur Calibration equation 7.Here we have assumed that the distance from the camera to each circle center is approximately equal to the distance to the edges of the circle. This can be justified because the distance to the circles from the camera (around 1m) is much larger than the diameter of the circles (around 4cm in our case). 6. To improve the accuracy of the estimate, we can repeat steps from 2 to 5 several times. See the evaluation section for further details on the experiments. We can estimate the 𝑘 𝑐𝑎𝑚 for a given camera with the above steps. The estimated 𝑘 𝑐𝑎𝑚 can be used as shown in Figure2to predict depth with the images taken from this new camera.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Estimation of 𝐾 𝑐𝑎𝑚 values of different cameras and est 𝐾 𝑐𝑎𝑚 models were trained and tested on defocus blurred images. Since we do not have ground truth 𝐾 𝑐𝑎𝑚 for the DDFF12 dataset, we performed a linear search of the 𝐾 𝑐𝑎𝑚 which predicts the best depth using the ground truth depth maps provided in the training set. The results in Table4are the performance of our model under the test set using the 𝐾 𝑐𝑎𝑚 value found above. Both the No 𝐾 𝑐𝑎𝑚 and est 𝐾 𝑐𝑎𝑚 models perform better than the in-focus model for depth prediction. Also using the appropriate 𝐾 𝑐𝑎𝑚 to transfer the model to the new domain of images significantly improves the performance compared to the no 𝐾 𝑐𝑎𝑚 model which does not perform a correction that depends on the camera.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. 𝐾 𝑐𝑎𝑚 Estimation Error", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .Figure 7 .67Figure 6. FOV of a camera", "figure_data": "", "figure_id": "fig_5", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. 
Examples of from the camera with 𝐾 𝑐𝑎𝑚 = 22.67 predicted using various 𝐾 𝑐𝑎𝑚 values 5 Conclusions", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "𝑘", "figure_data": "Camera/deviceLensf (mm) N𝐾 𝑐𝑎𝑚Cannon EOS Rebel T7EF-S18415.54555.6 105.58EF 50mm501.2 406.16EF 70-300mm705.6 172.35Nikon D7500Nikon AF-S501.8 240.40AF-S DX NIKKOR183.515.76185.69.85553.5 149.98555.693.73Sony Alpha 7 IVFE PZ 16-35mm1648.9235443.1316221.62FE 70-200 mm702.8 250.942002.8 2196.48702231.93Google Pixel 7 Prowide251.85 50.39telephoto1202.55 1577.82", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance on NYU datasetNo 𝐾 𝑐𝑎𝑚 model does not divide the output of the blur estimation model with 𝐾 𝑐𝑎𝑚 as shown in Figure2. The GT 𝐾 𝑐𝑎𝑚 model on the other hand considers the effect of 𝐾 𝑐𝑎𝑚 and behaves as shown in Figure2during both training and testing. According to Table2the performance of both the No 𝐾 𝑐𝑎𝑚 and the GT 𝐾 𝑐𝑎𝑚 models are better (by around 0.025) than that of the in-focus method which shows that considering defocus blur is valuable when estimating depth. Our models perform better when considering the effect of 𝐾 𝑐𝑎𝑚 (GT 𝐾 𝑐𝑎𝑚 ) compared to when not considering it (No 𝐾 𝑐𝑎𝑚 ) by around 0.015 in MSE. This shows that we can transfer the knowledge learned with the trained model into a new domain (images taken with a different 𝐾 𝑐𝑎𝑚 ) just with one parameter 𝐾 𝑐𝑎𝑚 .", "figure_data": "focus VPD[19] 0.953 0.992 0.999 0.0520.1540.0278.79GT 𝐾 𝑐𝑎𝑚 0.976 0.997 0.999 0.0460.0820.0198.79No 𝐾 𝑐𝑎𝑚 0.912 0.975 0.998 0.0950.1610.03735.61GT 𝐾 𝑐𝑎𝑚 0.976 0.997 0.999 0.0460.0820.01935.61No 𝐾 𝑐𝑎𝑚 0.962 0.995 0.999 0.0540.1010.02312.69GT 𝐾 𝑐𝑎𝑚 0.969 0.999 0.999 0.0680.1230.08812.69est 𝐾 𝑐𝑎𝑚 0.970 0.999 0.999 0.0680.1220.03012.69No 𝐾 𝑐𝑎𝑚 0.853 0.963 0.999 0.1270.1930.05022.67GT 𝐾 𝑐𝑎𝑚 0.980 0.998 0.999 0.0680.1170.02822.67est 𝐾 𝑐𝑎𝑚 0.980 0.998 0.999 0.0690.1180.02822.67No 𝐾 𝑐𝑎𝑚 0.896 0.994 0.999 0.1050.1650.043", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Performance on DDFF12 dataset", "figure_data": "𝐾 𝑐𝑎𝑚Image size RMSE12.69 Original size 0.123Resized0.43822.67 Original size 0.117Resized0.399", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Effect of FOVWe use the same model that we used to obtain the results in Table3that was trained on data from 𝐾 𝑐𝑎𝑚 values of 8.79 and 35.61 and evaluate them on data from 𝐾 𝑐𝑎𝑚 values of 12.69 and 22.67. As can be seen in Figure", "figure_data": "", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" } ]
Lahiru Wijayasingha; John A Stankovic
[ { "authors": "J Ping; Y Liu; D Weng", "journal": "IEEE", "ref_id": "b0", "title": "Comparison in depth perception between virtual reality and augmented reality systems", "year": "2019" }, { "authors": "X Dong; M A Garratt; S G Anavatti; H A Abbass", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b1", "title": "Towards real-time monocular depth estimation for robotics: A survey", "year": "2022" }, { "authors": "N.-H Wang; R Wang; Y.-L Liu; Y.-H Huang; Y.-L Chang; C.-P Chen; K Jou", "journal": "", "ref_id": "b2", "title": "Bridging unsupervised and supervised depth from focus via all-in-focus supervision", "year": "2021" }, { "authors": "M Maximov; K Galim; L Leal-Taixé", "journal": "", "ref_id": "b3", "title": "Focus on defocus: bridging the synthetic to real domain gap for depth estimation", "year": "2020" }, { "authors": "V Casser; S Pirk; R Mahjourian; A Angelova", "journal": "", "ref_id": "b4", "title": "Depth prediction without the sensors: Leveraging structure for unsupervised learning from monocular videos", "year": "2019" }, { "authors": "S Gur; L Wolf", "journal": "", "ref_id": "b5", "title": "Single image depth estimation trained via depth from defocus cues", "year": "2019" }, { "authors": "M Watanabe; S K Nayar; M N Noguchi", "journal": "", "ref_id": "b6", "title": "Real-time computation of depth from defocus", "year": "1996" }, { "authors": "F Yang; X Huang; Z Zhou", "journal": "", "ref_id": "b7", "title": "Deep Depth from Focus with Differential Focus Volume", "year": "2022" }, { "authors": "Y Ban; M Liu; P Wu; B Yang; S Liu; L Yin; W Zheng", "journal": "Electronics", "ref_id": "b8", "title": "Depth estimation method for monocular camera defocus images in microscopic scenes", "year": "2012" }, { "authors": "X Zhang; H Wang; W Wang; S Yang; J Wang; J Lei; Z Zhang; Z Dong", "journal": "Optics and Lasers in Engineering", "ref_id": "b9", "title": "Particle field positioning with a commercial microscope based on a developed CNN and the depth-from-defocus method", "year": "2022" }, { "authors": "C Kong; S Lucey", "journal": "", "ref_id": "b10", "title": "Deep non-rigid structure from motion", "year": "2019" }, { "authors": "N.-H Wang; B Solarte; Y.-H Tsai; W.-C Chiu; M Sun", "journal": "IEEE", "ref_id": "b11", "title": "360sd-net: 360 stereo depth estimation with learnable cost volume", "year": "2020" }, { "authors": "Y.-L Chang; W.-Y Chen; J.-Y Chang; Y.-M Tsai; C.-L Lee; L.-G Chen", "journal": "SPIE", "ref_id": "b12", "title": "Priority depth fusion for the 2D to 3D conversion system", "year": "2008" }, { "authors": "Y Y Schechner; N Kiryati", "journal": "International Journal of Computer Vision", "ref_id": "b13", "title": "Depth from Defocus vs. 
Stereo: How Different Really Are They?", "year": "2000" }, { "authors": "Y Li; Y Guo; Z Yan; X Huang; Y Duan; L Ren", "journal": "", "ref_id": "b14", "title": "Omnifusion: 360 monocular depth estimation via geometry-aware fusion", "year": "2022" }, { "authors": "A Mertan; D J Duff; G Unal", "journal": "Digital Signal Processing", "ref_id": "b15", "title": "Single image depth estimation: An overview", "year": "2022" }, { "authors": "V Patil; C Sakaridis; A Liniger; L Van Gool", "journal": "", "ref_id": "b16", "title": "P3depth: Monocular depth estimation with a piecewise planarity prior", "year": "2022" }, { "authors": "S F Bhat; R Birkl; D Wofk; P Wonka; M Müller", "journal": "", "ref_id": "b17", "title": "Zoedepth: Zeroshot transfer by combining relative and metric depth", "year": "2023" }, { "authors": "W Zhao; Y Rao; Z Liu; B Liu; J Zhou; J Lu", "journal": "", "ref_id": "b18", "title": "Unleashing text-to-image diffusion models for visual perception", "year": "2023" }, { "authors": "K Xian; J Zhang; O Wang; L Mai; Z Lin; Z Cao", "journal": "", "ref_id": "b19", "title": "Structureguided ranking loss for single image depth prediction", "year": "2020" }, { "authors": "S K Nayar; Y Nakagawa", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b20", "title": "Shape from focus", "year": "1994" }, { "authors": "M Subbarao; J . K Tyan", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b21", "title": "Selecting the optimal focus measure for autofocusing and depth-from-focus", "year": "1998" }, { "authors": "C Hazirbas; S G Soyer; M C Staab; L Leal-Taixé; D Cremers", "journal": "Springer", "ref_id": "b22", "title": "Deep depth from focus", "year": "2018" }, { "authors": "S Liu; F Zhou; Q Liao", "journal": "IEEE Transactions on Image Processing", "ref_id": "b23", "title": "Defocus map estimation from a single image based on two-parameter defocus model", "year": "2016" }, { "authors": "S Anwar; Z Hayder; F Porikli", "journal": "BMVC", "ref_id": "b24", "title": "Depth Estimation and Blur Removal from a Single Out-of-focus Image", "year": "2017" }, { "authors": "M Subbarao; G Surya", "journal": "International Journal of Computer Vision", "ref_id": "b25", "title": "Depth from defocus: A spatial domain approach", "year": "1994" }, { "authors": "A Zhang; J Sun", "journal": "IEEE Transactions on Image Processing", "ref_id": "b26", "title": "Joint Depth and Defocus Estimation From a Single Image Using Physical Consistency", "year": "2021" }, { "authors": "S Pertuz; D Puig; M A Garcia; A Fusiello", "journal": "IEEE Transactions on Image Processing", "ref_id": "b27", "title": "Generation of All-in-Focus Images by Noise-Robust Selective Fusion of Limited Depth-of-Field Images", "year": "2013" }, { "authors": "H Ikoma; C M Nguyen; C A Metzler; Y Peng; G Wetzstein", "journal": "IEEE", "ref_id": "b28", "title": "Depth from defocus with learned optics for imaging and occlusionaware depth estimation", "year": "2021" }, { "authors": "R Ng; M Levoy; M Brédif; G Duval; M Horowitz; P Hanrahan", "journal": "", "ref_id": "b29", "title": "Light field photography with a hand-held plenoptic camera", "year": "2005" }, { "authors": "Y Lu; G Milliron; J Slagter; G Lu", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b30", "title": "Self-supervised single-image depth estimation from focus and defocus clues", "year": "2021" }, { "authors": "Y.-W Tai; M S Brown", "journal": "IEEE", "ref_id": "b31", "title": "Single image defocus map 
estimation using local contrast prior", "year": "2009" }, { "authors": "S Zhuo; T Sim", "journal": "Pattern Recognition", "ref_id": "b32", "title": "Defocus map estimation from a single image", "year": "2011" }, { "authors": "X Cun; C.-M Pun", "journal": "Springer", "ref_id": "b33", "title": "Defocus blur detection via depth distillation", "year": "2020" }, { "authors": "M Subbarao; G Surya", "journal": "International Journal of Computer Vision", "ref_id": "b34", "title": "Depth from defocus: A spatial domain approach", "year": "1994" }, { "authors": "Jeff Meyer; Alex Summersby", "journal": "", "ref_id": "b35", "title": "Image sensors explained", "year": "" }, { "authors": " Opencv", "journal": "", "ref_id": "b36", "title": "Camera Calibration", "year": "2023" }, { "authors": "N Silberman; D Hoiem; P Kohli; R Fergus", "journal": "ECCV", "ref_id": "b37", "title": "Indoor segmentation and support inference from rgbd images", "year": "2012" }, { "authors": "D Eigen; C Puhrsch; R Fergus", "journal": "Advances in neural information processing systems", "ref_id": "b38", "title": "Depth map prediction from a single image using a multi-scale deep network", "year": "2014" }, { "authors": "M Carvalho; B Le Saux; P Trouvé-Peloux; A Almansa; F Champagnat", "journal": "", "ref_id": "b39", "title": "Deep Depth from Defocus: how can defocus blur improve 3D estimation using dense neural networks?", "year": "2018" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga", "journal": "Advances in neural information processing systems", "ref_id": "b40", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b41", "title": "Adam: A method for stochastic optimization", "year": "2014" } ]
[ { "formula_coordinates": [ 3, 126.9, 656.5, 167.74, 23.36 ], "formula_id": "formula_0", "formula_text": "𝐺 (𝑥, 𝑦) = 1 2𝜋𝜎 𝑒 -1 2 𝑥 2 +𝑦 2 𝜎 2(1)" }, { "formula_coordinates": [ 3, 345.49, 272.7, 213.11, 25.24 ], "formula_id": "formula_1", "formula_text": "|𝑠 1 -𝑠 2 | 𝑠 2 • 1 (𝑠 1 -𝑓 ) • 𝑓 2 𝑁 • 1 𝑝 • 𝑜𝑢𝑡 𝑝𝑖𝑥 𝑠𝑒𝑛𝑠𝑜𝑟 𝑝𝑖𝑥 = 𝑘 𝑟 • 𝜎 (2)" }, { "formula_coordinates": [ 3, 401.4, 411.17, 157.2, 23.81 ], "formula_id": "formula_2", "formula_text": "|𝑠 1 -𝑠 2 | 𝑠 2 • 𝑘 𝑐𝑎𝑚 = 𝜎(3)" }, { "formula_coordinates": [ 3, 327.42, 438.37, 168.14, 26.69 ], "formula_id": "formula_3", "formula_text": "𝑘 𝑐𝑎𝑚 = 1 (𝑠 1 -𝑓 ) • 𝑓 2 𝑁 • 1 𝑝 • 𝑜𝑢𝑡 𝑝𝑖𝑥 𝑠𝑒𝑛𝑠𝑜𝑟 𝑝𝑖𝑥 • 1 𝑘 𝑟 𝐺 (𝑥, 𝑦)" }, { "formula_coordinates": [ 3, 399.81, 530.42, 158.79, 9.36 ], "formula_id": "formula_4", "formula_text": ", 𝑦) = 𝐺 (𝑥, 𝑦) * 𝐹 (𝑥, 𝑦)(4)" }, { "formula_coordinates": [ 3, 367.23, 668.4, 191.37, 9.36 ], "formula_id": "formula_5", "formula_text": "𝐼 (𝑥, 𝑦) = 𝑄 (𝑥, 𝑦) * 𝐺 (𝑥, 𝑦) * 𝐹 (𝑥, 𝑦)(5)" }, { "formula_coordinates": [ 4, 147.29, 367.42, 147.35, 11.96 ], "formula_id": "formula_6", "formula_text": "𝜎 = √︁ 𝜆 2 -𝛾 2(6)" }, { "formula_coordinates": [ 4, 122.42, 419.01, 172.23, 23.81 ], "formula_id": "formula_7", "formula_text": "|𝑠 1 -𝑠 2 | 𝑠 2 • 𝑘 𝑐𝑎𝑚 = √︁ 𝜆 2 -𝛾 2 (7)" }, { "formula_coordinates": [ 4, 384.12, 521.91, 170.95, 9.36 ], "formula_id": "formula_8", "formula_text": "𝐿 𝑡𝑜𝑡𝑎𝑙 = 𝐿 𝑑 + 𝑏_𝑤𝑒𝑖𝑔ℎ𝑡 • 𝐿 𝑏 (8" }, { "formula_coordinates": [ 4, 555.07, 522.44, 3.52, 8.84 ], "formula_id": "formula_9", "formula_text": ")" }, { "formula_coordinates": [ 5, 85.9, 520.06, 208.74, 25.79 ], "formula_id": "formula_10", "formula_text": "𝐽 = ∫ ∞ -∞ 𝐺 (𝑥)𝑑𝑥 = ∫ ∞ -∞ 𝑒 -1 2 𝑥 2 +𝑦 2 𝛾 2 𝑑𝑥 = 𝛾 √ 2𝜋(9)" }, { "formula_coordinates": [ 6, 322.22, 76.2, 243.65, 22.52 ], "formula_id": "formula_11", "formula_text": "𝐾 𝑐𝑎𝑚 method 𝛿 1 ↑ 𝛿 2 ↑ 𝛿 3 ↑ REL ↓ RMSE ↓ log10 ↓ in-" } ]
10.1109/CITSM.2016.7577578
2023-11-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b21", "b2", "b27", "b21", "b5", "b2", "b25", "b12", "b13", "b19", "b14", "b2", "b7", "b35", "b15", "b13", "b1", "b24" ], "table_ref": [], "text": "Large Language Models (LLMs), such as ChatGPT, have emerged as powerful tools in understanding and generating human language (Li et al., 2023c;Touvron et al., 2023;OpenAI, 2023), playing a pivotal role in diverse open-domain tasks and leaving a significant impact on both industry and academia (Bubeck et al., 2023;Yao et al., 2023;Touvron et al., 2023;Laskar et al., 2023). However, their performance is often confined to the text-based domains and tasks they were trained on, overlooking the multimodal and dynamic nature of real-world information. As people increasingly rely on LLMs to address their daily challenges, the imperative to enhance the task-handling capabilities of these models grows ever more pressing. In addition to addressing many of people's emerging needs in the real world, enhancing LLMs with multimodal problem-solving skills could be a significant step towards the realization of AGI in an idealized future (Bubeck et al., 2023).\nReflecting this demand and vision, recent studies have embarked on two primary approaches to integrate multimodal processing capabilities into existing LLMs (Li et al., 2023a): 1) Joint training or finetuning LLMs with components for multimodal encoding and generation (Wu et al., 2023;Maaz et al., 2023;Zhang et al., 2023a); 2) Introducing auxiliary API tools via natural language interfaces (Patil et al., 2023;Shen et al., 2023;Qin et al., 2023), positioning LLMs as the central decisionmaking entity determining the appropriate tools to employ for the inquiry. Joint training of multimodal LLMs, despite creating more unified models, faces challenges with computational demands and potential loss of the generalization ability (Bubeck et al., 2023). On the other hand, evolving API functions, which are modularly designed, allow LLMs to adapt to new tasks by simply altering the API configuration.\nDespite the significant potential and flexibility the tool-augmented LLMs express on multimodal tasks, their quantitative performance of multimodal tasks when integrated with API tools still remains insufficiently examined. Recent studies are very inadequate and merely focus on and gleaning insights from open-domain tasks such as mathematical computations, database searches, and graph reasoning (Li et al., 2023b;Zhuang et al., 2023;Qiu et al., 2023). This gap in leveraging API tools to achieve multimodal tasks can be attributed to two primary obstacles: 1) the unavailability of high-quality APIprompt datasets, and 2) the absence of established metrics specifically designed to evaluate the efficacy of LLMs in multimodal tasks.\nIn this paper, we address the aforementioned challenges by constructing a large-scale API instruction-function dataset that provides API functions and evaluates LLMs' multimodal performance, called MultiAPI. Based on the Hugging-Face dataset (Patil et al., 2023), we extracted models with high-quality descriptions across 9 domains along with their instructions. These models were initially encapsulated as API functions using Chat-GPT prompts, followed by meticulous human refinements to ensure executability and consistent argumentation across domains. 
This process creates the MultiAPI benchmark dataset introduced in this paper, with 235 functional API calls and 2,038 instructions.
We subsequently conducted experiments on both API-based LLMs and open-sourced LLMs, exploring strategies that were previously proven effective in improving LLM prompting, such as in-context learning (Brown et al., 2020) and chain-of-thought (Wei et al., 2023). Our investigation spanned single-step API call (only 1 API is required to resolve the instruction) and sequential API chain (multiple APIs are required) settings, evaluating 4 intuitive aspects: 1) invocation assessment; 2) domain match; 3) function match; and 4) argument match. Results revealed that while models accurately make decisions to invoke API functions, they often struggle to select the correct domain and function and to generate the right arguments. Furthermore, we surprisingly noticed that adding auxiliary context could harm the API call performance. Extensive error analyses were conducted to understand the potential cause of such errors, leading us to propose two simple yet effective solutions to mitigate these errors. The experimental results validate the effectiveness of our method.
We summarize the contributions of this paper as follows:
• We constructed a pioneering large-scale multimodal instruction-function benchmark dataset, MultiAPI, with 235 executable API functions and 2,038 prompts. This data underwent rigorous human refinement to ensure its robustness and relevance in the context of LLM evaluations.
• Our experimental framework comprehensively assesses both API-based and open-sourced LLMs, revealing their strengths in API call decisions but highlighting challenges in domain and function selection, as well as argument generation.
• A thorough error analysis leads us to mitigate these errors and set a new direction for future LLM research within the multimodal context.
2 Related Work" }, { "figure_ref": [], "heading": "Evaluation of Large Language Models", "publication_ref": [ "b10", "b5", "b7" ], "table_ref": [], "text": "Performance evaluation of LLMs has become a particularly prominent field since the introduction of ChatGPT, providing valuable insights for enhancing future model iterations and assisting the industry in developing more resilient applications. Extensive research has been undertaken to assess the competencies of LLMs (Yin et al., 2023a;Yang et al., 2023;Laskar et al., 2023;Zhang et al., 2023d). These works demonstrated that LLMs achieve near-human performance on open-domain tasks such as mathematics, coding, law, and psychology. However, their proficiency with tool use has not been thoroughly explored. Li et al. (2023b) introduced a benchmark for assessing LLMs' tool-use proficiency through a set of APIs. However, the number of APIs in this benchmark is constrained by its reliance on human implementation, and it primarily evaluates LLMs on general tasks like setting alarms or scheduling meetings.
In contrast, our study pivots to evaluate LLMs' ability to handle multimodal tasks via the use of tool APIs. We have harnessed ChatGPT's code generation capabilities based on the provided code template, followed by meticulous human refinement, to construct MultiAPI, a high-quality and large-scale multimodal API dataset. This novel dataset enables us to delve into the multimodal task performance of LLMs, marking a significant advancement in the field."
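To illustrate how an API-based LLM might see such tools in the experiments described above, the sketch below builds a tool description in the JSON-schema "tools" convention used by several chat APIs. This is an assumed interface, not the paper's exact prompt format, and the function name, description, and argument set are hypothetical placeholders that follow the standardized-argument idea described later in the benchmark section.

```python
# Hypothetical example of exposing one MultiAPI-style tool to an API-based LLM as a
# function/tool schema; nothing here is copied from the released dataset.
import json

text_to_image_tool = {
    "type": "function",
    "function": {
        "name": "text_to_image_stable_diffusion",
        "description": "Generate an image from a natural-language prompt and save it to disk.",
        "parameters": {
            "type": "object",
            "properties": {
                "prompt": {"type": "string", "description": "What the image should depict."},
                "output_path": {"type": "string", "description": "Where to save the image."},
            },
            "required": ["prompt", "output_path"],
        },
    },
}

if __name__ == "__main__":
    # This schema would be supplied in the chat request's tool list; the model's reply
    # then either answers directly (no invocation) or names a function plus JSON arguments.
    print(json.dumps(text_to_image_tool, indent=2))
```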
}, { "figure_ref": [], "heading": "Large Language Model Augmentation", "publication_ref": [ "b21", "b0", "b25", "b13", "b11", "b18", "b33", "b20", "b19" ], "table_ref": [], "text": "Although large language models recently demonstrated superior zero-shot language understanding (OpenAI, 2023;Touvron et al., 2023;Zhang et al., 2023b) capability, the task scope they could handle is highly tethered with their pretraining data. To adapt LLMs to diverse inputs and tasks, recent studies have primarily followed two avenues. The first involves joint fine-tuning of LLMs with pertinent neural network components. In this approach, the hidden representations of novel modalities are aligned with the LLM's latent space (Awais et al., 2023;Wu et al., 2023;Yin et al., 2023a;Patil et al., 2023;Lyu et al., 2023). The second avenue in- tegrates tools such as API functions as external modules (Schick et al., 2023;Zhang, 2023;Song et al., 2023). The strategy offers enhanced flexibility, allowing API functions to be seamlessly incorporated into textual contexts, irrespective of whether the LLM is API-centric or open-sourced.\nSeveral studies have examined combining large language models with external resources. Shen et al. (2023) notably linked ChatGPT with Hug-gingFace, enhancing its decision-making range. However, this integration struggled with producing precise code due to inconsistencies in the ground truth code and insufficient documentation. In our study, we mitigated these limitations by utilizing human annotators to integrate each HuggingFace model as a function call. We also standardized function arguments within the same domain, simplifying the evaluation process and reducing the complexity of model interactions during assessments." }, { "figure_ref": [], "heading": "MultiAPI Benchmark Dataset", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_1", "fig_1" ], "heading": "Data Collection", "publication_ref": [ "b22" ], "table_ref": [ "tab_0", "tab_0" ], "text": "In this section, we detail the process of constructing MultiAPI leveraging the HuggingFace instruction-code dataset introduced by Patil et al. ( 2023). The original dataset consists of a model definition file including model descriptions along with its corresponding example code template; and an instruction-code pair file linking models to selfgenerated instructions (Wang et al., 2023).\nWe first filtered out all the models that could potentially assist multimodal tasks from 9 unique domains, as shown in Table 1, and their corresponding instruction-code pairs. The subsequent data processing comprises four steps: 1) Description Verification, 2) Model Encapsulation, 3) Argument Standardization, and 4) Ground Truth Transformation. The primary procedures are illustrated in Figure 1. It's noteworthy that the first three steps are applied to the model definition and the last is applied to the instruction-code pair.\nDescription Verification: While most models come equipped with a description field that provides the basic information, the quality of these descriptions varies widely, largely depending on community contributors. Previous studies verified that a precise and detailed model description plays a critical role in aiding the model to identify the appropriate tool based on user specifications (Hsieh et al., 2023). Such specificity could also bolster the accuracy and reliability of evaluation outcomes. 
To this end, we engaged two human annotators with expertise in NLP to manually review all descriptions. They were tasked with removing the model whose descriptions only offered a broad overview, lacking a delineated use case, as depicted in lower Figure 1 (a).\nModel Encapsulation: The primary utility of the original dataset was to facilitate the training or finetuning of LLMs to autonomously generate the API call code, contingent on retrieval results. Consequently, models were invoked using the example_code field present in the dataset, as illustrated in the upper section of Figure 1(b). To adapt the existing models from HuggingFace to the API function-calling framework, we prompt gpt-3.5-turbo to transform the example code template into an API function and subsequently extract the potential arguments. In addition, we identify and include the import statements inside the function to ensure the function is independently executable.\nArgument Standardization: Upon encapsulating the functions, we observe that while gpt-3.5turbo adeptly transformed essential codes into function form, it exhibited challenges in accurately extracting function arguments. Further analysis suggests that the variation in argument names and the number of arguments pose a significant challenge (Yin et al., 2023b), potentially introducing the risk of hallucination, ambiguity and complicating the parsing process during argument evaluations. To address the aforementioned discrepancies, we introduce an argument standardization process. Consider a function set F d within a given domain d. We define a standardized argument set A d by manually reviewing all functions within d to determine the commonly recurring arguments intrinsic to the do-main's functionality. As a result, for any functions within d, we require:\n∀f 1 , f 2 ∈ F d , args(f 1 ) = args(f 2 ) = A d (1)\nFor instance, within the Text to Image domain, functions generate images in response to user prompts. Consequently, the indispensable arguments for this domain are prompt and output_path. The detailed mappings between domains and required arguments are listed in Table 1.\nUsing this collated reference table, human experts are introduced to refine the generated functions ensuring: 1) Incorporation of the minimum required arguments, named in line with the reference table . 2) Listing other arguments as default arguments with default values. 3) Ensuring the function remains executable within Python environments.\nGround Truth Transformation: As shown in the upper segment of Figure 1(c), instruction-code pairs represent specific instructions with their corresponding code blocks. To maintain consistency with our previous steps, we use a similar humansupervised approach with gpt-3.5-turbo to transform these pairs into instruction-function pairs. The results are depicted in the bottom code block of Figure 1(c). This ensures a streamlined and consistent framework for both model definitions and their corresponding instructions." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b16", "b17", "b7", "b9", "b4" ], "table_ref": [ "tab_0" ], "text": "The outputs of multimodal tasks are contingent on varying input modalities, leading to unpredictable results even with identical inputs (Rombach et al., 2022;Saharia et al., 2022), which makes direct evaluation on output unreliable. Moreover, crafting robust evaluation metrics for each individual domain poses significant challenges for future versatility. 
However, benefiting from diligent data col-lection steps, we bypass these issues by assessing the LLM's tool usage ability based on the function calls selected. In function-calling context, user's requirement would be fulfilled if the model correctly selects the appropriate function and fills in the accurate arguments. This approach streamlines the evaluation into a universal domain-agnostic textmatching task with some necessary adaptions.\nInspired by Li et al. (2023b), we design a stepwise, four-level evaluation framework for a comprehensive assessment of LLMs' tool usage in multimodal tasks. This framework includes:\n1. Invocation Assessment: Tests if LLMs can discern when a user instruction necessitates an auxiliary function.\n2. Domain Match: Evaluates the LLMs' ability to match the function's domain to the ground truth by leveraging domain annotations in our dataset.\n3. Function Match: Conducts a detailed assessment to confirm whether the LLM correctly identifies the specific tool within the matched domain via their descriptions.\n4. Argument Match: Verifies the LLM's proficiency in translating user instructions into precise arguments for successful function invocation. The distinction in evaluating multimodal task functions lies in the API arguments. We classify arguments defined in Table 1 into two distinct categories: exact-match arguments and concept-match arguments. Exact-match arguments, such as file paths, demand precise, verbatim replication. Any deviation in these arguments can impede the successful invocation of the function. On the other hand, concept-match arguments, like generative prompts, offer more flexibility in wording, though they must maintain fidelity in conveying the intended meaning. Inaccuracies in generating concept-match arguments, while not hindering the function invocation, can lead to outputs that diverge from the expected results.\nIn our experiments, exact-match arguments undergo text matching for exact path alignment, while concept-match prompts are semantically evaluated using ROUGE F-scores (Lin, 2004) and cosine similarity (Lahitani et al., 2016) for both statistical and vectorized analysis." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct extensive experiments on our proposed MutilAPI benchmark to assess the capabilities of LLMs in handling multimodal tasks through tool integration. Our evaluation spans both API-based models and open-source models.\nFor each model, we implement a variety of prompt configurations, aiming to identify the most effective prompt settings specifically tailored for multimodal tasks." }, { "figure_ref": [], "heading": "Task Formulation", "publication_ref": [], "table_ref": [], "text": "Given a multimodal task instruction i, the model's objective is to generate an API function f from a set of available functions F and its corresponding set of arguments A f . Formally, for f ∈ F the generation process can be represented as:\np(f, A|i, F ) = p(f |i, F ) × p(A|f, i) (2)" }, { "figure_ref": [], "heading": "Models and Prompt Configurations", "publication_ref": [ "b21", "b32", "b23", "b1", "b24" ], "table_ref": [], "text": "Current LLMs can be categorized into API-based models and open-sourced models. Our evaluation performs on both categories. For API-based models, we use gpt-3.5-turbo-0613 as the candidate.\nFor open-sourced models, we leverage Llama2-13B (Touvron et al., 2023) provided by Hugging-Face2 . 
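To make this setup concrete, the sketch below shows what an encapsulated, argument-standardized MultiAPI function can look like, and how a candidate function set F and a user instruction i might be packed into a chat prompt from which the model is asked to emit a single function call with its arguments, mirroring the factorisation p(f, A|i, F). The function body, model identifier, prompt wording and helper names are illustrative assumptions rather than the exact templates used in the experiments.

```python
import json

# A hypothetical encapsulated function from the Text to Image domain. After argument
# standardization, every function in this domain exposes exactly (prompt, output_path);
# any remaining model-specific options would be kept as default arguments.
def text_to_image_stable_diffusion(prompt: str, output_path: str) -> str:
    """Generate an image from a text prompt and save it to output_path."""
    from diffusers import StableDiffusionPipeline  # imports live inside the function
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
    image = pipe(prompt).images[0]
    image.save(output_path)
    return output_path

def build_function_calling_prompt(functions: list, instruction: str) -> list:
    """Pack the candidate function set F and the instruction i into chat messages.
    The model is asked to choose one f in F and fill its argument set A_f."""
    listing = "\n".join(
        f"- {f['name']}({', '.join(f['args'])}): {f['description']}" for f in functions
    )
    system = (
        "You may call exactly one of the following API functions if the request requires it. "
        "Answer with a JSON object {\"function\": ..., \"arguments\": {...}}, "
        "or with plain text if no function call is needed.\n" + listing
    )
    return [{"role": "system", "content": system},
            {"role": "user", "content": instruction}]

candidate_functions = [{
    "name": "text_to_image_stable_diffusion",
    "args": ["prompt", "output_path"],
    "description": "Generate an image from a text prompt (Text to Image domain).",
}]
messages = build_function_calling_prompt(
    candidate_functions,
    "Draw a watercolor painting of a lighthouse at dusk and save it to out.png",
)
print(json.dumps(messages, indent=2))
```

Any of the prompt configurations discussed below can then be layered on top of such a base prompt.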
Furthermore, previous research proved prompt configurations can significantly affect the performance of LLMs (Zhang et al., 2023c;Wei et al., 2022). To investigate whether these configurations remain effective on our task. We implemented the following prompt configurations in our experiments:\nIn-context Learning: Previous research demonstrated the few-shot performance of language models can be significantly boosted by providing exemplar input-ground truth pairs (Brown et al., 2020).\nIn our in-context setting, we provide 2 instructionfunction call pairs to assist the model in reasoning the predictions.\nChain-of-Thought: Chain-of-Thought (Wei et al., 2023) adapts the concept of divide-andconquer. It allows LLMs to address problems in a step-by-step fashion, by deconstructing the primary task into smaller, manageable queries. This approach not only simplifies the task but also bolsters the reasoning capabilities of the models.\nWe apply this framework by breaking down the task into 4 questions aligned with our evaluation metrics introduced in 3.2.\nFunction Calling: Recently introduced by Ope-nAI 3 , Function Calling is a feature tailored for GPT models. The models are finetuned on a specialized function-call dataset. The intent is to enable the models to better recognize scenarios necessitating function calls, thereby facilitating the generation of more structured outputs." }, { "figure_ref": [], "heading": "Context Token Limitation", "publication_ref": [], "table_ref": [], "text": "Given the constraint of a maximum context window of 4,096 tokens for those LLMs used in our experiments, we face a limitation in the number of functions that can be included within this token budget. Our calculations suggest that approximately 25 functions can be accommodated. To effectively manage this constraint, we initially shuffle the entire dataset. Subsequently, we divide it into 10 segments, each containing 25 functions, except for the final segment which may vary in size due to the distribution of the remaining functions. For each experiment configuration, we conduct separate trials on each of these 10 splits. The overall experimental results are then derived by calculating the average across these 10 segments." }, { "figure_ref": [], "heading": "Function Invocation", "publication_ref": [], "table_ref": [], "text": "In this section, we will focus on the function invocation aspect of LLMs to evaluate their ability to understand user instructions and locate the proper tool function. The results are demonstrated in Table 2.\nLLMs face challenges in multimodal domain selection: By observing across columns, we could conclude both GPT-3.5 and Llama models exhibit commendable accuracy in determining the necessity of function invocation based on user instructions. However, a significant drop in performance occurs when it comes to identifying the specific domain of multimodal tasks and selecting the precise function to effectively address these tasks. This finding implies that, while LLMs possess robust common-sense knowledge, they still struggle with accurately comprehending the nuances and definitions unique to each domain of multimodal tasks. 
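For reference, the segment-and-average protocol from the Context Token Limitation subsection, which underlies the scores discussed in this section, can be sketched as follows. The scoring callable and the toy data are placeholders for an actual evaluation run over one 25-function split.

```python
import random
from statistics import mean

def evaluate_in_segments(functions, evaluate_segment, segment_size=25, seed=0):
    """Shuffle the function pool, split it into segments of `segment_size` functions
    (the final segment may be smaller), score each segment with `evaluate_segment`,
    and report the average over all segments."""
    pool = list(functions)
    random.Random(seed).shuffle(pool)
    segments = [pool[i:i + segment_size] for i in range(0, len(pool), segment_size)]
    return mean(evaluate_segment(segment) for segment in segments)

# Toy illustration: 235 function names yield 10 segments; the metric is a stand-in
# for invocation / domain / function / argument accuracy on one split.
function_names = [f"fn_{i:03d}" for i in range(235)]
toy_metric = lambda segment: len(segment) / 25
print(round(evaluate_in_segments(function_names, toy_metric), 3))
```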
Function Calling enhancement performance varied by prompt configuration: Upon comparing the results in the first and second blocks of Table 2, it is evident that enabling Function Calling significantly enhances performance in the GPT-3.5 and GPT-3.5-ict-cot configurations, while it appears to slightly impede performance in settings where only a single prompt configuration is employed. This observation could potentially be attributed to the complex interplay between the Function Calling mechanism and the prompt configurations. Such findings underscore the importance of carefully considering the compatibility of various features and configurations when augmenting LLMs for specific tasks.\nIn-context learning impairs multimodal function invocation: Our analysis of the effectiveness of prompt configurations, conducted through a cross-row examination within each block, revealed consistent patterns across both GPT-3.5 and Llama models. A prominent observation is that the incorporation of contextual elements tends to negatively impact performance, a trend that is especially pronounced with the introduction of in-context learning. This significant impairment in performance is contrary to the widespread belief that providing reference context generally improves model performance across a variety of tasks. Such a result suggests that in multimodal function invocation scenarios, the addition of contextual information might inadvertently introduce complexity or irrelevant data, thus diminishing the model's efficiency. This counterintuitive finding points to a need for deeper inquiry into how and why the incorporation of contextual elements in LLMs affects their function invocation capabilities, challenging existing assumptions and opening new avenues for research in the field." }, { "figure_ref": [], "heading": "Argument Generation", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "The capabilities of LLMs in generating arguments for multimodal tasks are detailed in Table 3. It's noteworthy that Llama was excluded from this analysis due to its inferior performance in function locating. The results indicate a significant challenge for GPT models in accurately generating both exact-match and concept-match arguments based on user instructions. The success rate for matching exact-match arguments falls below 50%, and the semantic similarity of the generated conceptmatch arguments is similarly subpar. This suggests that argument generation set a more critical bottleneck hindering LLMs' ability to effectively invoke multimodal functions, compared to the function invocation ability in the previous sections. Additionally, a distinct observation from the data is that, while the exact-match argument accuracy aligns with previous insights, the inclusion of additional context appears to positively impact the generation of concept-match arguments. This highlights a nuanced aspect of LLM performance, where contextual information plays a more beneficial role in generating arguments that rely on semantic rather than verbatim accuracy. This finding suggests potential areas for optimization in LLMs, particularly in enhancing their ability to handle concept-match arguments in the context of multimodal task execution while reducing the performance impairment on the exact-match arguments. 
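A minimal sketch of the two argument checks used in this evaluation: exact-match arguments are compared verbatim, while concept-match arguments are scored with ROUGE F-measures and a cosine similarity. The `rouge_score` package is used here as a convenient stand-in implementation, and the bag-of-words vectorisation for the cosine score is an assumption, since the exact representation is not spelled out above.

```python
from collections import Counter
from math import sqrt

from rouge_score import rouge_scorer  # pip install rouge-score

_scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

def exact_match(pred: str, gold: str) -> bool:
    """Exact-match arguments (e.g. file paths) must be reproduced verbatim."""
    return pred.strip() == gold.strip()

def _bow_cosine(a: str, b: str) -> float:
    """Cosine similarity over simple bag-of-words counts (an illustrative choice)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def concept_match(pred: str, gold: str) -> dict:
    """Concept-match arguments (e.g. generative prompts) are scored semantically."""
    scores = {name: s.fmeasure for name, s in _scorer.score(gold, pred).items()}
    scores["cosine"] = _bow_cosine(pred, gold)
    return scores

print(exact_match("./outputs/cat.png", "./outputs/cat.png"))
print(concept_match("a watercolor lighthouse at dusk",
                    "watercolor painting of a lighthouse at dusk"))
```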
" }, { "figure_ref": [], "heading": "Sequential API Invocation", "publication_ref": [], "table_ref": [], "text": "In real-world applications, user instructions frequently necessitate the invocation of multiple APIs for resolution. Particularly in our multimodal scenario, this requires LLMs to possess a thorough understanding of each modality and associated tasks, as well as the interaction between modalities. Analyzing models' capabilities in sequential API invocation is more representative of real-life applications and offers valuable insights for application development. To address this need, we introduce MultiAPI-SEQ, a dataset specifically designed for assessing sequential function invocation. This dataset has been carefully curated by human experts who have manually crafted 30 distinct instructions. Each of these instructions necessitates the sequential invocation of two functions from the MultiAPI dataset. By limiting each instruction to require just two functions, we aim to simplify the analysis process while still effectively evaluating the models' ability to handle multi-step task execution.\nAs shown in Table 4, the analysis of GPT-3.5 and GPT-3.5-fc models demonstrate significant inconsistency in maintaining performance across sequential tasks. Both models exhibit high invocation accuracy initially, yet GPT-3.5-fc's accuracy notably diminishes during the second task. This indicates that while fine-tuning may enhance singlefunction call performance, it could adversely affect task planning in sequential API call tasks. This trend toward relying on built-in parametric knowledge instead of external tools raises concerns about the potential for hallucination. Additionally, both models show a reduction in domain and function accuracy, with GPT-3.5-fc's argument accuracy notably less affected, implying a relatively stable understanding of argument relevance. The linguistic similarity metrics across functionalities indicate that GPT-3.5 demonstrates more consistent performance, hinting at its robustness in generating contextually appropriate responses throughout the task sequence." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "To investigate the potential causes of LLMs' underperformance, we performed a detailed error analysis at the domain and function levels for GPT-3.5-fc." }, { "figure_ref": [ "fig_2" ], "heading": "Domain Mismatch", "publication_ref": [], "table_ref": [], "text": "Section 4.4 suggests LLMs struggle to differentiate multimodal task domains. We analyze model errors to identify these shortcomings. We summarize the result as a misclassification network indicating LLM's domain confusion in Figure 2.\nFor visual analysis APIs, the model demonstrates an inclination to misinterpret classification and segmentation tasks as object detection. Besides, it also frequently fails the identification between image classification and image segmentation. This pattern indicates a fundamental challenge in the LLM's ability to identify domains based on user instruction, particularly in discerning whether the analysis should encompass the entire image or focus on the specific content within the image. 
The asymmetries in bidirectional error between these nodes further suggest that LLM bias towards local rather than global image analysis.\nAdditionally, with image generation APIs, the model often struggles to determine whether a task is conditional or unconditional, commonly misidentifying text-to-image and image-to-image tasks as unconditional image generation. It also faces challenges in recognizing the input modality for conditioned generation tasks, as evidenced by errors between image-to-image and text-to-image tasks. Those two observations may suggest that LLMs lack an understanding of different modalities, possibly because they are predominantly trained on textual data." }, { "figure_ref": [ "fig_2" ], "heading": "Function Mismatch", "publication_ref": [], "table_ref": [], "text": "To assess the LLMs' function selection accuracy, we randomly sampled 10 functions and corresponding instructions from each domain and prompted the model to choose the most appropriate function within that domain. As shown in Figure 2, the histogram reflecting function accuracy across domains, demonstrates the uneven function selection proficiency of LLMs in handling different multimodal tasks. Domains with more straightforward, visually dense tasks like image-to-image and object detection demonstrate relatively high accuracy, indicating that models perform better with tasks requiring less complex language-to-function mapping. In contrast, the low accuracy in depth estimation' and video classification points to the models' limitations in understanding and translating more abstract or dynamic task requirements into accurate function calls. " }, { "figure_ref": [], "heading": "Improvement Framework", "publication_ref": [ "b10", "b32" ], "table_ref": [], "text": "Our analysis in Sections 4 and 5 reveals that LLMs primarily struggle with distinguishing domain differences and modalities, with argument generation as a significant bottleneck. To mitigate these challenges, we propose two intuitive yet effective solutions: domain description prompting and argument revision. Domain description prompting involves adding a sentence to the model's system prompt to clearly define each domain. In addition, in visual analysis tasks, we specify whether the domain conducts global or local image analysis.\nBuilding on research showing LLMs' effectiveness in evaluation and revision tasks (Liu et al., 2023;Zhang et al., 2023c), we employ a secondary LLM as an argument editor. This LLM checks and revises argument predictions to ensure they align with user instructions, reducing task complexity and the context length for the primary LLM.\nTo avoid the noise arising from complex interactions between function calling feature and input context, we conducted our experiments using the GPT-3.5 model. Table 5 illustrates that our approach enhanced performance across all evaluation metrics. Notably, there was a significant improvement in domain accuracy, argument exact matching, and semantic evaluation. This significant improvement not only affirms the effectiveness of our approach but also strongly validates the accuracy of our analysis. Furthermore, we observed a notable enhancement in function accuracy, attributed to the incorporation of domain descriptions." 
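The two mitigations can be read as a small two-stage pipeline: one clarifying sentence per domain is prepended to the system prompt, and a secondary model pass checks and revises the predicted arguments against the instruction. The domain descriptions, prompt wording and the `call_llm` helper below are placeholders rather than the exact templates used in the experiments.

```python
import json

# Hypothetical domain descriptions; for visual-analysis domains the sentence also states
# whether the task performs global or local image analysis, as proposed above.
DOMAIN_DESCRIPTIONS = {
    "Image Classification": "Assign one label describing the entire image (global analysis).",
    "Object Detection": "Locate and label individual objects inside the image (local analysis).",
    "Text to Image": "Generate a new image conditioned on a text prompt.",
}

def build_system_prompt(base_prompt: str, domains: list) -> str:
    """Domain description prompting: append one clarifying sentence per candidate domain."""
    lines = [f"{d}: {DOMAIN_DESCRIPTIONS.get(d, '')}" for d in domains]
    return base_prompt + "\nDomain definitions:\n" + "\n".join(lines)

def revise_arguments(call_llm, instruction: str, predicted_call: dict) -> dict:
    """Argument revision: a secondary LLM checks the predicted arguments against the
    instruction and returns corrected arguments. `call_llm` stands in for any chat
    client that returns a JSON string."""
    prompt = (
        f"Instruction: {instruction}\n"
        f"Predicted function call: {predicted_call}\n"
        "Return a JSON object with arguments that faithfully reflect the instruction."
    )
    return {**predicted_call, "arguments": json.loads(call_llm(prompt))}

# Dummy run with a canned editor response.
dummy_editor = lambda _prompt: '{"prompt": "a lighthouse at dusk", "output_path": "out.png"}'
print(revise_arguments(dummy_editor,
                       "Paint a lighthouse at dusk and save it as out.png",
                       {"function": "text_to_image_stable_diffusion", "arguments": {}}))
```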
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we presented a comprehensive study on the application of Large Language Models to multimodal tasks with external API functions, us-ing the newly introduced MultiAPI dataset. Our findings highlight the capabilities and limitations of LLMs in function calling. We revealed a significant discrepancy between the models' ability to recognize the need for function calls and their accuracy in selecting appropriate domains, functions, and arguments. This insight led us to propose a novel approach focusing on domain description prompting and argument revision, which demonstrated improved performance in addressing these challenges. Our work contributes to the field by introducing the first large-scale multimodal instructionfunction benchmark dataset and providing a detailed analysis of LLMs in multimodal task execution. We hope our dataset and findings could assist the development of tool-augmented LLMs and more sophisticated models for complex realworld applications." } ]
The proliferation of Large Language Models like ChatGPT has significantly advanced language understanding and generation, impacting a broad spectrum of applications. However, these models predominantly excel in text-based tasks, overlooking the complexity of real-world multimodal information. This study introduces MultiAPI, a pioneering, comprehensive, large-scale API benchmark dataset aimed at expanding LLMs' proficiency in multimodal contexts. Developed collaboratively with ChatGPT and refined by human annotators, MultiAPI consists of 235 diverse API calls and 2,038 contextual prompts, offering a unique platform for evaluating tool-augmented LLMs on multimodal tasks. Through comprehensive experiments, our findings reveal that while LLMs demonstrate proficiency in API call decision-making, they face challenges in domain identification, function selection, and argument generation. Moreover, we surprisingly find that auxiliary context can actually impair performance. An in-depth error analysis paves the way for a new paradigm to address these challenges, suggesting a potential direction for future LLM research.
Beyond Text: Unveiling Multimodal Proficiency of Large Language Models with MultiAPI Benchmark
[ { "figure_caption": "/ddpm-celebahq-256').to('cuda') image = pipeline().images[0] image.save(output_path) return os.path.abspath(output_path)", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: Workflow for adapting the HuggingFace dataset for MultiAPI collaboration with GPT model: (a) the Description Verification process where model descriptions are assessed for precision and detail. (b) the Model Encapsulation and Argument Standardization procedure, transitioning from an 'example code' format to an argumentstandardized Python function and ensuring the function is executable. (c) the Ground Truth Transformation, showing the conversion of instruction-code pairs into instruction-function pairs.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Domain misclassification network. Nodes in this graph represent distinct domains, with directed arrows illustrating instances where the model incorrectly applies a function from domain b intended for an instruction in domain a. The thickness of the arrows indicates the frequency of these errors, with thicker lines showing more common misclassifications.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Function accuracy distribution for each domain.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "The domains of MultiAPI and their required arguments. # Functions represents the number of functions that each domain contains.", "figure_data": "DomainsRequired Arguments# FunctionsText to Image(prompt: str, output_path:str)11Depth Estimation(image_path:str, output_path:str)10Object Detection(image_path: str)30Video Classification (video_path:str)23Image Classification (image_path: str)48Image to Text(image_path: str)28Image Generation(output_path:str)33Image Segmentation (image_path: str, prompt:str)29Image to Image(control_image_path:str, output_image_path:str)23", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": ": Comparative evaluation of GPT-3.5 model con-figurations in argument generation. The first sectionshows the match accuracy of exact-match argumentswhile the second demonstrate the evaluation metrics ofconcept-match parameters. R1/2/L represents ROUGE-1/2/L scores respectively, and Sim represents cosinesimilarity.", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" } ]
Xiao Liu; Jianfeng Lin; Jiawei Zhang
[ { "authors": "Muhammad Awais; Muzammal Naseer; Salman Khan; Rao Muhammad Anwer; Hisham Cholakkal; Mubarak Shah; Ming-Hsuan Yang; Fahad Shahbaz Khan", "journal": "", "ref_id": "b0", "title": "Foundational models defining a new era in vision: A survey and outlook", "year": "2023" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Sébastien Bubeck; Varun Chandrasekaran; Ronen Eldan; Johannes Gehrke; Eric Horvitz; Ece Kamar; Peter Lee; Yin Tat Lee; Yuanzhi Li; Scott Lundberg; Harsha Nori; Hamid Palangi; Marco Tulio Ribeiro; Yi Zhang", "journal": "", "ref_id": "b2", "title": "Sparks of artificial general intelligence: Early experiments with gpt-4", "year": "2023" }, { "authors": "Cheng-Yu Hsieh; Si-An Chen; Chun-Liang Li; Yasuhisa Fujii; Alexander Ratner; Chen-Yu Lee; Ranjay Krishna; Tomas Pfister", "journal": "", "ref_id": "b3", "title": "Tool documentation enables zero-shot tool-usage with large language models", "year": "2023" }, { "authors": "Alfirna Rizqi Lahitani; Adhistya ; Erna Permanasari; Noor Akhmad Setiawan", "journal": "", "ref_id": "b4", "title": "Cosine similarity to determine similarity measure: Study case in online essay assessment", "year": "2016" }, { "authors": "Md Tahmid Rahman Laskar; M Saiful Bari; Mizanur Rahman; Md Amran Hossen Bhuiyan; Shafiq Joty; Jimmy Huang", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "A systematic study and comprehensive evaluation of ChatGPT on benchmark datasets", "year": "2023" }, { "authors": "Chunyuan Li; Zhe Gan; Zhengyuan Yang; Jianwei Yang; Linjie Li; Lijuan Wang; Jianfeng Gao", "journal": "", "ref_id": "b6", "title": "Multimodal foundation models: From specialists to general-purpose assistants", "year": "2023" }, { "authors": "Minghao Li; Yingxiu Zhao; Bowen Yu; Feifan Song; Hangyu Li; Haiyang Yu; Zhoujun Li; Fei Huang; Yongbin Li", "journal": "", "ref_id": "b7", "title": "Api-bank: A comprehensive benchmark for tool-augmented llms", "year": "2023" }, { "authors": "Zihao Li; Zhuoran Yang; Mengdi Wang", "journal": "", "ref_id": "b8", "title": "Reinforcement learning with human feedback: Learning dynamic choices via pessimism", "year": "2023" }, { "authors": "Chin-Yew Lin", "journal": "", "ref_id": "b9", "title": "Rouge: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Yang Liu; Dan Iter; Yichong Xu; Shuohang Wang; Ruochen Xu; Chenguang Zhu", "journal": "", "ref_id": "b10", "title": "Gpteval: Nlg evaluation using gpt-4 with better human alignment", "year": "2023" }, { "authors": "Chenyang Lyu; Minghao Wu; Longyue Wang; Xinting Huang; Bingshuai Liu; Zefeng Du; Shuming Shi; Zhaopeng Tu", "journal": "", "ref_id": "b11", "title": "Macaw-llm: Multi-modal language modeling with image, audio, video, and text integration", "year": "2023" }, { "authors": "Muhammad Maaz; Hanoona Rasheed; Salman Khan; Fahad Shahbaz Khan", "journal": "OpenAI", "ref_id": "b12", "title": "Video-chatgpt: Towards detailed video understanding via large vision and language models", "year": "2023" }, { "authors": "G Shishir; Tianjun Patil; Xin Zhang; Joseph E Wang; Gonzalez", "journal": "", "ref_id": "b13", "title": "Gorilla: Large language model connected with massive apis", "year": "2023" }, { "authors": "Yujia Qin; Shihao 
Liang; Yining Ye; Kunlun Zhu; Lan Yan; Yaxi Lu; Yankai Lin; Xin Cong; Xiangru Tang; Bill Qian", "journal": "", "ref_id": "b14", "title": "Toolllm: Facilitating large language models to master 16000+ real-world apis", "year": "2023" }, { "authors": "Jielin Qiu; Jiacheng Zhu; William Han; Aditesh Kumar; Karthik Mittal; Claire Jin; Zhengyuan Yang; Linjie Li; Jianfeng Wang; Bo Li; Ding Zhao; Lijuan Wang", "journal": "", "ref_id": "b15", "title": "Multisum: A dataset for multimodal summarization and thumbnail generation of videos", "year": "2023" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b16", "title": "Highresolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b17", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Timo Schick; Jane Dwivedi-Yu; Roberto Dessì; Roberta Raileanu; Maria Lomeli; Luke Zettlemoyer; Nicola Cancedda; Thomas Scialom", "journal": "", "ref_id": "b18", "title": "Toolformer: Language models can teach themselves to use tools", "year": "2023" }, { "authors": "Yongliang Shen; Kaitao Song; Xu Tan; Dongsheng Li; Weiming Lu; Yueting Zhuang", "journal": "", "ref_id": "b19", "title": "Hugginggpt: Solving ai tasks with chatgpt and its friends in hugging face", "year": "2023" }, { "authors": "Yifan Song; Weimin Xiong; Dawei Zhu; Cheng Li; Ke Wang; Ye Tian; Sujian Li", "journal": "", "ref_id": "b20", "title": "Restgpt: Connecting large language models with realworld applications via restful apis", "year": "2023" }, { "authors": "Hugo Touvron; Louis Martin; Kevin Stone; Peter Albert; Amjad Almahairi; Yasmine Babaei; Nikolay Bashlykov; Soumya Batra; Prajjwal Bhargava; Shruti Bhosale; Dan Bikel; Lukas Blecher; Cristian Canton Ferrer; Moya Chen; Guillem Cucurull; David Esiobu; Jude Fernandes; Jeremy Fu; Wenyin Fu; Brian Fuller; Cynthia Gao; Vedanuj Goswami; Naman Goyal; Anthony Hartshorn; Saghar Hosseini; Rui Hou; Hakan Inan; Marcin Kardas; Viktor Kerkez; Madian Khabsa; Isabel Kloumann; Artem Korenev; Punit Singh Koura; Marie-Anne Lachaux; Thibaut Lavril; Jenya Lee; Diana Liskovich; Yinghai Lu; Yuning Mao; Xavier Martinet; Todor Mihaylov; Pushkar Mishra; Igor Molybog; Yixin Nie; Andrew Poulton; Jeremy Reizenstein; Rashi Rungta; Kalyan Saladi; Alan Schelten; Ruan Silva; Eric Michael Smith; Ranjan Subramanian; Ellen Xiaoqing; Binh Tan; Ross Tang; Adina Taylor; Jian Williams; Puxin Xiang Kuan; Zheng Xu; Iliyan Yan; Yuchen Zarov; Angela Zhang; Melanie Fan; Sharan Kambadur; Aurelien Narang; Robert Rodriguez; Sergey Stojnic; Thomas Edunov; Scialom", "journal": "", "ref_id": "b21", "title": "Llama 2: Open foundation and finetuned chat models", "year": "2023" }, { "authors": "Yizhong Wang; Yeganeh Kordi; Swaroop Mishra; Alisa Liu; Noah A Smith; Daniel Khashabi; Hannaneh Hajishirzi", "journal": "", "ref_id": "b22", "title": "Self-instruct: Aligning language models with self-generated instructions", "year": "2023" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b23", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Jason Wei; Xuezhi Wang; 
Dale Schuurmans; Maarten Bosma; Brian Ichter; Fei Xia; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b24", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2023" }, { "authors": "Shengqiong Wu; Hao Fei; Leigang Qu; Wei Ji; Tat-Seng Chua", "journal": "", "ref_id": "b25", "title": "Next-gpt: Any-to-any multimodal llm", "year": "2023" }, { "authors": "Zhengyuan Yang; Linjie Li; Kevin Lin; Jianfeng Wang; Chung-Ching Lin; Zicheng Liu; Lijuan Wang", "journal": "", "ref_id": "b26", "title": "The dawn of lmms: Preliminary explorations with gpt-4v(ision)", "year": "2023" }, { "authors": "Shunyu Yao; Jeffrey Zhao; Dian Yu; Nan Du; Izhak Shafran; Karthik Narasimhan; Yuan Cao", "journal": "", "ref_id": "b27", "title": "React: Synergizing reasoning and acting in language models", "year": "2023" }, { "authors": "Shukang Yin; Chaoyou Fu; Sirui Zhao; Ke Li; Xing Sun; Tong Xu; Enhong Chen", "journal": "", "ref_id": "b28", "title": "A survey on multimodal large language models", "year": "2023" }, { "authors": "Shukang Yin; Chaoyou Fu; Sirui Zhao; Tong Xu; Hao Wang; Dianbo Sui; Yunhang Shen; Ke Li; Xing Sun; Enhong Chen", "journal": "", "ref_id": "b29", "title": "Woodpecker: Hallucination correction for multimodal large language models", "year": "2023" }, { "authors": "Hang Zhang; Xin Li; Lidong Bing", "journal": "", "ref_id": "b30", "title": "a. Videollama: An instruction-tuned audio-visual language model for video understanding", "year": "2023" }, { "authors": "Haopeng Zhang; Xiao Liu; Jiawei Zhang", "journal": "", "ref_id": "b31", "title": "Extractive summarization via chatgpt for faithful summary generation", "year": "2023" }, { "authors": "Haopeng Zhang; Xiao Liu; Jiawei Zhang", "journal": "", "ref_id": "b32", "title": "Summit: Iterative text summarization via chatgpt", "year": "2023" }, { "authors": "Jiawei Zhang", "journal": "", "ref_id": "b33", "title": "Graph-toolformer: To empower llms with graph reasoning ability via prompt augmented by chatgpt", "year": "2023" }, { "authors": "Muru Zhang; Ofir Press; William Merrill; Alisa Liu; Noah A Smith", "journal": "", "ref_id": "b34", "title": "How language model hallucinations can snowball", "year": "2023" }, { "authors": "Yuchen Zhuang; Yue Yu; Kuan Wang; Haotian Sun; Chao Zhang", "journal": "", "ref_id": "b35", "title": "Toolqa: A dataset for llm question answering with external tools", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 315.85, 276.38, 209.29, 10.77 ], "formula_id": "formula_0", "formula_text": "∀f 1 , f 2 ∈ F d , args(f 1 ) = args(f 2 ) = A d (1)" }, { "formula_coordinates": [ 5, 334.62, 336.38, 190.52, 9.81 ], "formula_id": "formula_1", "formula_text": "p(f, A|i, F ) = p(f |i, F ) × p(A|f, i) (2)" } ]
10.18653/v1/2020.blackboxnlp-1.14
2023-11-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b7", "b6", "b53", "b9", "b25", "b16", "b78", "b29", "b63", "b53", "b70", "b32", "b14", "b41", "b30", "b82", "b32", "b27", "b19" ], "table_ref": [], "text": "Human production in dialogue is influenced by many factors within the recent conversational history, leading speakers to repeat recently used lexical and structural elements of their own and their partners' language. These factors can involve conceptual pacts speakers make in order to establish common ground (Brennan and Clark, 1996), priming of lexical or syntactic cues which influences their subsequent re-use (Bock, 1986), and other social, interpersonal, cognitive, or neural influences (Pickering and Garrod, 2005;Danescu-Niculescu-Mizil et al., 2012;Hasson et al., 2012;Fusaroli et al., 2014).\nLanguage models, which are often used as the backbone of modern dialogue systems, should learn to attend to such factors in order to successfully mimic human linguistic behaviour in interaction. The pre-training data of these models typically contains fluent monologic language and little diverse dialogue data-and indeed one goal of building language generators is having them produce fluent language. A key aspect of achieving fluency is the avoidance of repetition: repetitions are typically thought of as evidence of degenerate production (Li et al., 2016a,b;Welleck et al., 2019;Holtzman et al., 2019).\nRecent advances in conversational language models, such as ChatGPT, demonstrate neural models' impressive performance in producing humanlike, proficient language. However, despite these advances, they are yet to display human-like communicative behaviour (i.e., adhering to Gricean maxims-the verbosity of such models can be high), and more nuanced, local, and partnerspecific interactions. Humans in dialogue use specific communication strategies which rely on repetition, and, in particular, these are local and partnerspecific (Schlangen, 2004;Pickering and Garrod, 2005;Sinclair and Fernández, 2023). We start from the desideratum that dialogue response generation models should also produce human-like levels of repetition. While excessive levels of repetition, designed to mimic alignment, can hinder naturalness (Isard et al., 2006;Foster et al., 2009), humans generally prefer generated dialogue that contains higher levels of alignment (Lopes et al., 2015;Hu et al., 2016), which also lead to more successful communication in human-human dialogue (Xi et al., 2021;Isard et al., 2006). Moreover, elements of alignment have been successfully incorporated in chat bots (Hoegen et al., 2019;Gao et al., 2019).\nInvestigating and understanding the mechanisms which drive more human-like patterns of repetition is critical to creating more human-like natural language generation and dialogue systems. We therefore study whether models reproduce the repetition behaviour humans display in spoken dialogue, and the extent to which this repetition is affected by contextual cues. In particular, we focus on locality effects, comparing repetition patterns of speakers with respect to their own, and their partner's language. We investigate language models' production behaviour, via measuring the extent to which they generate similar local repetitions to humans, and their comprehension behaviour, through measuring the salience they assign to a given portion of the local dialogue context when comprehending an utterance." 
}, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Human Repetition and Alignment", "publication_ref": [ "b53", "b7", "b53", "b20", "b79", "b18", "b74", "b7", "b49", "b28", "b57", "b83", "b67", "b69", "b22", "b16", "b77", "b15", "b71", "b45", "b67", "b68" ], "table_ref": [], "text": "Local repetition of shared language between speakers is one of many lower-level linguistic signals indicating the presence of interactive alignment between speakers (Pickering and Garrod, 2004a). It is thought to contribute to more successful communication (Pickering and Garrod, 2005) as it allows speakers to establish and maintain shared common ground (Brennan and Clark, 1996;Pickering and Garrod, 2004b). Developing local routinesshared sequences of repeated language (Pickering and Garrod, 2005;Garrod and Pickering, 2007)can also indicate mutual understanding between speakers (Wilkes- Gibbs and Clark, 1992;Gallotti et al., 2017). Producing repeated language in dialogue, either at a word level, or, in the case of routines, a construction level, is influenced by many factors in the local context. Speakers can be primed by language they have been recently exposed to, which may, in addition to the coordination and alignment factors mentioned above, play a role in the choice to repeat language locally (Tooley and Traxler, 2010). Priming effects can take place at multiple levels (from phonetic, lexical and syntactic to gesture, gaze and body posture), and are well attested in human dialogue (Brennan and Clark, 1996;Pardo, 2006;Reitter et al., 2006a;Holler and Wilkin, 2011;Rasenberg et al., 2020).\nAlignment and coordination between speakers in dialogue are often measured in terms of local linguistic 'alignment effects', i.e., whether adjacent utterances contain high linguistic overlap, and whether the incidence of repetitions decays with the distance between utterances (Reitter et al., 2006b;Xu and Reitter, 2015;Sinclair et al., 2018;Sinclair and Fernández, 2021;Giulianelli et al., 2022). Local shared construction use has been linked to more successful grounded communication (Fusaroli et al., 2014;Reitter andMoore, 2007, 2014;Ward and Litman, 2007;Friedberg et al., 2012;Sinclair and Schneider, 2021;Norman et al., 2022). Local alignment is also affected by whether a speaker repeats their own or their partner's language, both in humans and in human-agent dialogue settings (Re-itter et al., 2006b;Sinclair et al., 2018;Duplessis et al., 2017;Sinclair et al., 2019). We focus our attention on these short term, local repetition effects and structure our analyses accordingly." }, { "figure_ref": [], "heading": "Understanding the Behaviour of Language Models", "publication_ref": [ "b17", "b66", "b8", "b84", "b2", "b33", "b35", "b81", "b44", "b34", "b31", "b75", "b13" ], "table_ref": [], "text": "Analysing model behaviour is a key approach when investigating patterns of model repetition, for example, paradigms from psycholinguistics can be repurposed to this end (e.g., Futrell et al., 2019).\nDuring language comprehension, language models have been shown to be prone to structural priming effects, in a manner with parallels to findings in humans. In particular, recency of prime to target within the input context heavily influences the likelihood of the congruent structure (Sinclair et al., 2022). 
It is less clear, however, to what extent models are affected by priming and repetition during language production, or generation, and what the mechanisms are that drive their comprehension behaviour. One method for explaining model behaviour is to employ interpretability techniques such as attribution methods. Attribution methods (Covert et al., 2021) allow for a highlevel explanation of model behaviour that aligns strongly with how humans explain their decisionmaking, i.e., based on counterfactual examples (Yin and Neubig, 2022): how would the prediction have changed if a particular input feature was not present? Attribution methods have been used to examine linguistic patterns in model behaviour, and it has been argued they provide more comprehensive insights than attention heatmaps (Bastings and Filippova, 2020), because attention only determines feature importance within a particular attention head, and not for model predictions as a whole (Jain and Wallace, 2019). Linguistic phenomena investigated using attribution methods include co-reference, negation, and syntactic structure (Jumelet et al., 2019;Wu et al., 2021;Nayak and Timmapathini, 2021;Jumelet and Zuidema, 2023). Within conversational NLP, feature attribution methods have been used to identify salient features in task-oriented dialogue modelling (Huang et al., 2020), dialogue response generation (Tuan et al., 2021), and turn-taking prediction (Ekstedt and Skantze, 2020). However, relatively little work involves these techniques used to analyse human alignment behaviour in dialogue, in terms of patterns of local repetition, which we make our focus.\nIn this study, we investigate (a) to what extent repetition patterns in dialogue can be explained in terms of the re-use of lexical material in the local context; (b) whether LMs learn to generate repetitions with properties similar to those observed in human interaction and (c) how this relates to generation quality, as well as (d) whether LMs are influenced by the presence of repetitions in the local context when comprehending dialogue utterances. This section introduces the dialogue data and the language models used to study these four questions. 1" }, { "figure_ref": [], "heading": "Corpora", "publication_ref": [ "b69", "b0", "b23" ], "table_ref": [], "text": "We choose two high-quality, naturalistic dialogue corpora, transcribed from spoken human interactions, with different conversational dynamics and well attested local repetition patterns at a lexical and structural level (Reitter et al., 2006a;Sinclair and Fernández, 2021). Although larger scale conversational corpora exist, often these consist of more artificial interactions (e.g., very short or highly closed-domain).\nMap Task. The Map Task corpus (Anderson et al., 1991) comprises 128 dialogues between speakers participating in a navigational task. Speakers have either an instruction giver or instruction-follower role: they either describe a route, or attempt to follow and mark the described route, on their map.\nSwitchboard. The Switchboard corpus (Godfrey et al., 1992) contains 1,155 dialogues between participants making conversation over the telephone about one of a pre-specified range of common conversational topics. Speakers in this setting have equal status, with no pre-defined roles.\nExtracting sample contexts. We are interested in evaluating the extent to which repetition occurs at a local level, therefore we extract sample contexts of 10 utterances, using a sliding window approach. 
Of these, utterances 1-9 are the context, and utterance 10 is the target utterance which we investigate. Since we are interested in between-vs. within-speaker effects, we define utterances based on speech turns-i. " }, { "figure_ref": [], "heading": "Language Models", "publication_ref": [ "b88", "b56", "b86", "b64", "b46", "b47", "b50" ], "table_ref": [], "text": "We select three autoregressive neural language models for our analysis: DialoGPT (DGPT; Zhang et al., 2020), GPT2 (Radford et al., 2019), and OPT (Zhang et al., 2022). We select DGPT as a model specifically designed for dialogue (yet still trained on written language, which differs significantly from our transcribed spoken language); GPT2 as its estimates are shown to be predictive of comprehension behaviour, even more so than larger LM variants (Shain et al., 2022;Oh and Schuler, 2023); and OPT, which has demonstrated competitive performance across a range of benchmarks (Paperno et al., 2016;Park, 2023). We fine-tune for 20 epochs, using an early stopping technique to save the best performing model based on perplexity.2 " }, { "figure_ref": [], "heading": "Producing Repetitions", "publication_ref": [ "b60", "b67", "b69", "b16" ], "table_ref": [], "text": "We expect human repetition patterns to be highly local, given prior results showing priming effects in the same corpora (e.g., Reitter and Moore, 2007;Sinclair et al., 2018;Sinclair and Fernández, 2021). We also expect repetition patterns to be modulated by which dialogue partner is being repeated. In particular, we expect between-speaker repetition patterns to be the strongest given that developing shared routines can signal alignment and coordination of speakers' mental models or interpersonal synergy (Pickering andGarrod, 2005, 2004a;Fusaroli et al., 2014). We firstly analyse locality and between-vs. within-speaker repetition in human-produced utterances, then investigate whether the same patterns occur in model generations." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Measures of Repetition", "publication_ref": [], "table_ref": [], "text": "To differentiate between routines vs. shared language, we compute two main measures of lexical repetition, at the word level, and in terms of shared word sequences (constructions; see Section 4.1.2), with which we hope to capture between-speaker routines. We measure repetition between utterance pairs, at varying distances from one another within a given context sample. We define additional measures to capture established human dialogue behaviours.\nVocabulary Overlap. To compute vocabulary overlap, VO, we exclude punctuation, and calculate VO as the proportion of words w in the current turn t c that also appear in a previous turn t p :\nV O = |wt c ∩ wt p | |wt c | (1)\nConstruction Repetition. After extracting a shared inventory of constructions (Section 4.1.2) for a dialogue, we measure the proportion of repetition of shared constructions C as construction overlap CO as:\nCO = |Ct c ∩ Ct p | |wt c | (2)\nBetween vs. Within-Speaker Repetition. This binary measure describes whether the producer of utterance t c and t p is the same (within) or different (between).\nLocality. We measure locality as the distance in utterance index between t c and t p . We take repetition decay, a negative effect of distance d on the shared constructions between t c and t p , as evidence of a local repetition effect.\nSpecificity. 
We calculate how sample-specific the extracted constructions are, and for each t c , report average specificity of the repeated constructions. We measure specificity using pointwise mutual information (PMI), computed as follows:\nP M I(c, s) = log 2 P (c|s) P (c)(3)\nHigher PMI indicates a construction c is more strongly associated with, or specific to, the sample s it occurs within due to the frequency of occurrence in this context being higher relative to its general usage." }, { "figure_ref": [], "heading": "Construction Extraction Procedure", "publication_ref": [ "b69" ], "table_ref": [], "text": "To extract repeated constructions we make use of dialign, a framework for sequential pattern mining (Dubuisson Duplessis et al., 2017). 3 We then discard repeated expressions with fewer than two alphanumeric tokens (following Sinclair and Fernández, 2021). Repeated expressions consisting solely 3 https://github.com/GuillaumeDD/dialign of punctuation or of more than half filled pauses are also excluded. We further discard constructions which contain periods, commas and question marks, to avoid constructions which include sentence boundaries: these do not contain the lexical elements we are interested in. We define the resulting shared lexicon as constructions. " }, { "figure_ref": [], "heading": "Generating Dialogue Utterances", "publication_ref": [ "b5", "b37", "b21", "b48", "b87", "b3", "b55" ], "table_ref": [], "text": "For each sample in our dataset of extracted dialogue excerpts, we precede each of the 9 utterances in the context with its speaker label, and append a final speaker label, corresponding to the upcoming target speaker, to the end. We then generate the target utterance using ancestral sampling (Bishop, 2006;Koller and Friedman, 2009) to study an unbiased representation of the model's predictive distribution. We set the maximum generation length to 64 tokens, and take the presence of a newline to indicate the end of an utterance, discarding any further generated text beyond this. 5 The resulting text we refer to as the target. To ensure that we take into account that a given context could support multiple targets-production variability is known to be high in dialogue (see, e.g., Giulianelli et al., 2023)-and to ensure our results are robust, we generate 5 utterances per context sample.\nEvaluating generation quality. We measure the quality of a generated target utterance compared to the human reference in terms of their n-gram overlap (BLEU; Papineni et al., 2002) and semantic similarity (BERTScore; Zhang et al., 2019). We also evaluate generations using perplexity, as computed using independent models, both independently of (P P L ii ), and conditioned on the context (P P L id ); we choose GPT-2 for the same reasons highlighted in Section 3.2, and Pythia (pythia-1.4b) (Biderman et al., 2023) for its open-source, highly performant properties. We additionally make use of MAUVE (Pillutla et al., 2021) to capture higherlevel distributional differences between human-vs. model-produced text." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2", "fig_2", "fig_2" ], "heading": "Human vs. 
Model Repetitions", "publication_ref": [ "b67", "b53", "b69", "b54" ], "table_ref": [], "text": "To analyse local production behaviour, we evaluate the extent to which human and model-produced utterances' CO is sensitive to between-speaker repetition, locality, and context-specificity.\nThe speaker being repeated affects CO and VO in humans and models. Dialogue partners differ in terms of what they repeat of their own vs. their partner's language (Reitter et al., 2006a;Sinclair et al., 2018), thus we expect to find differences in our human data. We also expect that if speakers make use of local routines (Pickering and Garrod, 2005), then between-speaker CO will be relatively higher. We observe that humans do indeed repeat constructions shared with their dialogue partner more so than they do those not shared (CO: Map Task: t = 12.78, p < 0.05. Switchboard: t = 17.74, p < 0.05 ). We observe the inverse effect for VO, showing speakers repeat their own language relatively more so than they do their dialogue partner (VO. Map Task: t = -13.64, p < 0.05. Switchboard: t = -26.66, p < 0.05). While models exhibit global human-like CO and VO patterns to some degree, for example GPT2 tuned is no different to human CO for within-speaker in Switchboard (t = -0.18, p = 0.86), and between-speaker in Map Task (t = -1.86, p = 0.06), these effects are not consistent across models or corpora. Figure 1 illustrates these results, details of statistical differences in Appendix E.\nHumans produce repetitions locally. To evaluate the local effects of repetition, we employ linear mixed-effect models, including dialogue, sample and speaker identifiers as random effects. 6 We confirm that CO decays with the distance between a given utterance and those preceding it (β = -0.001, p < 0.05, 95% CI = [-0.001 : -0.001]); this is not the case for VO (Figure 2a). Decay effects for CO are stronger for between-speaker repetition in both corpora. That is, speakers are more likely to repeat their partner's language locally. Interestingly, in Switchboard, decay effect are not observable when looking at the dialogue as a whole (Sinclair and Fernández, 2021). We hypothesise that other, less locally repeated constructions may drive down this effect when analysing the dialogues as a whole, or that some constructions may have multiple short bursts of local repetition over the course of a dialogue (Pierrehumbert, 2012).\nModels learn some patterns of local repetition.\nWe find that fine-tuned models learn turn-sensitive patterns of local repetition to some extent. Figure 2b demonstrates that models can learn similar patterns of local repetition to those observed in human dialogue. The most dramatic improvement in similarity to human behaviour is for DGPT. We find that in Switchboard, both models and humans show significant local repetition effects of CO independent of VO effects. Investigating CO in more detail, while human repetitions are sensitive to the length of the construction (longer constructions predict CO: β = 0.035, p < 0.05, 95% CI = [0.025 : 0.045]), this is not the case for models, for which the frequency of the repetition in the sample plays an important role in predicting CO (e.g. GPT2 repetition frequency: (β = 0.01, p < 0.05, 95% CI = [0.007 : 0.013])). For Map Task, we find that humans repeat highly specific repetitions locally (CO β = 0.006, p < 0.05, 95% CI = [0.003 : 0.009]), however this is only true for GPT2 (β = 0.001, p < 0.05, 95% CI = [0.0 : 0.002]). Full model results in Appendix H.1. 
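For concreteness, the kind of linear mixed-effects fit behind the coefficients reported above can be sketched with `statsmodels`. The predictor set and the random-effects structure are simplified to a single dialogue-level grouping factor, and the data frame is synthetic, so this illustrates the modelling approach rather than the exact specification (which also includes sample and speaker identifiers as random effects).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the utterance-pair table: one row per (t_c, t_p) pair with its
# construction overlap (CO), utterance distance, speaker condition, PMI and dialogue id.
rng = np.random.default_rng(0)
n = 500
pairs = pd.DataFrame({
    "co": rng.beta(1, 20, n),
    "distance": rng.integers(1, 10, n),
    "between_speaker": rng.integers(0, 2, n),
    "pmi": rng.normal(3.0, 1.0, n),
    "dialogue": rng.integers(0, 25, n).astype(str),
})

# CO predicted from distance, speaker condition and specificity, with a random
# intercept per dialogue.
model = smf.mixedlm("co ~ distance + between_speaker + pmi", pairs, groups=pairs["dialogue"])
print(model.fit().summary())
```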
Models don't consistently produce speakerspecific repetitions. We find that while all models display significant CO speaker effects similar to humans, when taking into account other contextual factors, their behaviour with respect to specificity varies. While Figure 2c demonstrates that the PMI of constructions decays with distance, human speakers show no significant independent effect of PMI when predicting CO in either corpus. GPT2 exhibits the most similar behaviour to the human data in terms of the effect of distance and speaker on PMI in Map Task, however learns a significant negative relationship with PMI for Switchboard, not present in the human data. Full model results in Appendix H.1 " }, { "figure_ref": [], "heading": "Repetition vs. Quality", "publication_ref": [ "b85", "b40" ], "table_ref": [ "tab_3" ], "text": "Finally, we investigate whether automatic NLG metrics capture human-likeness of repetition. This is an important aspect of naturalness in dialogue which the metrics are not explicitly designed for.\nTable 3 shows the relative generation quality of our base and fine-tuned models. Extended results can be found in Appendix B. All models demonstrate improvement with fine-tuning, although GPT2 base as an evaluator detects less difference than Pythia. This is expected, given their training data contains either little dialogue data, or a comparatively very different style of dialogue.\nWe find that the closer the levels of CO and VO are to human-produced language,7 the higher BertF1, BLEU, and the lower the evaluation model perplexity both dependent and independent of the context. This correlation is strongest for GPT2 with ρ = -0.395, p < 0.05 for VO and ρ = -0.258, p < 0.05 for CO. This is perhaps to be expected for reference-based metrics, so we additionally inspect whether human-like CO levels correlate with MAUVE, a corpus-level metric, finding that more similar CO levels between human and model inversely correlate with MAUVE quality (above ρ = 0.7, p < 0.05 across models).8 This tells us either that better corpus-level metrics need to be defined or, perhaps, that corpus-level evaluation is not really appropriate for dialogue where quality is determined by local and highly contextually dependent cues. This is in keeping with challenges in evaluating dialogue (Zhang et al., 2021;Liu et al., 2016), and suggests standard NLG evaluation approaches should be complemented by dialogue-specific metrics like the ones we use in our analysis." }, { "figure_ref": [], "heading": "Interpreting Model Comprehension Behaviour", "publication_ref": [ "b80", "b13", "b24", "b66" ], "table_ref": [], "text": "In the previous section, we investigated patterns of repetition in models' production behaviour. Now we turn our attention to their comprehension behaviour, making use of interpretability techniques to analyse what properties of the utterances in the context are more salient in determining expectations for a given target utterance. We expect models to learn patterns of turn-taking from the structure and contents of the context utterances (Wolf et al., 2019;Ekstedt and Skantze, 2020;Gu et al., 2020). We also expect that higher salience will be assigned to repetitions with local antecedents, in line with recency effects observed in model priming behaviour (Sinclair et al., 2022)." 
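As a concrete illustration of the generation setup described earlier (speaker-labelled context, ancestral sampling, a 64-token cap, truncation at the first newline, five samples per context), here is a minimal sketch using the Hugging Face transformers API; the checkpoint name and context string are placeholders, not the exact fine-tuned models or data.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")              # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# Shortened context for illustration; the paper uses nine labelled utterances
# followed by the upcoming target speaker's label.
context = "A: how are you?\nB: great, it's sunny\nA: about time\nB:"
inputs = tokenizer(context, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        do_sample=True,           # ancestral sampling: no top-k / top-p truncation
        top_k=0,
        top_p=1.0,
        max_new_tokens=64,        # maximum generation length
        num_return_sequences=5,   # five generations per context sample
        pad_token_id=tokenizer.eos_token_id,
    )

for seq in outputs:
    continuation = tokenizer.decode(seq[inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    print(continuation.split("\n")[0].strip())  # keep only the first generated utterance
```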
}, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Feature Attribution", "publication_ref": [ "b43", "b1", "b72", "b65" ], "table_ref": [], "text": "We obtain attributions over the dialogue context for a given target utterance, extracting scores for each token over the entire preceding context. We are interested in examining behavioural patterns at the utterance level, in order to investigate the influence of their distance from the target, and design a measure to capture the relative boosting effects of the context for a given target utterance. This approach allows us to inspect attribution patterns across the context with respect to properties of the target utterance as a whole, allowing us to conduct similar, complementary analyses to the previous section. A wide range of feature attribution methods exist (Lundberg and Lee, 2017; Murdoch et al., 2019). It remains an open question, however, which of these methods are most faithful with respect to the true model behaviour (Bastings et al., 2022). Some methods resolve this through defining theoretical properties that need to be satisfied by the method (Sundararajan et al., 2017). We focus on one such method, DeepLift (Shrikumar et al., 2017), which, besides its attractive theoretical properties, is also considerably more compute-friendly than alternative attribution methods." }, { "figure_ref": [], "heading": "Attribution Aggregation Procedure", "publication_ref": [], "table_ref": [], "text": "We design a measure that allows us to capture the relative effects that individual utterances in the local context have on models' utterance comprehension. Our measure aggregates over per-token attributions for a full utterance, returning relative prediction boosting effects of tokens within context utterances, speaker label tokens, and the target itself.\nA given sample will consist of speaker label tokens, indicative of the change in speaker, e.g. 'A:' and 'B:', the 9 context utterances, and the target utterance text. This can look like the following, with the speaker label tokens in orange, context utterances in dark blue, and the final target utterance of interest in light blue: A: how are you? B: great, it's sunny A: about time B: agreed. A: I love sun B: me too A: makes me think of the beach B: the beach is great A: so great B: great, we should go to the beach! Firstly, we create the feature attribution scores of each token in the input w_i with respect to the prediction of each token in the target utterance w_t:\n\Phi \in \mathbb{R}^{|w_i| \times |w_t| \times n_{emb}} \quad (4)\nSince feature attribution methods provide an importance score on the embedding level, we sum these scores along the embedding dimension n_emb. Next, we sum the \Phi matrix along the dimension of the tokens in the target utterance (w_t), creating a single score for each input token with respect to the target as a whole. Then, we create a single importance score for each individual input utterance or turn separator, denoted as a set T_i that contains the indices of the i-th utterance:\n\Phi' \in \mathbb{R}^{|T|}, \quad \Phi'_i = \sum_{j \in T_i} \sum_{k} \sum_{l} \Phi_{j,k,l} \quad (5)\nNote that the target utterance itself also yields importance scores of earlier tokens in the target with respect to later predictions. The scores of \Phi' are still unbounded, and can vary greatly between samples and models. 
We apply two further operations to allow sample and model comparison: we normalise the scores by the maximum absolute \Phi' score, which maps the scores between -1 and 1, and we then centre the scores around the mean. This expresses the contribution of each element in the input as its relative boosting effect with respect to the other elements in the input:\n\Phi'' = \frac{\Phi'}{\max(|\Phi'|)} \quad (6)\n\phi = \Phi'' - \mathrm{mean}(\Phi'') \quad (7)" }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [ "b25", "b17", "b26" ], "table_ref": [], "text": "We now investigate model attribution patterns over the dialogue context. Our goal is to find out whether a model's comprehension behaviour exhibits robust patterns explainable through known psycholinguistic effects thought to influence human language producers, in particular local, between-speaker repetition patterns. While we are currently unable to understand precisely where humans place salience when comprehending, a large body of psycholinguistic research points to patterns of priming and alignment behaviour detectable from brain signals (Hasson et al., 2012; Futrell et al., 2019), and uses our understanding of the brain to inform analysis of neural language models (Hasson et al., 2020). We will contrast this analysis of model comprehension behaviour to the previous study of their production behaviour. We expect tuned models, the more human-like producers, to comprehend human language in a manner better predicted by factors thought to influence human processes, such as locality and priming effects, than base models." }, { "figure_ref": [], "heading": "Attributions Over Human Utterances", "publication_ref": [ "b66" ], "table_ref": [], "text": "Humans and models display priming effects, which can be explained via accounts of residual activation, and they are sensitive to turn-taking (Ten Bosch et al., 2005; Tooley and Traxler, 2010; Ekstedt and Skantze, 2020; Sinclair et al., 2022). We thus expect attribution patterns to be sensitive to utterance position and speaker shifts within the context. Figure 3 shows how results change with fine-tuning.\nUtterance comprehension is influenced by context locality in open domain dialogue. When comprehending utterances from a given speaker, models fine-tuned on Switchboard learn to attribute more salience to utterances in the nearby context, more strongly so when these are produced by the other speaker. This effect is strongest for GPT2 (β = -0.009, p < 0.05, 95% CI = [-0.011 : -0.007]).\nFor Map Task, we do not see such a clear trend, with different behaviours between models. Even though evidence for sensitivity to utterance position and speaker shifts in comprehension is only found in one of the two corpora, this is an interesting result when juxtaposed to our analysis of production behaviour. It seems to indicate that while models learn to understand differences in speakers and in distance within the local context of open-domain dialogue, this does not always translate to human-likeness of production behaviour.\nConstruction repetition in the local context predicts attribution patterns. High lexical repetition between context and target has been shown to boost priming effects in models (Sinclair et al., 2022); however, less is known about how this translates to attribution patterns. In line with priming results, we expect that attribution patterns over context utterances will be predicted by both construction and vocabulary overlap. We see mixed results across models, finding that only for Switchboard does GPT2 display a significant positive effect of CO (β = 0.277, p < 0.05, 95% CI = [0.239 : 0.315]) on attribution strength, independent of VO and distance effects. Surprisingly, however, the effect of VO on attribution strength is negative (β = -0.308, p < 0.05, 95% CI = [-0.346 : -0.270]). 
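To make the aggregation procedure in Equations 4-7 concrete, the sketch below applies it to a precomputed attribution tensor; how that tensor is obtained (e.g., via DeepLift) is left out, and the toy shapes and utterance index sets are illustrative assumptions.

```python
import numpy as np

def aggregate_attributions(phi, utterance_spans):
    """phi: attribution tensor of shape (|w_i|, |w_t|, n_emb), as in Eq. 4.
    utterance_spans: one index set T_i per context unit (utterance or speaker label).
    Returns the mean-centred relative boosting scores of Eqs. 5-7."""
    # Eq. 5: sum over embedding dims and target tokens, then pool per unit.
    per_input_token = phi.sum(axis=2).sum(axis=1)                # shape (|w_i|,)
    phi_prime = np.array([per_input_token[list(span)].sum() for span in utterance_spans])

    # Eq. 6: normalise by the maximum absolute score, mapping values into [-1, 1].
    phi_double_prime = phi_prime / np.abs(phi_prime).max()

    # Eq. 7: centre around the mean to express relative boosting effects.
    return phi_double_prime - phi_double_prime.mean()

# Toy example: 12 input tokens, 4 target tokens, 8 embedding dims, 3 context units.
rng = np.random.default_rng(0)
phi = rng.normal(size=(12, 4, 8))
spans = [range(0, 4), range(4, 8), range(8, 12)]
print(aggregate_attributions(phi, spans))
```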
More remains to be done to precisely understand the relationship between the repetitions themselves and the local attribution patterns we observe, as well as to identify other factors driving this behaviour." }, { "figure_ref": [ "fig_5" ], "heading": "Attribution Over Special Tokens", "publication_ref": [ "b80", "b24", "b13", "b76" ], "table_ref": [], "text": "While we are most interested in models' comprehension behaviour with respect to the utterance text in the context, we also investigate their behaviour over speaker labels. The effect of structural tokens on the performance and behaviour of LMs is an ongoing area of research (Wolf et al., 2019;Gu et al., 2020;Ekstedt and Skantze, 2020;Wallbridge et al., 2023). Speaker labels like 'A:' and 'B:' provide models with important information about the turn-taking dynamics of dialogues.\nFigure 4 shows that models learn, through finetuning, to attribute salience to speaker labels in a more uniform manner (note how the curves of tuned models are flatter). We find significant differences between base and tuned models in both corpora, with the highest boost in uniformity for DGPT (Switchboard: β = 0.002, p < 0.05, 95% CI = [0.002 : 0.002], Map Task: β = 0.005, p < 0.05, 95% CI = [0.005 : 0.005]). 11 Speculatively, this could be taken as an indication that the models have learned to more consistently use these as structural markers of turn-taking. The discrepancy between the uniform attribution patterns over speaker labels and the decaying salience assigned to utterance text is an interesting finding that deserves more attention in future research." }, { "figure_ref": [], "heading": "Discussion & Conclusion", "publication_ref": [ "b6", "b7", "b53", "b20", "b69", "b54", "b10", "b21" ], "table_ref": [], "text": "Repetition behaviour in dialogue, whether driven by local priming (Bock, 1986), alignment effects (Pickering and Garrod, 2004b), conceptual pacts (Brennan and Clark, 1996), or routinisation (Pickering and Garrod, 2005;Garrod and Pickering, 2007), is well attested in humans. In this study, we investigate the extent to which language models are sensitive to, and display the same local, context-specific, and shared patterns of construction repetition observed in human dialogue. We conduct an in-depth analysis using two corpora of English task-oriented and open-domain dialogue, and three autoregressive neural language models. Analysing human interactions, we find that within highly local contexts (we consider dialogue samples consisting of 10 utterances), repetition effects decay with distance from antecedents, particularly when repetitions are between dialogue partners, rather than of a speaker's own language. This contrasts with and complements previous work finding no evidence of locality effects within Switchboard, the same open domain corpus, when considering dialogues as a whole rather than in short excerpts (Sinclair and Fernández, 2021), suggesting that some repeated constructions may occur in multiple short bursts (Pierrehumbert, 2012) over the course of a dialogue-a phenomenon that is not easily captured by more 'global' analyses.\n11 Full breakdown of results in Appendix H.2.\nWe then evaluate model behaviour under two lenses: production behaviour, analysed in terms of the repetition of shared constructions (i.e., word sequences re-used by both dialogue participants) in model generations, and comprehension behaviour, measured by models' attribution of salience to contextual units when processing human-produced dialogue. 
We find that models learn, via fine-tuning, to generate more human-like patterns of construction re-use, although the degree to which repetitions are local, context-specific, and shared varies by model. We also find that while reference-based generation quality metrics correlate with the human-likeness of the repetitions produced, corpus-level metrics like MAUVE fail to capture this important aspect of dialogue quality. This highlights the need for more refined corpus-level approaches to statistical evaluation which take into account local and highly contextually dependent phenomena, or at least for their integration with instance-level analyses (Deng et al., 2022; Giulianelli et al., 2023). Making use of feature attribution techniques, which provide interpretations of models' comprehension behaviour, we then explore the extent to which models are sensitive to properties of the context thought to influence human propensity to produce aligned (i.e., locally repeated and context-specific) language. We observe that when comprehending utterances, tuned models assign salience to speaker labels in a more uniform manner, and that in open-domain dialogue, models learn to assign salience over the context in a more local manner.\nWe will follow up this study with experiments where our proposed attribution aggregation procedure is performed specifically over construction tokens in the target utterance. This may allow for more fine-grained interpretation of the relationship between repetitions and the observed local effects, as well as allow us to investigate further psycholinguistic factors which may drive the tight coupling of local context and next utterance generation. We hope our experimental setup will inspire future work that attempts to create stronger connections between language model behaviour and findings from psycholinguistics. In particular, we look forward to seeing our attribution-based methodology being applied to other dialogue-specific phenomena, and the local, dyad-specific repetition measures we investigate applied to the development and evaluation of more adaptive and context-sensitive dialogue response generation systems." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "A limitation of our work is that it is conducted only on spoken English corpora, covering two types of dialogue context (open-domain conversation on a range of popular topics, and navigational task-oriented dialogue), and only with native speakers of English. Repetition patterns of dialogues in different conversational contexts, with language users of different cultures and in different languages, may vary, and the patterns that models learn for these may also vary." }, { "figure_ref": [], "heading": "A Contributions", "publication_ref": [], "table_ref": [], "text": "Conceptualisation: AS. Methodology: AS, JJ. Software: AM. Experiments: AM, AS. Analysis: AM, AS, MG, JJ. Writing - Original Draft: AM, AS. Writing - Review & Editing: AS, JJ, MG. Supervision & Project Administration: AS. Order alphabetical." }, { "figure_ref": [], "heading": "B Language Model Fine-Tuning", "publication_ref": [ "b56", "b86", "b88" ], "table_ref": [ "tab_4" ], "text": "We fine-tune GPT-2 (Radford et al., 2019), OPT (Zhang et al., 2022), and DialoGPT (Zhang et al., 2020) for 20 epochs, using an early stopping technique to save the best performing model (based on its perplexity). 
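A minimal sketch of this kind of fine-tuning setup (20 epochs, early stopping keyed to validation loss, i.e., perplexity) using the Hugging Face Trainer; the toy dataset, patience value and other hyperparameters are assumptions for illustration rather than the exact training configuration.

```python
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, EarlyStoppingCallback,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # repeated analogously for OPT and DialoGPT
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy dialogue data; in practice these would be the Switchboard / Map Task splits.
raw = Dataset.from_dict({"text": ["A: hello\nB: hi there\n", "A: where to?\nB: past the lake\n"]})
tokenised = raw.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                    batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="ft-dialogue",
    num_train_epochs=20,
    evaluation_strategy="epoch",       # named eval_strategy in newer transformers releases
    save_strategy="epoch",
    load_best_model_at_end=True,       # keep the checkpoint with the lowest eval loss
    metric_for_best_model="eval_loss", # lower loss corresponds to lower perplexity
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenised,
    eval_dataset=tokenised,            # placeholder; use a held-out split in practice
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],  # assumed patience
)
trainer.train()
```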
Table 4 shows the perplexity of all models, pre-trained and fine-tuned, on the evaluation set.\nModels significantly adapt to the domain in training, given the low fine-tuned perplexities." }, { "figure_ref": [], "heading": "C Language Model Sizes", "publication_ref": [], "table_ref": [], "text": "The considered language models have the following number of parameters. GPT2: 124M, OPT: 125M, DGPT: 117M, PYTHIA: 1.4B. " }, { "figure_ref": [], "heading": "D Filled Pauses", "publication_ref": [], "table_ref": [], "text": "We define filled pauses using the part-of-speech tags in Map Task and Switchboard. Map Task: uh-huh, er, um, mm-mm, eh, uh, mm, uh-uh, nah, mm-hmm, erm, ehm, huh, hmm, mmhmm. Switchboard: hm, huh, uh, um-hum, huh, huh-uh, uh, uh-huh, um. -19.15, p < 0.05). After fine-tuned on Map Task, models learn to generate less dialogue-specific constructions (t: 19.83, 27.43, 22.85, p < 0.05). Models learn to produce more distant shared constructions after trained on both open-ended and task-oriented dialogue data (SW: t: -4.34, -10.2, -20.6, MT: t: -10.76, -0.19 (p ≥ 0.05, exception), -8.53, p < 0.05). DGPT exhibits higher levels of construction overlap (CO) after fine-tuned on both Switchboard and Map Task (both between and within speakers), closely approximating human patterns (SW: t: -23.09, -11.45, MT: t: -29.75, -14.75, p < 0.05). GPT2 and OPT generally learn to produce lower CO values, but they already exhibit highly human-like construction overlap scores in their pre-trained states (SW: t: 6.83, 2.68, 16.52, 3.18, p < 0.05, MT: t: -1.62, -1.4, 0.75, 1.05, p ≥ 0.05). " }, { "figure_ref": [], "heading": "E Construction Repetitions", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "E.1 Construction Examples", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_7" ], "heading": "F Attributions To Target", "publication_ref": [ "b80", "b24", "b76", "b13" ], "table_ref": [], "text": "We additionally analyse Target vs. Context vs. Speaker Label salience patterns. Regarding the speaker labels in the context (i.e., sequences containing non-utterance tokens: A:, <eos>), the effect of special or structural tokens on the performance and behaviour of LLMs is an ongoing area of research (Wolf et al., 2019;Gu et al., 2020;Wallbridge et al., 2023;Ekstedt and Skantze, 2020), we expect model attribution behaviour to be more similar between tuned models.\nFrom Figure 5, we observe far higher variance in attribution over the target utterance than over the utterances in the context, with a similar relative difference between the speaker label in the target vs. those in the context. We observe very few consistent patterns across models in terms of relative boosting effects, except for speaker label Ctx, which becomes more relatively uniform (and closer to 0) with tuning. We observe that GPT2 learns to attribute relatively higher salience over the text in the context utterances than to that in the target. In other words, they learn to place relatively more importance on the target utterance itself (Switchboard: t = -8.01, p < 0.05; Map Task: t = -14.42, p < 0.05)." }, { "figure_ref": [], "heading": "G Generation Quality", "publication_ref": [], "table_ref": [ "tab_12" ], "text": "To perform a comparable correlation analysis of MAUVE scores and possibly influencing factors, we treat each model generation (we generate five responses to each sample) as a separate corpus. 
This allows us to compute multiple MAUVE scores for each model (instead of just one score that is based on all the model generations). For best practices, MAUVE requires at least a few thousand examples to run (the original paper uses 5000). Since we have 2, 395 samples in Map Task and 8, 705 samples in Switchboard, we select the number of samples used for MAUVE score computation to be 3, 000. We make use of all the Map Task samples for computation, and randomly sample model generations when we have more than 3, 000 examples available. We obtain five MAUVE scores for each model (base and fine-tuned), resulting in 30 scores for each corpus.\nTable 9 shows a full breakdown of the most consistent results across models. Since we are interested in general properties which apply to conversational corpora, we combine both Map Taskand Switchboardin this analysis. We find a strong ρ correlation across models, weakest for DGPT." }, { "figure_ref": [], "heading": "H Linear Mixed Effects Regression Results", "publication_ref": [], "table_ref": [], "text": "To evaluate local effects, specifically the relationship between utterances in the context and the target utterance, we employ linear mixed-effect models, including dialogue and sample identifiers as random effects." }, { "figure_ref": [], "heading": "H.1 Production: Repetition Effects", "publication_ref": [], "table_ref": [ "tab_0", "tab_0" ], "text": "To measure repetition effects we fit separate models for construction overlap CO, and vocabulary overlap VO, making these the dependent variables. We include dialogue and sample as random effects to allow for group-level variability in the linear model. We firstly investigate the effects of speaker, and distance. To measure repetition in the human data, we include speaker, and distance given speaker as fixed effects. To measure repetition in models, we follow the same process as for the human data, but adding model type (base or tuned) and their interaction with distance as additional fixed effects.\nResults for VO can be found in Table 10, and CO in Table 11.\nWe then conduct a second analysis, this time to investigate the impact of different properties of constructions on the CO effects. We include speaker, distance, construction length, specificity (PMI) and frequency as independent fixed effects. Results can " }, { "figure_ref": [], "heading": "H.2 Comprehension: Attribution Effects", "publication_ref": [], "table_ref": [], "text": "To measure Attribution strengths over the context utterances during model comprehension of humanproduced target utterances, we made attribution the dependent variable." }, { "figure_ref": [], "heading": "H.3 Attribution Over Human Utterances", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "To investigate the effect of local context repetition on model attribution strengths to context utterance text during target utterance comprehension, we include speaker, distance, construction overlap, vocabulary overlap, average construction PMI, and construction frequency as fixed effects. Results can be found in Table 13." }, { "figure_ref": [], "heading": "H.4 Attribution Over Special Tokens", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "To investigate the effect of distance on model attribution to speaker labels within the context during target utterance comprehension, we include distance, model type (base or tuned) and their interaction as fixed effects. Results can be found in Table 14." 
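As an illustration of the linear mixed-effects setup described in this appendix (construction overlap as the dependent variable, speaker and distance as fixed effects, dialogue and sample identifiers as random effects), here is a minimal statsmodels sketch; the column names, toy data, and the choice to model the sample identifier as a variance component are assumptions about one reasonable way to specify such a model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed long format: one row per (target utterance, context utterance) pair.
df = pd.DataFrame({
    "CO":       [0.02, 0.00, 0.01, 0.05, 0.00, 0.03, 0.04, 0.01, 0.00, 0.02],
    "distance": [1, 2, 3, 1, 2, 3, 1, 2, 3, 4],
    "speaker":  ["same", "diff", "same", "diff", "same", "diff", "same", "diff", "same", "diff"],
    "dialogue": ["d1", "d1", "d1", "d1", "d2", "d2", "d2", "d3", "d3", "d3"],
    "sample":   ["s1", "s1", "s2", "s2", "s3", "s3", "s4", "s5", "s5", "s6"],
})

# Fixed effects: speaker and distance-given-speaker; random effects: dialogue as the
# grouping factor, with the sample identifier as an additional variance component.
model = smf.mixedlm(
    "CO ~ C(speaker) + distance:C(speaker)",
    data=df,
    groups=df["dialogue"],
    vc_formula={"sample": "0 + C(sample)"},
)
print(model.fit().summary())
```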
}, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to thank the anonymous reviewers for their thoughtful and useful reviews and comments. We also wish to thank Ehud Reiter for his useful comments on this work at an early stage. MG is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 819455)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "[Spilled-over rows of mixed-effects regression coefficients for the VO and CO models; full results are reported in Tables 10-14, Appendix H.]" } ]
Language models are often used as the backbone of modern dialogue systems. These models are pre-trained on large amounts of written fluent language. Repetition is typically penalised when evaluating language model generations; however, it is a key component of dialogue. Humans use local and partner-specific repetitions; these are preferred by human users and lead to more successful communication in dialogue. In this study, we evaluate (a) whether language models produce human-like levels of repetition in dialogue, and (b) what processing mechanisms related to lexical re-use they employ during comprehension. We believe that such joint analysis of model production and comprehension behaviour can inform the development of cognitively inspired dialogue generation systems.
Attribution and Alignment: Effects of Local Context Repetition on Utterance Production and Comprehension in Dialogue
[ { "figure_caption": "Figure 1 :1Figure 1: Human and model repetition properties. B indicates base models, T tuned models.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "(a) Human CO, VO & PMI (b) Human vs. Model CO & VO.(c) Specificity (PMI ) of repeated constructions.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Repetition effects for construction overlap CO and vocabulary overlap VO. Patterns of human vs. model repetition across contexts.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "and they are sensitive to turn-taking(Ten Bosch et al., 2005;Tooley and Traxler, 2010;Ekstedt and Skantze, 2020;Sinclair et al., 2022). We thus expect attribution patterns to be sensitive to utterance position and speaker shifts within the context. Figure 3 shows how results change with fine-tuning.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Relative attribution properties to human utterances over the dialogue context.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Relative attribution importance of speaker labels over the dialogue context.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Selected local dialogue sample excerptsA: but . that is a very good point. i am from west virginia so i understand what you are saying there B: uh-huh. A: and it's it's a very good point. B: (OPT) uh-huh. yeah, uh, where i was, uh, i went to a church in west virginia and uh, they always wore suits (...) B: and there's a lot of graft, like people trying to tell them, oh, giving them information that was free to them anyway if they just knew how to get it. A: uh, yeah, exactly. B: yeah. A: (OPT) yeah, there's a lot of corruption, B: right, so that's that's right, so i'm coming back back down the paper again A: ah, yeah back down the paper B: uh-huh aye turn right A: (DGPT) and then you go through the paper and past the chapel, to the right of the page A: okay right, you went down past burnt forest B: i went underneath burnt forest A: well, you weren't meant to B: well you said draw round the cottage A: okay right, you're meant to come down from the start B: (OPT) okay right A: oh, yeah, yeah, yeah. B: in the summer or like in the easter time, like around now? A: (HUMAN) no, usually in the summer time.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Attribution patterns for Speaker labels and Utterances in the dialouge Context (Ctx) during model comprehension of human Target (Tgt) utterances. The y-axis measures the relative boosting effect.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Corpus statistics.", "figure_data": "e. each time a speaker changes,we consider this a new utterance. Details of thecorpora and extracted samples are in Table 1.1 https://github.com/the-context-lab/attribalign", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Table 2 provides details of their properties. 4", "figure_data": "SwitchboardMapTaskM±StdMed. MaxM±StdMed. 
MaxConstructionLength2.1 ± 0.42.05 2.4 ± 0.82.0 11Frequency 3.0 ± 1.23.06 3.3 ± 1.13.06Rep. Dist.3.6 ± 2.73.08 3.3 ± 2.73.08Incidence1.6 ± 1.11.0 10 2.0 ± 1.12.08PMI6.8 ± 3.46.6 11.5 7.2 ± 2.27.6 9.6UtteranceCO0.004 ± 0.035 0.0 1.0 0.024 ± 0.13 0.0 2.8VO0.13 ± 0.23 0.008 1.0 0.13 ± 0.240.0 1.0", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Generation quality results. SW: Switch-Board. MT: MapTask. P P L m : Perplexity of the models under scrutiny on the analysis set. Perplexity of GPT2 (P P Lg ix ) and PYTHIA (P P Lp ix ) on modelproduced utterances (ii independent of, and id dependent on context). B: base models, T: fine-tuned models. Mve: MAUVE score. Bold indicates the better value between base and fine-tuned variants.", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Post-training metrics of models. SW: Switchboard. MT: Map Task. Precision (Prec), recall (Rec) and F1 are averages over multiple samples and part of BERTScore. LR: length ratio (BLEU). BP: brevity penalty (BLEU). PPL: Perplexity. B: base models. T: tuned models. Mve: MAUVE score. L: mean target utterance length (in words). Bold indicates best values across models per corpora per metric.", "figure_data": "PPL ↓ Prec RecF1 BLEU BP ↓ LR ↓ MveL±StdSWGPT2 B15.110 0.722 0.704 0.710 0.009 0.744 0.772 0.035 11.9 ± 14.7T12.020 0.745 0.720 0.730 0.010 0.496 0.588 0.049 8.8 ± 10.5OPT B37.540 0.703 0.702 0.700 0.010 0.859 0.868 0.052 13.0 ± 13.8T15.130 0.737 0.733 0.733 0.014 0.824 0.838 0.069 12.6 ± 12.9DGPT B 6935.000 0.667 0.648 0.656 0.000 0.148 0.343 0.006 3.3 ± 3.5T10.910 0.737 0.728 0.730 0.016 0.955 0.956 0.049 14.3 ± 15.8MTGPT2 B16.170 0.681 0.680 0.679 0.006 0.827 0.841 0.101 7.1 ± 6.2T7.930 0.705 0.702 0.702 0.014 0.849 0.859 0.245 7.4 ± 6.1OPT B72.100 0.686 0.681 0.682 0.006 0.701 0.738 0.103 6.1 ± 6.4T9.700 0.723 0.705 0.712 0.016 0.631 0.685 0.339 5.7 ± 5.2DGPT B 13014.000 0.668 0.659 0.662 0.002 0.391 0.516 0.041 3.7 ± 2.8T8.050 0.701 0.700 0.699 0.016 0.990 0.990 0.176 8.5 ± 7.9", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "", "figure_data": "contains two dialogue excerpts with re-sponses generated by a tuned OPT model. Phraseshighlighted bold refer to constructions generatedby the model.Table 6 lists the most frequent constructionsgenerated by fine-tuned models, grouped by lo-cality. Local and global constructions are definedas having a repetition distance of ≤ 4 and > 4,respectively. The table contains the top three mostfrequent produced constructions per model, perdataset, per locality.E.2 Repetition PropertiesTables 7 and 8 contain detailed repetition statis-tics with statistical significance test results. Inboth corpora, DGPT learns to best approximate hu-man target lengths after fine-tuning (TH columnsof all models: -15, -92.8, and -38.59 (t) forDGPT, GPT2, and OPT, respectively. p < 0.05 forall). It generates significantly longer responses(t = -412.64, p < 0.05). Models robustly gener-ate more dialogue-specific shared constructions af-ter fine-tuned on Switchboard (t: -109.41, 57.44,", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Example local repetitions produced by tuned models.", "figure_data": "", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Example constructions from tuned models. MT: Map Task, SW: Switchboard. 
Local: repetition distance ≤ 4; global: repetition distance > 4.", "figure_data": "HDGPTGPT2OPTBTBHTHBTBTBHTHBTBTBHTHBTSWtarget len. 15.369 3.251 14.271 -174.840 -15.000 -412.640 11.925 8.802 -47.420 -92.800 108.160 13.026 12.599 -32.460 -38.590 14.090constr. len. 2.176 2.117 2.185 -30.6605.200 -55.900 2.196 2.186 11.0705.7509.400 2.239 2.215 33.810 21.410 19.790PMI8.520 8.053 8.821 -42.450 25.740 -109.410 8.424 8.907 -8.020 33.190 -57.440 9.147 9.303 53.330 67.020 -19.150freq.2.689 2.607 2.662 -21.530 -7.460 -22.690 2.778 2.672 24.660 -4.600 49.790 2.677 2.648 -3.230 -11.610 14.530rep. dist.3.525 3.363 3.891-1.2205.840-4.340 3.586 3.9900.9807.040 -10.200 3.104 3.774 -6.8703.950 -20.600CObetween0.006 0.002 0.006 -16.910 -1.270 -23.090 0.008 0.0056.830 -2.520 16.070 0.011 0.007 16.5204.340 23.460within0.001 0.000 0.001-9.860 -2.060 -11.450 0.002 0.0012.680 -0.1804.600 0.002 0.0013.180 -0.4006.340VObetween0.116 0.107 0.122-6.3505.340 -15.770 0.132 0.125 12.7007.9208.530 0.137 0.126 18.6208.920 17.100within0.161 0.106 0.149 -34.490 -7.960 -38.130 0.172 0.1706.7205.9801.470 0.146 0.159 -10.800 -1.190 -16.190", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Switchboard", "figure_data": "", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "statistics with statistical significance tests. Red values indicate statistical insignificance (p ≥ .05). All values not highlighted red are statistically significant. The human (H), base model (B), and tuned model (T) columns contain averages. The base model-human (BH), tuned model-human (TH), and base model-tuned model (BT) comparison columns contain computed t-statistics. Rep. dist.: repetition distance. Target len.: target utterance length (in words). Constr. len.: construction length (in words). Between/within: between-and within-speaker. Freq.: frequency.", "figure_data": "520", "figure_id": "tab_11", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "MAUVE ρ correlation results. Metrics are the absolute value of the difference between model and human levels of CO and repetition, thus a positive correlation indicates an inverse correlation of the two metrics of human-likeness be found in Table12.", "figure_data": "", "figure_id": "tab_12", "figure_label": "9", "figure_type": "table" } ]
Aron Molnar; Jaap Jumelet; Mario Giulianelli; Arabella Sinclair
[ { "authors": "Miles Anne H Anderson; Ellen Gurman Bader; Elizabeth Bard; Gwyneth Boyle; Simon Doherty; Stephen Garrod; Jacqueline Isard; Jan Kowtko; Jim Mcallister; Miller", "journal": "Language and speech", "ref_id": "b0", "title": "The HCRC Map Task corpus", "year": "1991" }, { "authors": "Jasmijn Bastings; Sebastian Ebert; Polina Zablotskaia; Anders Sandholm; Katja Filippova", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Will you find these shortcuts?\" A protocol for evaluating the faithfulness of input salience methods for text classification", "year": "2022-12-07" }, { "authors": "Jasmijn Bastings; Katja Filippova", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "The elephant in the interpretability room: Why use attention as explanation when we have saliency methods", "year": "2020" }, { "authors": "Stella Biderman; Hailey Schoelkopf; Quentin Gregory Anthony; Herbie Bradley; O' Kyle; Eric Brien; Mohammad Hallahan; Shivanshu Aflah Khan; Purohit; Edward Usvsn Sai Prashanth; Raff", "journal": "", "ref_id": "b3", "title": "Pythia: A suite for analyzing large language models across training and scaling", "year": "2023" }, { "authors": " Pmlr", "journal": "", "ref_id": "b4", "title": "", "year": "" }, { "authors": "Christopher M Bishop", "journal": "Springer-Verlag", "ref_id": "b5", "title": "Pattern Recognition and Machine Learning (Information Science and Statistics)", "year": "2006" }, { "authors": "Kathryn Bock", "journal": "Cognitive psychology", "ref_id": "b6", "title": "Syntactic persistence in language production", "year": "1986" }, { "authors": "Susan E Brennan; Herbert H Clark", "journal": "Journal of experimental psychology: Learning, memory, and cognition", "ref_id": "b7", "title": "Conceptual pacts and lexical choice in conversation", "year": "1996" }, { "authors": "Ian Covert; Scott M Lundberg; Su-In Lee", "journal": "J. Mach. Learn. 
Res", "ref_id": "b8", "title": "Explaining by removing: A unified framework for model explanation", "year": "2021" }, { "authors": "Cristian Danescu-Niculescu-Mizil; Lillian Lee; Bo Pang; Jon Kleinberg", "journal": "", "ref_id": "b9", "title": "Echoes of power: Language effects and power differences in social interaction", "year": "2012" }, { "authors": "Yuntian Deng; Volodymyr Kuleshov; Alexander M Rush", "journal": "", "ref_id": "b10", "title": "Model criticism for long-form text generation", "year": "2022" }, { "authors": "Chloé Guillaume Dubuisson Duplessis; Frederic Clavel; Landragin", "journal": "", "ref_id": "b11", "title": "Automatic measures to characterise verbal alignment in human-agent interaction", "year": "2017" }, { "authors": "Chloé Guillaume Dubuisson Duplessis; Frédéric Clavel; Landragin", "journal": "", "ref_id": "b12", "title": "Automatic measures to characterise verbal alignment in human-agent interaction", "year": "2017" }, { "authors": "Erik Ekstedt; Gabriel Skantze", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "TurnGPT: a Transformer-based language model for predicting turn-taking in spoken dialog", "year": "2020" }, { "authors": "Ellen Mary; Manuel Foster; Amy Giuliani; Colin Isard; Jon Matheson; Alois Oberlander; Knoll", "journal": "", "ref_id": "b14", "title": "Evaluating description and reference strategies in a cooperative human-robot dialogue system", "year": "2009" }, { "authors": "Heather Friedberg; Diane Litman; Susannah Bf Paletz", "journal": "IEEE", "ref_id": "b15", "title": "Lexical entrainment and success in student engineering groups", "year": "2012" }, { "authors": "Riccardo Fusaroli; Kristian Joanna R Ączaszek-Leonardi; Tylén", "journal": "New Ideas in Psychology", "ref_id": "b16", "title": "Dialog as interpersonal synergy", "year": "2014" }, { "authors": "Richard Futrell; Ethan Wilcox; Takashi Morita; Peng Qian; Miguel Ballesteros; Roger Levy", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Neural language models as psycholinguistic subjects: Representations of syntactic state", "year": "2019" }, { "authors": "M Gallotti; M T Fairhurst; C D Frith", "journal": "Consciousness and Cognition", "ref_id": "b18", "title": "Alignment in social interactions", "year": "2017" }, { "authors": "Xiang Gao; Yizhe Zhang; Sungjin Lee; Michel Galley; Chris Brockett; Jianfeng Gao; Bill Dolan", "journal": "", "ref_id": "b19", "title": "Structuring latent spaces for stylized response generation", "year": "2019" }, { "authors": "Simon Garrod; Martin J Pickering", "journal": "", "ref_id": "b20", "title": "Alignment in dialogue", "year": "2007" }, { "authors": "Mario Giulianelli; Joris Baan; Wilker Aziz; Raquel Fernández; Barbara Plank", "journal": "", "ref_id": "b21", "title": "What comes next? 
Evaluating uncertainty in neural text generators against human production variability", "year": "2023" }, { "authors": "Mario Giulianelli; Arabella Sinclair; Raquel Fernández", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Construction repetition reduces information rate in dialogue", "year": "2022" }, { "authors": "J Godfrey; E Holliman; J Mcdaniel", "journal": "", "ref_id": "b23", "title": "Switchboard: telephone speech corpus for research and development", "year": "1992" }, { "authors": "Jia-Chen Gu; Tianda Li; Quan Liu; Zhen-Hua Ling; Zhiming Su; Si Wei; Xiaodan Zhu", "journal": "Association for Computing Machinery", "ref_id": "b24", "title": "Speaker-aware BERT for multi-turn response selection in retrieval-based chatbots", "year": "2020" }, { "authors": "Uri Hasson; A Asif; Bruno Ghazanfar; Simon Galantucci; Christian Garrod; Keysers", "journal": "Trends in cognitive sciences", "ref_id": "b25", "title": "Brainto-brain coupling: a mechanism for creating and sharing a social world", "year": "2012" }, { "authors": "Uri Hasson; Ariel Samuel A Nastase; Goldstein", "journal": "Neuron", "ref_id": "b26", "title": "Direct fit to nature: an evolutionary perspective on biological and artificial neural networks", "year": "2020" }, { "authors": "Rens Hoegen; Deepali Aneja; Daniel Mcduff; Mary Czerwinski", "journal": "", "ref_id": "b27", "title": "An end-to-end conversational style matching agent", "year": "2019" }, { "authors": "Judith Holler; Katie Wilkin", "journal": "Journal of Nonverbal Behavior", "ref_id": "b28", "title": "Co-speech gesture mimicry in the process of collaborative referring during face-to-face dialogue", "year": "2011" }, { "authors": "Ari Holtzman; Jan Buys; Leo Du; Maxwell Forbes; Yejin Choi", "journal": "", "ref_id": "b29", "title": "The Curious Case of Neural Text Degeneration", "year": "2019" }, { "authors": "Zhichao Hu; Gabrielle Halberg; Carolynn R Jimenez; Marilyn A Walker", "journal": "", "ref_id": "b30", "title": "Entrainment in pedestrian direction giving: How many kinds of entrainment? 
Situated dialog in speech-based humancomputer interaction", "year": "2016" }, { "authors": "Xinting Huang; Jianzhong Qi; Yu Sun; Rui Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Generalizable and explainable dialogue generation via explicit action learning", "year": "2020" }, { "authors": "Amy Isard; Carsten Brockmann; Jon Oberlander", "journal": "", "ref_id": "b32", "title": "Individuality and alignment in generated dialogues", "year": "2006" }, { "authors": "Sarthak Jain; Byron C Wallace", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Attention is not explanation", "year": "2019-06-02" }, { "authors": "Jaap Jumelet; Willem Zuidema", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Feature interactions reveal linguistic structure in language models", "year": "2023" }, { "authors": "Jaap Jumelet; Willem H Zuidema; Dieuwke Hupkes", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Analysing neural language models: Contextual decomposition reveals default reasoning in number and gender assignment", "year": "2019-11-03" }, { "authors": "Narine Kokhlikyan; Vivek Miglani; Miguel Martin; Edward Wang; Bilal Alsallakh; Jonathan Reynolds; Alexander Melnikov; Natalia Kliushkina; Carlos Araya; Siqi Yan; Orion Reblitz-Richardson", "journal": "", "ref_id": "b36", "title": "Captum: A unified and generic model interpretability library for pytorch", "year": "2020" }, { "authors": "Daphne Koller; Nir Friedman", "journal": "MIT press", "ref_id": "b37", "title": "Probabilistic graphical models: Principles and techniques", "year": "2009" }, { "authors": "Jiwei Li; Will Monroe; Dan Jurafsky", "journal": "", "ref_id": "b38", "title": "a. 
A simple, fast diverse decoding algorithm for neural generation", "year": "2016" }, { "authors": "Jiwei Li; Will Monroe; Alan Ritter; Dan Jurafsky; Michel Galley; Jianfeng Gao", "journal": "", "ref_id": "b39", "title": "Deep reinforcement learning for dialogue generation", "year": "2016" }, { "authors": "Chia-Wei Liu; Ryan Lowe; Michael Iulian V Serban; Laurent Noseworthy; Joelle Charlin; Pineau", "journal": "", "ref_id": "b40", "title": "How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation", "year": "2016" }, { "authors": "José Lopes; Maxine Eskenazi; Isabel Trancoso", "journal": "Computer Speech & Language", "ref_id": "b41", "title": "From rule-based to data-driven lexical entrainment models in spoken dialog systems", "year": "2015" }, { "authors": "M Scott; Su-In Lundberg; Lee", "journal": "", "ref_id": "b42", "title": "A unified approach to interpreting model predictions", "year": "2017-09" }, { "authors": "W James Murdoch; Chandan Singh; Karl Kumbier; Reza Abbasi-Asl; Bin Yu", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b43", "title": "Definitions, methods, and applications in interpretable machine learning", "year": "2019" }, { "authors": "Anmol Nayak; Hari Prasad Timmapathini", "journal": "NLP Association of India (NLPAI", "ref_id": "b44", "title": "Using integrated gradients and constituency parse trees to explain linguistic acceptability learnt by BERT", "year": "2021" }, { "authors": "Utku Norman; Tanvi Dinkar; Barbara Bruno; Chloé Clavel", "journal": "Dialogue & Discourse", "ref_id": "b45", "title": "Studying alignment in a collaborative learning activity via automatic methods: The link between what we say and do", "year": "2022" }, { "authors": "Byung-Doh Oh; William Schuler", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b46", "title": "Why Does Surprisal From Larger Transformer-Based Language Models Provide a Poorer Fit to Human Reading Times", "year": "2023" }, { "authors": "Denis Paperno; Germán Kruszewski; Angeliki Lazaridou; Ngoc Quan Pham; Raffaella Bernardi; Sandro Pezzelle; Marco Baroni; Gemma Boleda; Raquel Fernández", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "The LAMBADA dataset: Word prediction requiring a broad discourse context", "year": "2016" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b48", "title": "Bleu: a Method for Automatic Evaluation of Machine Translation", "year": "2002" }, { "authors": "Jennifer S Pardo", "journal": "The Journal of the Acoustical Society of America", "ref_id": "b49", "title": "On phonetic convergence during conversational interaction", "year": "2006" }, { "authors": "Daniel Park", "journal": "", "ref_id": "b50", "title": "", "year": "2023" }, { "authors": "Martin J Pickering; Simon Garrod", "journal": "Behavioral and Brain Sciences", "ref_id": "b51", "title": "a. 
The interactive-alignment model: Developments and refinements", "year": "2004" }, { "authors": "J Martin; Simon Pickering; Garrod", "journal": "Behavioral and Brain Sciences", "ref_id": "b52", "title": "Toward a mechanistic psychology of dialogue", "year": "2004" }, { "authors": "J Martin; Simon Pickering; Garrod", "journal": "", "ref_id": "b53", "title": "Establishing and using routines during dialogue: Implications for psychology and linguistics", "year": "2005" }, { "authors": "Janet B Pierrehumbert", "journal": "", "ref_id": "b54", "title": "Burstiness of verbs and derived nouns. Shall We Play the Festschrift Game? Essays on the Occasion of Lauri Carlson's 60th Birthday", "year": "2012" }, { "authors": "Krishna Pillutla; Swabha Swayamdipta; Rowan Zellers; John Thickstun; Sean Welleck; Yejin Choi; Zaid Harchaoui", "journal": "", "ref_id": "b55", "title": "MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers", "year": "2021" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b56", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Marlou Rasenberg; Asli Özyürek; Mark Dingemanse", "journal": "Cognitive science", "ref_id": "b57", "title": "Alignment in multimodal interaction: An integrative framework", "year": "2020" }, { "authors": "David Reitter; Frank Keller; Johanna D Moore", "journal": "Association for Computational Linguistics", "ref_id": "b58", "title": "a. Computational modelling of structural priming in dialogue", "year": "2006" }, { "authors": "David Reitter; Frank Keller; Johanna D Moore", "journal": "", "ref_id": "b59", "title": "Computational modelling of structural priming in dialogue", "year": "2006" }, { "authors": "David Reitter; Johanna D Moore", "journal": "Association for Computational Linguistics", "ref_id": "b60", "title": "Predicting success in dialogue", "year": "2007" }, { "authors": "David Reitter; Johanna D Moore", "journal": "Journal of Memory and Language", "ref_id": "b61", "title": "Alignment and task success in spoken dialogue", "year": "2014" }, { "authors": "Gabriele Sarti; Nils Feldhus; Ludwig Sickert; Oskar Van Der Wal; Malvina Nissim; Arianna Bisazza", "journal": "", "ref_id": "b62", "title": "Inseq: An interpretability toolkit for sequence generation models", "year": "2023" }, { "authors": "David Schlangen", "journal": "", "ref_id": "b63", "title": "Causes and strategies for requesting clarification in dialogue", "year": "2004" }, { "authors": "Cory Shain; Clara Meister; Tiago Pimentel; Ryan Cotterell; Roger Philip; Levy ", "journal": "", "ref_id": "b64", "title": "Large-scale evidence for logarithmic effects of word predictability on reading time", "year": "2022" }, { "authors": "Avanti Shrikumar; Peyton Greenside; Anshul Kundaje", "journal": "ICML", "ref_id": "b65", "title": "Learning Important Features Through Propagating Activation Differences", "year": "2017" }, { "authors": "Arabella Sinclair; Jaap Jumelet; Willem Zuidema; Raquel Fernández", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b66", "title": "Structural persistence in language models: Priming as a window into abstract language representations", "year": "2022" }, { "authors": "Arabella Sinclair; Adam Lopez; Dragan Gasevic", "journal": "", "ref_id": "b67", "title": "Does ability affect alignment in second language tutorial dialogue", "year": "2018" }, { "authors": "Arabella Sinclair; Kate 
Mccurdy; Christopher G Lucas; Adam Lopez; Dragan Gaševic", "journal": "International Educational Data Mining Society", "ref_id": "b68", "title": "Tutorbot corpus: Evidence of human-agent verbal alignment in second language learner dialogues", "year": "2019" }, { "authors": "J Arabella; Raquel Sinclair; Fernández", "journal": "", "ref_id": "b69", "title": "Construction coordination in first and second language acquisition", "year": "2021" }, { "authors": "J Arabella; Raquel Sinclair; Fernández", "journal": "System", "ref_id": "b70", "title": "Alignment of code switching varies with proficiency in second language learning dialogue", "year": "2023" }, { "authors": "J Arabella; Bertrand Sinclair; Schneider", "journal": "International Educational Data Mining Society", "ref_id": "b71", "title": "Linguistic and gestural coordination: Do learners converge in collaborative dialogue?", "year": "2021" }, { "authors": "Mukund Sundararajan; Ankur Taly; Qiqi Yan", "journal": "PMLR", "ref_id": "b72", "title": "Axiomatic attribution for deep networks", "year": "2017-06-11" }, { "authors": "Louis Ten Bosch; Nelleke Oostdijk; Lou Boves", "journal": "Speech Communication", "ref_id": "b73", "title": "On temporal aspects of turn taking in conversational dialogues", "year": "2005" }, { "authors": "M Kristen; Matthew J Tooley; Traxler", "journal": "Language and Linguistics Compass", "ref_id": "b74", "title": "Syntactic priming effects in comprehension: A critical review", "year": "2010" }, { "authors": "Yi-Lin Tuan; Connor Pryor; Wenhu Chen; Lise Getoor; William Yang; Wang ", "journal": "", "ref_id": "b75", "title": "Local explanation of dialogue response generation", "year": "2021-12-06" }, { "authors": "Sarenne Wallbridge; Peter Bell; Catherine Lai", "journal": "", "ref_id": "b76", "title": "Do dialogue representations align with perception? 
An empirical study", "year": "2023" }, { "authors": "Arthur Ward; Diane Litman", "journal": "Frontiers in Artificial Intelligence and Applications", "ref_id": "b77", "title": "Dialog convergence and learning", "year": "2007" }, { "authors": "Sean Welleck; Ilia Kulikov; Stephen Roller; Emily Dinan; Kyunghyun Cho; Jason Weston", "journal": "", "ref_id": "b78", "title": "Neural text generation with unlikelihood training", "year": "2019" }, { "authors": "Deanna Wilkes-Gibbs; Herbert H Clark", "journal": "Journal of Memory and Language", "ref_id": "b79", "title": "Coordinating beliefs in conversation", "year": "1992" }, { "authors": "Thomas Wolf; Victor Sanh; Julien Chaumond; Clement Delangue", "journal": "", "ref_id": "b80", "title": "TransferTransfo: A transfer learning approach for neural network based conversational agents", "year": "2019" }, { "authors": "Tongshuang Wu; Marco Tulio Ribeiro; Jeffrey Heer; Daniel Weld", "journal": "Association for Computational Linguistics", "ref_id": "b81", "title": "Polyjuice: Generating counterfactuals for explaining, evaluating, and improving models", "year": "2021" }, { "authors": "Yadong Xi; Jiashu Pu; Xiaoxi Mao", "journal": "", "ref_id": "b82", "title": "Taming repetition in dialogue generation", "year": "2021" }, { "authors": "Yang Xu; David Reitter", "journal": "", "ref_id": "b83", "title": "An Evaluation and Comparison of Linguistic Alignment Measures", "year": "2015" }, { "authors": "Kayo Yin; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b84", "title": "Interpreting language models with contrastive explanations", "year": "2022" }, { "authors": "Chen Zhang; Grandee Lee; Luis Fernando; D' Haro; Haizhou Li", "journal": "IEEE/ACM Transactions on Audio, Speech, and Language Processing", "ref_id": "b85", "title": "D-score: Holistic dialogue evaluation without reference", "year": "2021" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona Diab; Xian Li; Xi Victoria Lin", "journal": "", "ref_id": "b86", "title": "Opt: Open pre-trained transformer language models", "year": "2022" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b87", "title": "Bertscore: Evaluating text generation with bert", "year": "2019" }, { "authors": "Yizhe Zhang; Siqi Sun; Michel Galley; Yen-Chun Chen; Chris Brockett; Xiang Gao; Jianfeng Gao; Jingjing Liu; Bill Dolan", "journal": "", "ref_id": "b88", "title": "DIALOGPT : Large-scale generative pre-training for conversational response generation", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 143.73, 225.48, 146.01, 21.69 ], "formula_id": "formula_0", "formula_text": "VO = \frac{|w_{t_c} \cap w_{t_p}|}{|w_{t_c}|} \quad (1)" }, { "formula_coordinates": [ 4, 143.86, 325.76, 145.88, 21.69 ], "formula_id": "formula_1", "formula_text": "CO = \frac{|C_{t_c} \cap C_{t_p}|}{|w_{t_c}|} \quad (2)" }, { "formula_coordinates": [ 4, 130.62, 560.65, 159.11, 19.75 ], "formula_id": "formula_2", "formula_text": "PMI(c, s) = \log_2 \frac{P(c|s)}{P(c)} \quad (3)" }, { "formula_coordinates": [ 7, 375.09, 327.82, 149.92, 10.63 ], "formula_id": "formula_3", "formula_text": "\Phi \in \mathbb{R}^{|w_i| \times |w_t| \times n_{emb}} \quad (4)" }, { "formula_coordinates": [ 7, 345.92, 496.33, 179.09, 21.13 ], "formula_id": "formula_4", "formula_text": "\Phi' \in \mathbb{R}^{|T|}, \quad \Phi'_i = \sum_{j \in T_i} \sum_{k} \sum_{l} \Phi_{j,k,l} \quad (5)" }, { "formula_coordinates": [ 7, 375.94, 700.75, 149.07, 44.66 ], "formula_id": "formula_5", "formula_text": "\Phi'' = \frac{\Phi'}{\max(|\Phi'|)} \quad (6), \qquad \phi = \Phi'' - \mathrm{mean}(\Phi'') \quad (7)" } ]
10.1145/3090051
2023-11-25
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b20", "b23", "b45", "b51", "b54", "b60", "b5", "b10", "b60", "b30", "b16", "b39", "b12", "b40", "b60" ], "table_ref": [], "text": "Mobile and wearable sensors that collect health and fitness data have seen explosive growth in the past five years [21,24]. Products such as Fitbit and Apple Watch have dramatically advanced in their sensing capabilities beyond simple step counts to include optical heart rate sensors and two lead electrocardiogram (ECG) measurements that could provide clinicians valuable information about a patient's symptoms outside their practice. Beyond just tracking a run, or checking for high heart rate, researchers have shown the potential of leveraging such passive sensing data to model high-level, complex behaviors, such as mental health [46,52,55,61].\nDespite their potential, mobile and wearable data use in clinical mental health practice faces three key challenges. First, there is a lack of trust and clinically validated data; for instance, there is high uncertainty in correlations between activity levels and mental states like depression. Second, clinicians struggle with the overwhelming volume of longitudinal sensor data and have difficulty interpreting non-standard signals like phone usage. Third, the qualitative nature of mental health limits the utility of binary data classifications, particularly without a broader context of the patient's lifestyle and history.\nTo address these challenges, we take a novel approach that leverages AI tools based on large language models (LLMs) to synthesize clinically useful insights from multi-modal wearable sensor data. We develop methods to use LLMs to process self-tracking data from mobile devices measuring signals like step count and sleep to generate reasoning about how trends from multiple sensors relate to mental health conditions like depression and anxiety. We first use this reasoning to perform binary classification; although our classification accuracy exceeds the state of the art it is not high enough for clinical use. This leads to our key finding that, even more impactful than classification, is a new human-AI approach in which clinician experts interactively query these tools and combine AI-generated reasoning with their domain knowledge and context of the patient's experiences.\nIn this work, we propose two key paradigm shifts in the way we design tools for analyzing health data, which differs significantly from traditional signal processing or conventional ML techniques. The typical approach is to train a model to classify a specific condition (e.g. depression) based on a specific set of time series sensor data inputs, and re-train or validate for each new sensor device model [6,11]. This presents a fundamental barrier to adoption, as new mobile and wearable device models are released annually and require significant resources for validation. These approaches are very rigid in their input requirements meaning changing sampling parameters or leaving out a sensor input could cause models to break completely. Additionally, the notable variability in both quality and format of data gathered by commercially available sensors and devices presents a considerable challenge in current methodologies. Moreover, conventional machine learning approaches perform poorly on abstract relations. 
For example, recent work evaluating the ability of 19 different ML models to predict depression using self-tracking data revealed many achieved accuracies below 50% [61].\nGiven these challenges inherent to current machine learning approaches for health data analysis, it becomes imperative to explore alternative methods that offer flexibility, adaptability, and deeper analytical capabilities. This leads us to consider the potential of LLMs in revolutionizing the way we approach sensor data in health settings. Unlike traditional ML techniques, LLMs possess a broad and dynamic understanding of cross-domain knowledge, enabling them to process multifaceted sensor data with greater contextual understanding. This shift in methodology from rigid, classification-focused models to the more versatile and interpretative capabilities of LLMs offers a promising avenue for advancing health data analysis. The following insights delve deeper into this paradigm shift, highlighting how LLMs can be effectively employed to overcome the limitations of conventional approaches and potentially transform clinical practice.\n• Insight 1: Using LLMs to process ubiquitous sensor data. General purpose foundation models with a robust representation of cross-domain topics and the ability to take flexible inputs present a unique set of capabilities to address these challenges. Using LLMs to process multi-sensor mobile and wearable data, however, raises fundamental research questions. The first and most basic is how can we input multi-sensor data to LLMs? While recent work has shown preliminary findings on processing single time series [31], their ability to synthesize multiple time series signals and connect them to abstract cross-domain concepts remains unexplored. How can we design prompts with context for LLMs to effectively interpret multi-sensor data? Prompting strategies may also bias LLM responses and constrains their outputs. Is it possible to develop prompts that can perform binary classification similar to conventional ML models? Most importantly, how could these tools be used in clinical practice? We systematically investigate these questions and develop input and prompting strategies to use LLMs to process multi-sensor data. We show through chain of thought prompting that LLMs can produce more accurate depression classifications than state-of-the-art systems.\n• Insight 2: Shifting from classification towards reasoning to support clinical decision making. Mental health diagnoses are nuanced and clinicians may disagree on diagnoses [17], which makes it difficult to use and interpret binary classifications; however LLMs are generative tools that can output analysis and reasoning about sensor data to aid and empower clinicians for a collaborative human-AI approach to psychotherapy. We find LLMs can generate text explanations correctly identifying anomalies and trends in time-series data and indicate how one or more sensors may relate to depression and mental health. For example, in one instance the model identifies that a patient only slept for 75 minutes one night. This insight could allow a clinician to prompt the patient for potential causes or related symptoms. We perform a series of user studies to validate these clinical insights with domain experts and investigate clinical use scenarios.\nBuilding upon the insights previously discussed, this paper systematically examines the interaction of LLMs with multi-modal wearable sensor data. 
We delve into the performance of LLMs in executing a challenging mental health classification task and explore how LLM-based human-AI collaboration impacts clinical practice. This comprehensive study aims to unravel the multifaceted capabilities of LLMs in processing and interpreting health-related sensor data, thereby opening new avenues in health data analytics. We summarize our contributions below:\n• We perform the first exploration of using LLMs to process multi-sensor ubiquitous sensor data. We develop a series of prompting and fine-tuning strategies enabling LLMs to perform binary depression classification through chain-of-thought prompting. We compare results across GPT-4 [40], PaLM 2 [13] and GPT-3.5 [41] and achieve accuracies as high as 61.1% which exceeds the state of the art on this dataset [61].\n• We propose a novel approach to developing AI systems for processing ubiquitous health sensor data that focuses on generating insights to empower human clinicians. We find that LLMs can generate reasoning about multi-sensor data observing trends and anomalies in the data and make connections between multiple input signals and relevant mental health scenarios. We evaluate all three models and find that GPT-4 references numerical data correctly 75% of the time and agreement across clinician experts (N=8) that the models correctly interpreted patterns related to mental health.\n• We evaluate potential clinical use cases for this human-AI approach through an interactive user study with mental health professionals. We find strong agreement across users that our approach would help interpret self-tracking data, is preferable to binary classification, and agreement that such tools could help enhance treatment. Based on these interviews we present an in-depth discussion of the potential opportunities, concerns, and directions for future research." }, { "figure_ref": [], "heading": "RELATED WORK 2.1 Multi-sensor Passive Sensing for Health and Well-being", "publication_ref": [ "b2", "b34", "b61", "b52", "b59", "b33", "b35", "b53", "b64", "b46", "b44", "b3", "b11", "b36", "b50", "b54", "b60", "b8" ], "table_ref": [], "text": "Smartphones and wearable devices, now ubiquitous in our lives, function as passive sensors, seamlessly capturing a vast range of data. Their near-constant presence allows for the unobtrusive and continuous monitoring of behavior, activity, and physiological signals. Over the last decade, significant progress has been made in passive sensing and behavioral modeling, impacting areas such as physiological health condition detection [3,35,62], monitoring mental health status [53,60], measuring job performance [34,36], tracking education outcomes [54,65], and tracing social justice [47]. Researchers employ various methods, including statistical analysis and conventional machine learning (ML) models, to explore these areas. In a mental health context, initial research established statistical correlations between mental health conditions and mobile sensing data. For instance, Saeb et al. [45] found significant correlations between depression scores and smartphone usage patterns, and Ben-Zeev et al. [4] identified links between changes in depression severity levels and features related to sleep duration, speech duration, and mobility. More recent efforts have focused on leveraging these results to build ML models for mental health disorder diagnosis and detection [12,37,51,55]. To further growth in this area, Xu et al. 
[61] collected and released a multi-year passive sensing dataset and platform that covers a wide range of physical health, mental health, and social well-being measurements. However, most research in this domain relies on conventional statistical and ML methods, and the recent improvements in the performance of large foundation models [9] present an opportunity to explore new techniques for analyzing passively-collected sensor data." }, { "figure_ref": [], "heading": "LLMs for Health Applications", "publication_ref": [ "b15", "b41", "b7", "b42", "b56", "b39", "b13", "b14", "b49", "b48", "b38", "b43", "b57", "b66", "b4", "b18", "b25", "b29", "b30", "b37", "b47", "b58", "b47", "b58", "b25", "b26", "b40", "b65", "b28", "b1", "b63", "b31" ], "table_ref": [], "text": "The success of Transformer-based language models such as BERT [16] and GPT [42], have led to the development of larger and more powerful language models (e.g. GPT-3 [8] and T5 [43]). Instruction finetuning by including instructions (i.e. prompts) from a range of datasets and task domains during both the training and generation phases has led to the development of single models that are capable of performing a wide range of tasks [57]. These instruction-finetuned LLMs, such as GPT-4 [40], PaLM [14], FLAN-T5 [15], LLaMA [50], and Alpaca [49], contain tens to hundreds of billions of parameters and achieve a promising level of performance on a variety of tasks, such as question answering [39,44], logic reasoning [58,67], machine translation [5,19], and more.\nIn the health sector, these LLMs have been applied in several studies [26,30,31,38,48,59]. For example, Singhal et al. [48] utilized a finetuned version PaLM-2 to score as high as 86.5% on MedQA dataset. Similarly, Wu et al. [59] finetuned LLaMA on a corpus of academic medical papers and textbooks resulting in promising results on multiple biomedical QA datasets. Jiang et al. [26] trained a medical language model on unstructured clinical notes from the electronic health record and finetuned for performance across a wide range of clinical and operational predictive tasks. These examples underscore the versatility and potential effectiveness of LLMs in the medical space.\nIn the mental health domain, LLMs have been explored for applications such as sentiment analysis and emotional reasoning [27,41,66]. Lamichhane [29], Amin et al. [2], and Yang et al. [64] tested the performance of ChatGPT on multiple classification tasks (stress, depression, and suicide risk) and found that ChatGPT shows initial potential for these mental health applications, but it still has room for significant improvement.\nDespite this, there has been little work that focuses specifically on integration with mobile and wearable health data, with most of the existing literature focusing on text data rather than multi-sensor streams. Closer to our work, Liu et al. [32] demonstrated that with only few-shot tuning, a large language model is capable of grounding various physiological and behavioral time-series data as well as making meaningful inferences on numerous health tasks (e.g., heart rate measurement, atrial fibrillation detection, and mood score prediction). However, their work is based on self-curated toy datasets consisting of well-described physiological signals and behaviors." 
}, { "figure_ref": [], "heading": "MOTIVATIONS AND RESEARCH QUESTIONS 3.1 Reasons for Using Language Models", "publication_ref": [], "table_ref": [], "text": "In this paper, we explore the application of contemporary LLMs (GPT-3.5, GPT-4, and PaLM 2) in analyzing data from diverse real-world sources, particularly wearable and phone-based sensors. Our goal is to evaluate the potential of these models in predicting depression and generating insightful analyses from mobile health data, thereby assisting therapists in clinical settings. We undertake an interactive interview study with mental health professionals to critically assess the practicality and limitations of our approach, and to consider the prospective role of language models in mental health contexts.\nOur methodology initially addresses the issue of trust and the necessity for rigorous sensor validation. We shift from a black-box automated diagnosis to a 'human-in-the-loop' approach, wherein AI-generated summaries offer hypotheses for clinicians to investigate and consider. This method enhances trust and augments the clinician's decision-making process.Furthermore, the LLMs' capability to synthesize data into natural language summaries akin to expert notes substantially lowers the barrier to adoption. The ability of these models to integrate and process various data streams, coupled with their comprehension of the physical world, potentially allows for the identification of patterns and relationships that might elude human experts.\nA particularly compelling aspect of our approach is the LLMs' adeptness in handling data from multiple sensor streams, as well as their proficiency in processing information from different types of devices with varying resolutions. This capability is critical in the context of wearable health applications, where there is a significant variance in the quality and format of data collected by commercially available sensors and devices. Such variance poses a substantial challenge in current practices, yet it is precisely this variability in data types and formats that LLMs are well-equipped to manage. Consequently, these models show remarkable promise in generalizing across diverse input data, positioning them as promising candidates for addressing the challenges posed by the heterogeneity of data in the field of mobile health." }, { "figure_ref": [], "heading": "Research Questions", "publication_ref": [], "table_ref": [], "text": "In this paper, we aim to answer the following research questions:\nRQ1: How can we input multi-variable sensor data to LLMs? Language models are designed to accept natural language text inputs. We will investigate different formats of data inputs as well as methods of prompt construction to best convey the context of multi-sensor wearable data to the LLM.\nRQ2: How can we perform classification tasks on multi-variable sensor data using LLMs? Our data inputs are abstract and many are not directly related to mental health. We will explore ways of prompting the model to synthesize the multisensor data and output a binary classification.\nRQ3: Can LLMs generate analysis of multi-variable sensor data that is both accurate and consistent with the raw data? In our process to perform binary classification we observe the models generate intermediate steps of clinical reasoning grounded in the input data. 
We will evaluate whether these model outputs correctly reflect real values and trends in the data.\nRQ4: Does the reasoning generated by LLMs based on wearable data align with clinical understanding of mental health conditions? Beyond verifying that the models can identify trends and anomalies in the data it is even more important to verify the models are connecting observations to valid clinical reasoning." }, { "figure_ref": [], "heading": "RQ5:", "publication_ref": [], "table_ref": [], "text": "In what ways could human-AI collaboration contribute to the clinical workflows of therapists? Beyond validating that the models produce clinically and factually correct outputs, it is important to also consider the implications of such a tool and how it could be used for therapy." }, { "figure_ref": [], "heading": "USING LLMS FOR BINARY CLASSIFICATION WITH MULTI-SENSOR DATA", "publication_ref": [], "table_ref": [], "text": "In this section, we describe our experiment setup and the results for answering our first two research questions (RQ1 and RQ2) focused on how to input multi-sensor data to LLMs and how to use this data for binary classification of depression and anxiety. " }, { "figure_ref": [ "fig_1" ], "heading": "Dataset", "publication_ref": [ "b62", "b27", "b21", "b22", "b24" ], "table_ref": [], "text": "In this paper, we use the Globem dataset [63], with its extensive collection of unique user data, sourced from mobile and wearable sensors, and together with a wide range of well-being metrics. Globem includes weekly Ecological Momentary Assessment (EMA) surveys focused on capturing participants' recent sense of their mental health, including Patient Health Questionnaire 4 (PHQ-4) [28] which we use as the ground truth in our depression classification task [22,23,25]. Paired with this survey data, Globem passively collected sensor data 24×7 from a mobile app and wearables passively, including measures of activity such as steps, GPS locations, phone calls, social activity proxies and more. The dataset then extracts hundreds of features (time at home, time asleep, etc) from these raw measurements.\nTo constrain the input token length, we select a subset of 16 diverse features, including Location, Phone Usage, Bluetooth, Calls, Physical Activity, and Sleep. The detailed feature types are described in the \"Collected Data\" section of Figure 2. We set the time length of the data as 28 days. The time window includes survey data on the last day. Therefore, the mobile and wearable sensor data for each sample is 28×16 (28 days × 16 features). We randomly sampled 30 data points from each year with an equal distribution of labels, half of which have a PHQ-4 score of less than 1 as negative samples, and half of which have a PHQ-4 score of greater than 5 as positive samples. We picked the thresholds following the standard PHQ-4 criteria, where the negative samples are at the normal level and the positive samples are at the moderate or severe level of depression and anxiety. The final test data set contains a total of 90 class-balanced samples from three years." }, { "figure_ref": [ "fig_0" ], "heading": "How Can We Input Multi-Variable Sensor Data Into LLMs?", "publication_ref": [ "b60" ], "table_ref": [], "text": "4.2.1 Inputting raw data to LLMs. The mobile and wearable sensing data from Globem [61] are raw data which are not easy to interpret, and it is unclear whether the LLM will be able to synthesize multi-sensor data. 
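As a rough, hypothetical illustration of how such a daily feature window can be serialized into plain text before being placed in a prompt (the column names and values below are invented for illustration, and pandas' to_markdown requires the tabulate package), consider:

import pandas as pd

# Hypothetical 28-day window with a small subset of daily features.
window = pd.DataFrame({
    "date": pd.date_range("2019-04-29", periods=28).strftime("%Y-%m-%d"),
    "steps": [5234, 8120, 3010] + [6000] * 25,
    "time_asleep_minutes": [412, 75, 514] + [420] * 25,
    "time_at_home_minutes": [666, 555, 167] + [600] * 25,
})

# Two candidate plain-text serializations of the same table.
as_csv = window.to_csv(index=False)            # comma-separated values
as_markdown = window.to_markdown(index=False)  # Markdown table (needs `tabulate`)

Which serialization works best for a given model is an empirical question.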
We evaluate multiple data input formats to determine which one is most suitable for LLMs. We tried four ways to format these raw data, including comma-separated values (CSV), Tabular, Markdown and LaTeX.\nInput format results. We use the classification results of depression as a measure of these formats. From Figure 1, we can see that CSV, Tabular and Markdown formats exhibit comparable performance levels and these results are consistent in both GPT-4 and PaLM 2. In contrast, the LaTeX format demonstrates a performance gap when compared to the other three formats. The observed results align logically with expectations, considering the predominant sources of training data for LLMs. The vast majority of this data is sourced from the Internet, where formats like CSV and Markdown are far more prevalent than LaTeX. Given this disparity in data availability, it stands to reason that LLMs would exhibit higher accuracy in processing and interpreting CSV and Markdown inputs as compared to LaTeX. Since Markdown shows the best overall performance, we chose Markdown as the data format for our subsequent experiments." }, { "figure_ref": [ "fig_1" ], "heading": "Classification Tasks on Multi-Variable Sensor Data Using LLMs", "publication_ref": [ "b0", "b55", "b57", "b19", "b6", "b60", "b60" ], "table_ref": [], "text": "4.3.1 Method and Prompt Design. To create the full prompt, we concatenate the formatted sensor data with variable descriptions and instructions for the task. A sample prompt in this format is shown in Figure 2. We focus on three different variants of prompting, including\n(1) Direct Prediction: we directly ask the LLM to perform depression classification with prompts that only include the basic information and the formatted sensor data. (2) Chain-of-Thought (CoT) prompting: building upon direct prediction, we induce models to perform step-by-step reasoning with carefully crafted instructions to hypothesize about the subject's overall mental health.\n(3) Reasoning with extra data: based on CoT, we provide extra task-related domain information such as more detailed explanations of input variables (Exp), and the depression criteria from the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-V) [1].\nLLMs results. We test these prompting variants on top of three state-of-the-art LLM models, GPT-3.5, GPT-4 and PaLM2. Figure 3 reveals that GPT-3.5 attains an accuracy rate of 50%. Upon a closer look, however, we observe that it does not effectively address the question posed, consistently defaulting to a response of 'No', which results in 50% accuracy due to our balanced dataset. Through Figure 3, we can see that CoT improves the accuracy of the model consistently compared to direct prediction, as found in many related studies [56,58]. With the CoT + Exp. method, PaLM2 achieved the highest accuracy of 61.11%. Note that adding additional information, even if accurate and pertinent to the topic, does not always result in an increase in performance. Both PaLM 2 and GPT-4 perform their worst, at 48.89% and 51.11% respectively, when provided with the DSM-V description of depression as part of the prompt. In these cases, we see that this results in a significant increase in the percentage of samples classified as depressed, with GPT-4 classifying as high as 98.89% of samples as positive. 
These results suggest that even mentioning depression via the DSM criteria may drastically bias the model toward a single output, highlighting the importance of neutral prompt design.\nBaseline and results. We also compare these classification accuracies to a few baselines, including classic machine learning methods, Support Vector Machine (SVMs) [20] and Random Forest (RF) [7], as well as the state-of-the-art self-superivsed learning method, Reorder proposed by Xu et al. [61] on this same dataset. We trained SVM and RF using a structure of 28 days × 16 features. For Reorder, we implemented two versions, one mimicking the original implementation in [61] with 54 features, the other with the same 16 features as others.\nGiven that the baseline models do not support a zero-shot setting, we curated a training dataset for comparative analysis. This dataset comprises the remaining data points that meet the PHQ-4 thresholds, totaling 384 samples. These were formatted identically to the test set samples. We conducted evaluations using Support Vector Machine (SVM), Random Forest (RF), and Reorder algorithms, alongside the Reorder algorithm optimized with 54 features (Reorder-54 Features). These evaluations were performed on the identical test set used for assessing the LLMs.\nThe performance metrics reveal that the accuracies of SVM, RF, Reorder, and Reorder-54 Features are 51.11%, 53.33%, 51.11%, and 58.89% respectively. Notably, among the three baseline models utilizing the same set of 16 features, none could surpass the zero-shot Chain of Thought (CoT) results achieved using the GPT-4 and PaLM2 models. This outcome underscores the superior overall performance of LLMs compared to traditional baselines. However, it is important to highlight that the performance advantage of LLMs, while notable, remains marginal. Furthermore, the current accuracy levels are not yet adequate for practical deployment scenarios." }, { "figure_ref": [ "fig_1" ], "heading": "Fine-tuning.", "publication_ref": [], "table_ref": [], "text": "Next we explore methods of improving model performance. Evaluating the CoT reasoning produced by GPT-3.5 reveals that unlike GPT-4, smaller LLMs rarely incorporates analysis that relates specifically to the numerical values of the mobile health data. This raises an important question: can we use instruction fine-tuning to enable smaller LLMs to perform more effectively on such challenging wearable classification task?\nTo answer this question, we explore using the reasoning responses generated from the GPT-4 (see Figure 2) to fine-tune the GPT-3.5 model. For the fine-tuning experiment, the prompt design was kept consistent with the same methodology outlined above. Utilizing GPT-4, we generate a collection of high-quality reasoning responses based on data external to the test set. From this assortment, correctly-classified reasoning responses were selected to form a candidate training set. Finally, we make a balanced instruction training set, comprising 70 sample with high-quality reasoning, evenly distributed between positive and negative examples.\nFine-tuing results. After fine-tuning GPT-3.5 with 2 epochs using this balanced instruction-tuning set, we see an improvement in the performance of GPT-3.5. While it initially fails to properly perform classification, our fine-tuned version of GPT-3.5 achieves an accuracy of 56.67% on the test set, which is very close to the GPT-4 performance on accuracy. 
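For concreteness, the sketch below shows one way such an instruction-tuning file could be assembled from correctly-classified GPT-4 reasoning responses. It is a minimal, hypothetical example: the placeholder strings, file name, and system message are illustrative, and the JSONL layout follows the common chat fine-tuning convention rather than reproducing our exact pipeline.

import json

# Each candidate pairs the original prompt (instructions plus the serialized
# 28x16 sensor table) with a GPT-4 reasoning response and whether its final
# answer matched the PHQ-4-derived label.
candidates = [
    {"prompt": "<instructions + formatted sensor data>",
     "reasoning": "<step-by-step analysis ending in a Yes/No answer>",
     "correct": True},
    # ... more examples, kept balanced across positive and negative labels
]

with open("instruction_tuning_set.jsonl", "w") as f:
    for example in candidates:
        if not example["correct"]:
            continue  # keep only correctly-classified reasoning as training targets
        record = {"messages": [
            {"role": "system", "content": "You analyze mobile and wearable sensor data."},
            {"role": "user", "content": example["prompt"]},
            {"role": "assistant", "content": example["reasoning"]},
        ]}
        f.write(json.dumps(record) + "\n")

# The resulting JSONL file can then be uploaded to the provider's fine-tuning
# service (here, for GPT-3.5) and trained for a small number of epochs.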
In the following sections, we can also see the improvement of fine-tuned GPT-3.5 in the evaluation of the reasoning." }, { "figure_ref": [], "heading": "Generalization on Other Mental Health Classification Tasks", "publication_ref": [], "table_ref": [], "text": "In addition to depression, we seek to explore the generalization abilities of LLMs to other mental health classification tasks. Specifically, we use the same prompt design as depression classification and only change instructions at the end to conduct experiments on anxiety classification.\nGeneralization Results. We pick RF as the baseline, which has demonstrated the best performance among non-LLM methods in classifying depression. The training and evaluation process is the same as for the depression evaluation. It achieves an accuracy of 55.56%. We also evaluated 4 LLM models, including GPT-3.5, our fine-tuned" }, { "figure_ref": [ "fig_3" ], "heading": "", "publication_ref": [], "table_ref": [], "text": "GPT-3.5 for depression classification, GPT-4, and PaLM2. We employ the top-performing CoT prompting strategy without DSM-V context or additional variable information. The results are summarized in Figure 5. We can see that GPT-3.5 still struggles to answer the question, although it can generate some analysis of the data. GPT-4 achieves an accuracy of 55.56%, while PaLM2 reaches 56.67%. Notably, this result mirrors the trend observed in the depression classification results. For our fine-tuned GPT-3.5 model, although the model is fine-tuned for depression, it still shows some improvement in the anxiety detection task compared to the original GPT-3.5, indicating the potential of our fine-tuning method." }, { "figure_ref": [ "fig_1" ], "heading": "GENERATIVE REASONING", "publication_ref": [ "b17" ], "table_ref": [], "text": "Our classification methods above suggest that language models have certain key advantages such as flexibility in data inputs and potential to generalize; however, the results above also indicate that their accuracy on tasks like depression classification is only 61.1%, which is insufficient for clinical diagnosis. While this is significantly lower than the accuracy reported by AI and ML systems on other tasks, we note that clinical measurement of depression is itself debated. For example, a recent study showed inter-clinician agreement for classifying depression could be as low as 43% [18], highlighting the inherent difficulty of evaluating model accuracy. Additionally, the models can only process the data they are given, and there are numerous other factors that could affect mental health that clinicians gain through in-person interaction over time. We notice, however, that the chain-of-thought prompting required to produce a classification output leads the models to generate an intermediate step of reasoning, as shown in Fig 2, explicitly connecting data to mental health concepts.\nUpon closer inspection, we find that regardless of whether the binary classification is correct, the summarized reasoning of the time series data contains what may be clinically valid and valuable insights. Building on this observation, we propose a new method of using LLMs to process multimodal sensor data: instead of producing quantitative outputs, use qualitative natural language summaries to inform human clinicians. 
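As a minimal sketch of this shift from labels to narrative analysis (assuming an OpenAI-style chat client; the model name, prompt wording, and helper name are illustrative rather than our exact implementation), a reasoning request might look like:

from openai import OpenAI  # assumes the v1-style OpenAI Python SDK

client = OpenAI()

def reasoning_summary(serialized_window: str) -> str:
    """Request qualitative analysis of a serialized sensor window instead of a label."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a data analyst helping a psychiatrist understand human activity data."},
            {"role": "user",
             "content": "Examine the data below and point out specific trends or data points that "
                        "could spark fruitful conversation with a mental health professional.\n\n"
                        + serialized_window},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

The free-text output is then reviewed by a clinician rather than consumed as a prediction.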
In the sections below we investigate the quality, accuracy, and potential for human-AI collaborative use in clinical settings.\nWe begin by investigating the accuracy of the model outputs (RQ 3). Are the models able to identify real and specific trends in the data? Do they hallucinate numbers or are the outputs truly reflective of the input data? To evaluate this, we utilize human graders to check the LLM responses against the input timeseries data according to an objective rubric. We note that for this study of accuracy, we employ layperson graders and will evaluate the clinical reasoning in Section 6. We score a total of 480 responses evenly split across four different models: PaLM 2, GPT-3.5, fine-tuned GPT-3.5, and GPT-4." }, { "figure_ref": [], "heading": "Role:", "publication_ref": [], "table_ref": [], "text": "You are a data analyst helping a psychiatrist understand human activity data. Task: You will be shown data gathered from a smartphone and smart watch worn by an individual. Your " }, { "figure_ref": [ "fig_4" ], "heading": "Producing Reasoning Samples.", "publication_ref": [], "table_ref": [], "text": "To elicit reasoning based on the provided data, we use the prompt as shown in Figure 6. Instead of asking to hypothesize about the health of the patient, we tune this prompt to produce analysis on trends in the data as shown in Figure 7. For input data, we select random samples from the test set with a PHQ-4 score of greater than 5, indicating likely moderate to severe depression. We select these samples as they are most likely to have a trends or anomalies we can evaluate the model on. These excerpts consist of the same 16 features used in section 4 formatted in markdown format. We produce 8 samples from each of the four models per set of input data. Each sample includes visualization of sensor features as well as interpretation from LLMs (see Figure 7)." }, { "figure_ref": [], "heading": "Evaluation.", "publication_ref": [], "table_ref": [], "text": "We recruit 15 individuals over the age of 18 to grade reasoning excerpts. Each participant was sent an online form that contained a set of 32 randomly-ordered responses (8 from each model). To reduce grader burden, all 32 responses were generated from the same input data. Graders were also provided the raw data input to the model in tabular form as well as time-series plots of each of the data features. For each reasoning excerpt, graders were asked to answer the following four questions:\n(1) Does this response include numbers? (2) Are these numbers consistent with the provided data? (3) Does this response identify specific trends? (4) Are these trends consistent with the provided data?\nGraders were provided with instructions on how to evaluate each of these questions, as well as a series of examples of responses. The rubric provided to graders can be seen in Appendix C. We provide explicit instructions to evaluate only the numbers and trends against the provided data table and plots, ignoring any conclusions the responses may make about how these numbers or trends might relate to mental health or other factors. Layperson graders each evaluated a single set of 32 responses and were offered a $20 gift card as compensation." }, { "figure_ref": [], "heading": "Time Spent at Home:", "publication_ref": [], "table_ref": [], "text": "-e time spent at home seems to be decreasing around the middle of the month, notably on 2019-05-14 and 2019-05-15 where it dips to 456 and 167 minutes respectively. 
This could indicate some unusual activity or a significant change in routine." }, { "figure_ref": [], "heading": "Sleep Patterns:", "publication_ref": [], "table_ref": [], "text": "The total time asleep varies across the month. On 2019-05-04, the individual slept for only 75 minutes. On the other hand, on 2019-05-13, they slept for 514 minutes. Erratic sleep patterns can be an indicator of various issues, including stress, anxiety, or other disorders. The total time spent awake while in bed also sees fluctuations, with some nights having prolonged awake times, suggesting possible insomnia or restless sleep.\nGPT-4 Reasoning: Raw Data: " }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "Figure 8 shows the results of our study. Interestingly, we find that while PaLM 2 performed slightly higher on the classification tasks above, GPT-4 performs significantly better across our study rubrics. For example, while PaLM 2 also identifies trends at a high rate, GPT-4 is more likely to identify all trends correctly and include references to the numerical data and achieves scores exceeding 75% accuracy. We further note here that the graders evaluated data as simple yes/no questions, meaning that all numbers and trends had to be correct. Although we observe some errors in the model outputs, even these responses often contain correct trends as well that could be used in a collaborative human-AI approach identified above. For example, if the LLM can observe an outlier or anomaly in the data, this is often easily visible to the user as well for confirmation. " }, { "figure_ref": [], "heading": "Study Team Clinician Screen Share of LLM Interface", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "CLINICIAN INTERVIEW AND INTERACTIVE EVALUATION", "publication_ref": [], "table_ref": [], "text": "The results above suggest that GPT-4 in particular can identify factually correct trends and anomalies in the data. However, we have not yet evaluated if the models' inferences about the subject's mental health are clinically valid and useful. To address this uncertainty and investigate how these insights could be used in practice (RQ4 and RQ5), we conduct a user study with clinician experts." }, { "figure_ref": [ "fig_6" ], "heading": "Methods", "publication_ref": [ "b0", "b60" ], "table_ref": [], "text": "6.1.1 Participants. Eight mental health professionals participated in our study: six with PhDs in Clinical Psychology and two with master's degrees. Participants described their approaches as Cognitive-Behavioral Therapy (2), Acceptance and Commitment Therapy (1), Dialectical Behavioral Therapy (1), Psychodynamic (1), Relational/Interpersonal (2), and Family Systems (1). Participants reported working in a variety of settings including academic medicine, group private practice, individual private practice, and community mental health, across four states in the U.S. We recruited by word of mouth and postings on group practice mailing lists and social media groups.\n6.1.2 Interview. Interviews began with a discussion of the clinician's practice, particularly as it includes patient self-monitoring. We asked participants to imagine a scenario in which they had received a month of self-tracking data in advance of their first meeting with a patient. We then presented examples of self-tracking data from the same publicly available data set used above [61]. 
Data was first presented as a set of 16 timeseries plots (in a Google doc shared with the participant) and clinicians were asked to provide feedback on anything they noticed in the data that they might use in a therapeutic context.\nNext, we began a live interactive session with GPT-4 (with ChatGPT) through Zoom screen sharing as shown in Figure 9. The mobile health data was then input into GPT-4 to produce a text response in real-time. We asked participants to talk about any reactions they had, including ways the data and GPT-4's observations about the data might shape their thinking about the patient. Participants were asked to type or say aloud follow-up queries that they had for GPT-4 (which were entered by the interviewer running the GPT-4 session). We also discussed their reaction to the responses that GPT-4 gave to these queries. We then asked participants to make up a hypothetical example in which they used this tool with a therapy patient, prompting them for what types of data or inputs should be available, what kinds of analyses they would like to see, and how they would envision using this tool with the patient. Finally, we asked general questions about how this tool might affect relationships with patients and the treatment. The model's statements responses were clear and easy to understand. It observed patterns that are relevant to mental health. I trusted its observations (e.g. about differences in sleep and activity from day to day).\nIt made reasonable interpretations and conclusions from those patterns. Those interpretations are consistent with clinical understanding of mental health conditions.\nMy patients self-track and self-monitor at my encouragement. My patients self-track due to their own interest. Currently, I rely on patient recall or quick glances at self-tracking data to form impressions of it.\nThis type of tool could help me make sense of self-tracking or self-monitoring data. This tool could enhance treatment. This tool could worsen treatment. I would like to have access to a tool like this at my practice. 7) with statements about their experience interacting with GPT-4. We observe positive feedback and enthusiasm across most questions." }, { "figure_ref": [ "fig_8" ], "heading": "Data Format.", "publication_ref": [ "b62" ], "table_ref": [], "text": "For study data, we selected a random 28-day sample from a subject in the GLOBEM dataset [63]. Only individuals with a PHQ-4 > 5 were selected similar to the study above to obtain a sample representative of individuals who might be likely to seek therapy. Full-page time-series plots that included each feature were generated and presented to the clinician in an online document. Data input into the language model included the same 16 features and format used for the generative reasoning evaluation and listed in Appendix A. The chat thread was started with the following prompt, similar to the prompt used in the previous section, but altered slightly for compatibility with the web-based chat interface of GPT-4: Below is some data gathered from a fitness tracking smartwatch and a smartphone. Although it does not contain explicit information on mood, trends in physiological signals have been shown to correlate with mental health symptoms. Examine this data and point out any specific trends or data points that could spark fruitful conversation with a mental health professional. <formatted-data>.\nInterviews were one hour in length and conducted over Zoom to enable recording and transcript generation. 
Interviews were conducted by two researchers, a clinical psychologist, and a computer science graduate student.\n6.1.4 Survey. After the interview, participants completed a online short survey about their reactions to GPT-4 where they indicated their agreement or disagreement to several statements. The statements as well as the distribution of responses selected by clinicians are shown in Figure 10." }, { "figure_ref": [], "heading": "General.", "publication_ref": [], "table_ref": [], "text": "The study was deemed exempt by the university IRB. Participants were sent a study information sheet before the study which described the interview and data management. To thank participants for their time, $50 gift cards were sent after completion of the study." }, { "figure_ref": [], "heading": "Results: Clinician Reactions to LLM Reasoning", "publication_ref": [], "table_ref": [], "text": "Therapists generally found GPT-4's analyses plausible, although at times too far a stretch or too limited in the range of issues that were considered. One clinician mentioned that GPT-4 commented on some of the same outliers she had noticed in the plots. She and others commented that the text was easier for them to take in than the plots, \"I think I picked up on some of the same things that it's reporting, but I just don't know how to interpret them. . . . It definitely seems to be pointing to things of clinical significance\" (P5). Another clinician commented \"[low activity as] indicative of inconsistent energy levels -that feels like a reasonable conclusion\" (P8) about the straight forward connection that GPT-4 drew between activity levels and depressive symptoms.\nGPT-4's statements about mental health concerns were limited by a range of issues. We elaborate on limitations in detail in Section 6.3, but two that merit mention at the outset are errors and limited concept range. When one clinician asked a follow-up question about which the model did not have data, he noted that the model appeared to fabricate a response \"So, is this real? Like this is totally based on the data that was provided?\" (P1). He continued \"Ideally what it would tell you would be that that's not in the data. And the fact that it just made up an answer is a little concerning\" (P1).\nAnother clinician pointed out that GPT-4 tended to explain data in terms of either depression or anxiety, rather than considering a broad range of factors and disorders. As mentioned further below, the limited variable set used for this study also limited the analyses from the model.\nClinicians most appreciated the analysis of sleep, which they considered relevant and valuable as they considered a variety of mental health conditions. Further, the text observations (for example about a night on which an individual got very little sleep) generally made more of an impression than the data plots. Almost all participants wanted more computations about sleep, particularly about routines (such as whether an individual had predictable times of going to sleep and waking).\nThe model's reasoning was more frequently critiqued when commenting on variables like phone use and location that could be explained by many factors other than mental health concerns. Clinicians described such interpretations, e.g. of not leaving home as an indicator of depression, as \"leaps.' They were concerned about pathologizing behaviors that could be explained by situations such as working from home. 
Responses that offered multiple explanations on how identified trends could affect health were appreciated more than those that expressed a firm conclusion. Additionally, clinicians felt more confident in reasoning that combined several variables, even if each was only ambiguously linked to mental health, than statements based on a single variable.\nClinicians also objected to the one-size-fits-all nature of the model's reasoning. One clinician who treats individuals with eating disorders pointed out that something like walking, while culturally valued, is not always positively linked to mental health. She wants her clients to develop a broad repertoire of coping strategies that they can flexibly draw upon, rather than relying on one, such as exercising. She also attends to how her patients often under-report exercise as well as negative emotions such as anger, and how both can function as avoidance. Clinicians bring an understanding of flexible coping and reporting biases to their interpretations of self-tracking data. While high activity and low anger ratings might often be interpreted as positive by a language model, these data could prompt a clinician to consider questions about emotional avoidance. These observations suggest that clinicians may use these tools in different ways." }, { "figure_ref": [ "fig_8" ], "heading": "Clinical Use Scenarios", "publication_ref": [], "table_ref": [], "text": "In interviews, most therapists said that the tool could be useful in their practice if it was HIPAA compliant and there was a way for patients to opt in for the purpose of optimizing their treatment. They emphasized that they would use it selectively, both in terms of clients who might benefit and drawing on the observations that were relevant. One clinician described interest in having access to this type of tool in her practice, presuming it included mood data and qualifications about what kinds of changes were significant, \"Overall, if I was working with the client, and I was able to input this data, I would love this. I think the session would go so much more smoothly than me, just looking at the data in the graph format\" (P6). In a survey following interview, all participants indicated moderate to strong interest in having this kind of tool in their treatment and moderate to strong agreement with the idea that it could enhance treatment as shown in Fig 10.\nTherapists offered ideas on additional model inputs or variables that would be needed for it to be relevant. These included at a minimum, mood and thought records, contextual anchors such as day of the week, and more nuanced social metrics to indicate who a patient was interacting with and describe the emotional tenor of those interactions. Some therapists imagined inputting data themselves to increase the model's relevance. Although very controversial due to privacy concerns, some imagined inputting session notes or transcripts. Several of the therapists were already using AI tools for note-taking and basic analysis of dialogue within a given session (such as turn-taking and sentiment) and were interested in analysis across sessions." }, { "figure_ref": [], "heading": "Collaborative In-Session Investigation.", "publication_ref": [ "b9", "b32" ], "table_ref": [], "text": "A unique aspect of these generative language models is their ability to enable interactive investigations of self-tracking data compared to a black box model that outputs a classification result. 
Clinicians generally saw the most value in this tool to aid collaboration with patients by using it as an interactive data explorer during a therapy session. For example, one clinician imagined querying the model to identify triggers for panic attacks and other anxiety symptoms. Another clinician outlined the high-level steps in which she would consider using the model, \"First set the goal... What's bringing them in then agree on some metrics that are relevant\" (P4).\nOthers imagined using it to find evidence of change early in treatment (for example, indications of more energy or better sleep) as a patient struggled with whether to continue a particular medication or therapy. One therapist gave an example of how she might, through discussion with a patient, tie a concern such as relationship anxiety to the average duration of phone use periods, and then use that metric to assess whether therapy was helping with the patient's anxiety. Several imagined using it not only for retrospective analysis but also to forecast improvement. Another envisioned use was asking the model questions during sessions to boost creativity, for example, to challenge a patient's worries or other negative thoughts.\nAcross these and other forms of collaborative use, some clinicians wanted their patients to be able to query the model. In addition to identifying patterns, joint use was envisioned as a way to build a more general feeling of collaboration in the therapy, something that is important for the therapeutic alliance and positive outcomes [10,33]." }, { "figure_ref": [], "heading": "Identifying", "publication_ref": [], "table_ref": [], "text": "Issues to Explore with the Patient. Clinicians appreciated the ability of the model to list out concerns with reference to specific dates (e.g., days with little sleep or little movement). One clinician added that it would be useful if GPT-4 prioritized the questions. Clinicians imagined that they would share their observations with patients, e.g., about particular days with anomalies such as decreased sleep, as a way of opening up discussion and jogging the patient's memory.\nSeveral wanted the analyses to consider longer stretches of time (e.g. several months), different computations, and comparisons to population norms to help them think about patterns of symptoms. We note that the duration used in this study was a limitation of our input dataset, and that longer outputs could be accommodated by the LLM context window by breaking the data into chunks for iterative analysis or reprocessing to extract low dimensional features representing longer term trends\nIn outlining potential concerns, clinicians wanted the model to raise questions and supply clear data. As one clinician said, the model \"could cause less harm if it provided questions for a therapist to ask instead of conclusions for a therapist to rely on\" (P8). They did not want the model to apply diagnostic labels to individuals or their behaviors." }, { "figure_ref": [], "heading": "Generating Documentation.", "publication_ref": [], "table_ref": [], "text": "Clinicians varied in their thoughts on whether models would meaningfully aid in documentation. One participant appreciated the neutral boilerplate language used by the model and easily imagined it as the basis for an intake summary. 
Others bristled at this idea, pointing out that it might miss major insights or, by removing the analysis that comes from writing notes, be a shortcut that ends up diluting or possibly derailing therapy.\nThese responses highlight the diverse approaches and preferences of individual clinicians. This diversity might be served by an LLM's flexible accommodation of different inputs and types of querying or prompting. It will clearly be important for clinicians to define how they use these tools in their practice and to tailor this use for each patient." }, { "figure_ref": [], "heading": "Concerns", "publication_ref": [], "table_ref": [], "text": "6.4.1 Privacy and Relevance. All the clinicians in the study sought data that was more germane to mental health. Pouring over logs of sleep and activity data in the examples, one lamented \"All of this data, but not linked to any of the things we care about like mood\" (P1). However, the idea of inputting a patient's mood and thought records into a model brought up concerns about privacy and agency. In general, inputs that increased relevance also carried greater risk. Therapists had mixed reactions to more sensitive inputs such as therapy session transcripts but all recognised privacy concerns.\nEven if a model were private and HIPAA compliant, it could be hard to obtain meaningful consent for use, one clinician worried; explaining how the system worked could be difficult. Others expressed comfort with the idea of using a tool that was HIPAA compliant, presuming a patient opted in. At a more general level, clinicians were concerned that patients might feel monitored or policed, threatening their sense of agency and control over what is shared in therapy. This was especially a concern for patients who might feel scrutinized by others and those whose symptoms are associated with feelings of shame. As discussed below, these sensitivities highlight the need for a collaborative human-AI approach that leverages a clinician's context about a patient to decide when and how to use these tools, as well as a collaborative clinician-patient approach that relies on in-session dialogue to interpret and apply the observations from a model. 6.4.2 Limited Data. In addition to the lack of data on individuals' mood, thoughts, symptoms, and population norms for comparison, one of the most glaring omissions in the example data set pertained to social determinants of health and family systems. One clinician imagined a situation where this limited data could be harmful. She gave the example of a distressed adolescent whose parents were drinking heavily and financially stressed. The model might attribute the adolescent's distress to use of a particular app such as TikTok simply because it had data on social media use and not the broader context. The broader family and socio-cultural context shapes how therapists understand an individual's mental health struggles and resources. These factors, like other inputs that would increase relevance, are sensitive: a model could end up stereotyping or generating simplistic analyses that perpetuate biases in mental health care. 6.4.3 Quality of Care. Another concern pertained to relying on the model as a shortcut. Clinicians imagined the problems that would result from overconfidence in the model's analysis. One clinician describes the possibility of missing the insights that would come from pouring over data herself. 
\"If I was like doing this very quickly, like, before I see the client I could totally see myself or anyone like just relying on on GPT-4. And just think . . . 'This is the answer. This is the knowledge,' ... as like mental shortcuts as opposed to pouring over the data yourself. . . . And I would wanna ask, 'Am I missing anything?' (P6). Another participant worried that not taking the time to write one's own notes and process each session could degrade the clinician's memory of the session and ultimately shortchange the therapy. 6.4.4 Eliciting a Chain of Reasoning. While clinicians generally agreed that a classification result alone would not be helpful to them, they found value in iterative querying of the model. Their queries ranged from explanations of variables to requests for additional computations, for example, to illuminate the specific nature of someone's sleep-wake cycles. Several also asked the model to find days on which there were spikes on several variables. In the course of such prompting, they appreciated the ability of the model to comb through large amounts of data, describe to them verbally, and give a rationale for its relevance to mental health. Some clinicians may require training for chain of thought prompting: eliciting clear reasoning outputs from the model on the connection between the raw signal, behaviors, and mental health relevance." }, { "figure_ref": [], "heading": "DISCUSSION", "publication_ref": [], "table_ref": [], "text": "Our interviews and interactive exercises with clinicians highlight challenges for models that are designed to be used in the context of psychotherapy and for future research. The first challenge relates to hosting models in a way that both protects privacy and supports collaborative use. The second challenge relates to designing the tool in a way that supports rather than simulates therapy, specifically maintaining interpretation by clinicians in dialogue with patients." }, { "figure_ref": [], "heading": "Data Privacy and Collaborative Use", "publication_ref": [ "b9", "b32" ], "table_ref": [], "text": "To inform mental health treatment and illuminate the factors associated with a particular individual's struggles, models need to have data that are directly relevant to mental health, and ideally personalized to individual patients. The clinicians we interviewed expected the model to have, at a minimum, the daily mood, thought, and symptom tracking that they currently request of patients and ideally more in-depth data related to mood, social interactions, and behavioral routines. In addition to this data that would be gathered from a patient's logs or devices, are the materials a clinician may want to input such as session notes or transcripts, manuals, or other documents describing relevant mental health issues and treatments. Much of this data is sensitive and not appropriate as an input for language model services such as ChatGPT, which may use this data to improve their models or for other commercial purposes. Privately hosted models or robust user data protections are required for this purpose.\nThe solution to the privacy challenges for use in therapy is not as straightforward as HIPAA-compliant medical record systems. Such record systems primarily serve providers and other employees at medical institutions, limiting patients to read-only access to elements such as test results. In this study, we heard from clinicians that patients as well as clinicians should be able to interact with the model. 
This joint interaction, clinicians anticipated, could build patients' curiosity about factors associated with their mental health and foster the collaborative alliance between patients and therapists that is associated with positive outcomes [10,33]. Such models therefore require a very different approach than that of hospital records; in this case, the patient should own their data, but both parties (the patient and therapist) should be able to generate data and actively interact with it. Hosting such services may be challenging for providers in smaller practices.\nThis need for a model that securely stores private data and allows use by both patients and therapists pushes us to consider alternative secure data management strategies. We look to Apple and Google as alternative references for how an on-device model could be primarily owned by an individual and shared with a clinician. These approaches offer the converse of the medical record, with the clinician permitted read-only access. Although these data-sharing practices are informative, they are not entirely suitable for enhancing psychotherapy with language models. Meaningfully enhancing therapy and the therapeutic relationship requires a two-way collaboration where both the therapist and patient generate data and both can query the model. This raises questions about data ownership, particularly if the model itself is hosted by a third party." }, { "figure_ref": [], "heading": "Protecting Against Simulated Therapy", "publication_ref": [], "table_ref": [], "text": "Clinicians emphasized that they take many factors into consideration as they consider the meaning of any new data about a patient. Patterns and anomalies identified by a model would be considered with an understanding of the patient along with treatment approach and goals. As reviewed above, clinicians explained how the meaning of metrics (e.g., those relating to physical exercise and locations visited in a day) might reflect very different dynamics for different people. Some worried that a model would offer tempting shortcuts -one-size-fits-all interpretations, generic suggestions, and automated documentation -that could amount to an unhelpful and possibly dangerous simulation of therapy." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Although we show the numerical correctness of GPT-4 reasoning responses reaches 75%, this level of accuracy may be insufficient, especially for users unfamiliar with the types of errors commonly made by language models. While we find clinicians were frequently able to identify errors made by GPT-4 when they arose during our interviews, in an actual therapy session the clinician may not be able to balance the demands of evaluating model output and engaging with the patient. Further evaluation in a more realistic therapeutic environment is needed to identify the ways in which clinicians respond to this additional burden. Our evaluation was confined to a subset of mobile and behavioral health data; thus, results may vary when applied to a broader range of data elements. Clinicians envisioned inputting both a wider range of additional features and a longer time period of data into LLMs for analysis. However, the maximum context window of these models is limited. Pre-computing or summarizing portions of data could help address this technical challenge, but requiring this step could significantly reduce the ability of a tool to generalize across potential input data sources."
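One way to realize the pre-computation idea above is to collapse the daily sensor stream into coarser per-window summaries before it is serialized into a prompt. The sketch below is an illustration of that idea only, not part of the study's pipeline; it assumes a pandas DataFrame with a 'date' column plus GLOBEM-style numeric columns, and the function name and aggregation choices are ours.

```python
import pandas as pd

def summarize_for_prompt(df: pd.DataFrame, freq: str = "7D") -> str:
    """Collapse daily sensor rows into per-window summaries to shrink prompt size.

    Assumes `df` has a 'date' column plus numeric sensor columns such as
    'step_count' and 'total_time_asleep(minutes)' (column names are illustrative).
    """
    df = df.copy()
    df["date"] = pd.to_datetime(df["date"])
    numeric = df.select_dtypes("number").columns
    summary = (
        df.set_index("date")[numeric]
        .resample(freq)
        .agg(["mean", "min", "max"])   # keep coarse trends, drop the raw rows
        .round(1)
    )
    lines = []
    for window_start, row in summary.iterrows():
        parts = [
            f"{col}: mean={row[(col, 'mean')]}, min={row[(col, 'min')]}, max={row[(col, 'max')]}"
            for col in numeric
        ]
        lines.append(f"week of {window_start.date()}: " + "; ".join(parts))
    return "\n".join(lines)
```

How much detail to retain (variance, weekday effects, flagged anomalies) would have to be traded off against the model's context limit and the risk of discarding clinically relevant signal.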
}, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [ "b16" ], "table_ref": [], "text": "Classification of mental health issues is vexing for both clinician diagnosticians and models, with reports of substantial disagreement even between clinicians [17]. In this research we examine the value of models beyond classification to reasoning, exploring the potential of language models to enhance psychotherapy. We find that the value to clinicians lies not in classification or diagnostic labeling, but rather in rigorous analysis of diverse self-tracking data to generate summaries and identify potential concerns. Clinicians envisioned using those insights in a variety of ways, principally for fostering collaborative investigation with patients. This collaboration was seen as potentially valuable for strengthening the therapeutic alliance. Conversely, clinicians expressed concern that over-reliance on a model could degrade therapy and harm patients. These findings highlight directions for impactful future research on human-AI collaborative tools for psychotherapy. " }, { "figure_ref": [], "heading": "A DATA ELEMENTS", "publication_ref": [], "table_ref": [], "text": "GLOBEM" }, { "figure_ref": [], "heading": "B INPUT FORMAT", "publication_ref": [ "b12" ], "table_ref": [], "text": "See Figure 11.\nCSV: date,total_distance_traveled(meters),time_at_home(minutes),location_entropy,... 2019-04-29,49037.0,666.0,0.85,298.0,3.0,,,29.0,11430.0,40.0,1290.0,39.0,150.0,306.0,11.0 2019-04-30,69171.0,555.0,0.87,274.0,4.0,16.0,, 13 " }, { "figure_ref": [], "heading": "C REASONING GRADER INSTRUCTIONS", "publication_ref": [], "table_ref": [], "text": "Thank you for taking the time to contribute to this study.\nTo start, please open this document that contains a table of data as well as plots of the data. Link to document: [LINK HERE] You will now be asked to grade a series of 32 different statements analyzing this data. Your goal is to check the accuracy of these statements to ensure that references to the data are correct. Here is an explanation of the grading rubric. Please read this rubric carefully:\n1. Does this response include numbers? (yes/no) Yes -at least some part of the response lists or quotes specific numerical data or dates, regardless of correctness No -the response does not include any specific numbers Note -numbered lists don't count as numbers\n2. Are these numbers consistent with the provided data? (yes/no) Yes -all of the mentioned numbers or dates are included in the provided data No -some or all of the numbers or dates are not consistent with the provided data, or there are no numbers (1 was answered \"No\")\nFor example:\n• The text statement says the highest sleep time occurred on May 9, but based on the graph you can see it is actually on June 2 • The text statement lists the lowest distance travelled as 127 meters, but the lowest distance traveled listed in the table is 1270 meters.\n3. Does this response identify specific trends? (yes/no) Yes -the response makes statements relating to concepts like minimum, maximum, averages, variability, upward or downward trends, etc. as they pertain to the data No -There is no statement of specific trends that relate to the included data. For example:\n• \"An increase in sleep might indicate a disturbance\" or \"the individual makes phone calls\" would not be a specific trend relating to the provided data • \"The time spent asleep increased in the second half of the month\" would be a specific trend relating to the data\n4. Are these trends consistent with the provided data? (yes/no) Yes -the listed trends are plausibly consistent with the provided data table and/or plots No -some or all of the listed trends are contradicted by the provided data and/or plots or there are no specific trends (3 was answered \"No\")\nIt is important to note that you should not evaluate further trends or reasoning as they may relate to, for example, mental health. For the purposes of grading these responses, it is only necessary to confirm if the response does or does not accurately describe the provided data.\nWe anticipate it will take 1.5-2 minutes to grade each statement. " } ]
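To make the input format in Appendix B concrete, the sketch below shows one way daily CSV rows could be converted into the pipe-delimited [Collected Data] table used in the prompt examples. This is a hypothetical reconstruction rather than the authors' code; the column list is abridged and the helper name is invented.

```python
import csv

# Abridged column list; the full prompts include many more sensor features.
COLUMNS = [
    "date",
    "total_distance_traveled(meters)",
    "time_at_home(minutes)",
    "location_entropy",
    "step_count",
    "total_time_asleep(minutes)",
]

def rows_to_prompt_table(csv_path: str) -> str:
    """Read daily sensor rows from a CSV and emit a pipe-delimited table block."""
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        lines = ["|".join(COLUMNS) + "|"]
        for row in reader:
            # Keep missing values visible as 'nan' so the model sees explicit gaps.
            values = [(row.get(col) or "nan") for col in COLUMNS]
            lines.append("|".join(values) + "|")
    return "\n".join(lines)
```

The resulting block would then be placed under the [Collected Data] header together with the [Instructions] text shown in the prompt figures.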
Passively collected behavioral health data from ubiquitous sensors holds significant promise to provide mental health professionals insights from patients' daily lives; however, developing analysis tools to use this data in clinical practice requires addressing challenges of generalization across devices and weak or ambiguous correlations between the measured signals and an individual's mental health. To address these challenges, we take a novel approach that leverages large language models (LLMs) to synthesize clinically useful insights from multi-sensor data. We develop chain of thought prompting methods that use LLMs to generate reasoning about how trends in data such as step count and sleep relate to conditions like depression and anxiety. We first demonstrate binary depression classification with LLMs achieving an accuracy of 61.1%, which exceeds the state of the art. While this is not robust enough for clinical use, it leads us to our key finding: even more impactful and valued than classification is a new human-AI collaboration approach in which clinician experts interactively query these tools and combine their domain expertise and context about the patient with AI-generated reasoning to support clinical decision-making. We find models like GPT-4 correctly reference numerical data 75% of the time, and clinician participants express strong interest in using this approach to interpret self-tracking data.
From Classification to Clinical Insights: Towards Analyzing and Reasoning About Mobile and Behavioral Health Data With Large Language Models
[ { "figure_caption": "Fig. 1 .1Fig. 1. Depression classification performance across a range of possible data input formats.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Actual example of our prompt (Top) and LLM output (Bottom) for classification experiments. Top: Completed prompts with method of CoT + Exp. + DSM. Bottom: The actual output generated by GPT-4", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .Fig. 4 .34Fig. 3. Comparison of classification results of GPT-3.5, GPT-4, and PaLM 2 across four different prompting strategies alongside results from Reorder and Random Forest models trained on the same dataset. The performance of the Reorder model trained on more features in Xu et al. [61] is included for comparison. Observe how the percent of positive and negative classifications varies significantly based on the prompting strategy used.", "figure_data": "", "figure_id": "fig_2", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Performance of GPT-3.5, fine-tuned GPT-3.5, GPT-4 and PaLM 2 in anxiety detection.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. Prompt structure used to generate data reasoning for reasoning accuracy evaluation", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .Fig. 8 .78Fig. 7. A plotted excerpt of raw data and the resulting analysis generated by GPT-4.", "figure_data": "", "figure_id": "fig_5", "figure_label": "78", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. Setup for interactive clinician evaluation conducted via Zoom video chat between the study team and clinician participants. The study team interviewers allowed clinicians to interact with live GPT-4 sessions via screen sharing.", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 10 .10Fig.10. Post-study survey responses from clinicians indicating strong disagreement (1) to strong agreement(7) with statements about their experience interacting with GPT-4. We observe positive feedback and enthusiasm across most questions.", "figure_data": "", "figure_id": "fig_8", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Received 2020February 2007; revised 12 March 2009; accepted 5 June 2009", "figure_data": "", "figure_id": "fig_9", "figure_label": "20", "figure_type": "figure" }, { "figure_caption": "goal is to analyze this data. You are presented with the following: 1. A table consisting of twenty-eight days of collected activity tracking data [Collected Data] 2. Instructions on how to analyze the data [Instructions]InstructionsAlthough the data does not contain explicit information on mood, trends in physiological signals have been shown to correlate with mental health symptoms. 
Examine this data and point out any specific trends or data points that could spark fruitful conversation with a mental health professional.", "figure_data": "Collected Datadate|total_distance_traveled(meters)|time_at_home(minutes)|location_entropy|phone_screen_time(minutes)|average_phone_use_unlock_duration(minutes)|phone_call_incoming_duration(minutes)|phone_call_outgoing_duration(minutes)|unique_bluetooth_devices_found_nearby|step_count|number_of_sedentary_episodes|total_time_spent_sedentary(minutes)|number_of_activity_episodes|total_time_spent_active(minutes)|total_time_asleep(minutes)|total_time_spent_awake_while_in_bed(minutes)|2019-05-06|11996|1012|0|328|4|nan|nan|14|11417|69|1242|69|198|403|68|2019-05-07|10161|823|0|228|3|nan|nan|8|8172|42|1286|41|154|502|45|…2019-06-01|36861|993|0|443|7|nan|96|3|6054|38|1323|38|117|552|59|2019-06-02|16530|1043|0|384|7|44|315|9|7022|39|1317|39|123|475|28|", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Data Fields and Descriptions", "figure_data": "Data Feature", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" } ]
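For context on the Random Forest comparison referenced in the Figure 3 caption, a baseline of that kind can be assembled in a few lines of scikit-learn. The snippet below is a generic sketch on synthetic stand-in data, not the feature set, splits, hyperparameters, or evaluation metric used in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for sensor-derived features and binary depression labels.
X, y = make_classification(n_samples=500, n_features=16, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("balanced accuracy:", balanced_accuracy_score(y_test, clf.predict(X_test)))
```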
Zachary Englhardt; Margaret E Morris; Daniel Mcduff; Shwetak Patel
[ { "authors": "", "journal": "American Psychiatric Association", "ref_id": "b0", "title": "Diagnostic and statistical manual of mental disorders : DSM-5", "year": "2013" }, { "authors": "M Mostafa; Erik Amin; Björn W Cambria; Schuller", "journal": "", "ref_id": "b1", "title": "Will Affective Computing Emerge from Foundation Models and General AI? A First Evaluation on ChatGPT", "year": "2023" }, { "authors": "Sangwon Bae; Denzil Ferreira; Brian Suffoletto; Juan C Puyana; Ryan Kurtz; Tammy Chung; Anind K Dey", "journal": "Proc. ACM Interact. Mob. Wearable Ubiquitous Technol", "ref_id": "b2", "title": "Detecting drinking episodes in young adults using smartphone-based sensors", "year": "2017-06" }, { "authors": "Dror Ben-Zeev; Emily A Scherer; Rui Wang; Haiyi Xie; Andrew T Campbell", "journal": "Psychiatric Rehabilitation Journal", "ref_id": "b3", "title": "Next-generation psychiatric assessment: Using smartphone sensors to monitor behavior and mental health", "year": "2015" }, { "authors": "Thorsten Brants; C Ashok; Peng Popat; Franz J Xu; Jeffrey Och; Dean", "journal": "", "ref_id": "b4", "title": "Large language models in machine translation", "year": "2007" }, { "authors": "Joseph Breda; Mastafa Springston; Alex Mariakakis; Shwetak Patel", "journal": "Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies", "ref_id": "b5", "title": "FeverPhone: Accessible Core-Body Temperature Sensing for Fever Monitoring Using Commodity Smartphones", "year": "2023" }, { "authors": "Leo Breiman", "journal": "Machine learning", "ref_id": "b6", "title": "Random forests", "year": "2001" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b7", "title": "Language Models are Few-Shot Learners", "year": "2020" }, { "authors": "Sébastien Bubeck; Varun Chandrasekaran; Ronen Eldan; Johannes Gehrke; Eric Horvitz; Ece Kamar; Peter Lee; Yin Tat Lee; Yuanzhi Li; Scott Lundberg; Harsha Nori; Hamid Palangi; Marco Tulio Ribeiro; Yi Zhang", "journal": "", "ref_id": "b8", "title": "Sparks of Artificial General Intelligence: Early experiments with GPT-4", "year": "2023" }, { "authors": "Kate Sarah; Jacqui Cameron; Dave Rodgers; Dagnan", "journal": "Clinical psychology & psychotherapy", "ref_id": "b9", "title": "The relationship between the therapeutic alliance and clinical outcomes in cognitive behaviour therapy for adults with depression: A meta-analytic review", "year": "2018" }, { "authors": "Justin Chan; Sharat Raju; Rajalakshmi Nandakumar; Randall Bly; Shyamnath Gollakota", "journal": "Science translational medicine", "ref_id": "b10", "title": "Detecting middle ear fluid using smartphones", "year": "2019" }, { "authors": "Prerna Chikersal; Afsaneh Doryab; Michael Tumminia; Daniella K Villalba; Janine M Dutcher; Xinwen Liu; Sheldon Cohen; Kasey G Creswell; Jennifer Mankoff; J David Creswell; Mayank Goel; Anind K Dey", "journal": "ACM Transactions on Computer-Human Interaction", "ref_id": "b11", "title": "Detecting Depression and Predicting its Onset Using Longitudinal Symptoms Captured by Passive Sensing", "year": "2021-01" }, { 
"authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b12", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann; Parker Schuh; Kensen Shi; Sasha Tsvyashchenko; Joshua Maynez; Abhishek Rao; Parker Barnes; Yi Tay; Noam Shazeer; Emily Vinodkumar Prabhakaran; Nan Reif; Ben Du; Reiner Hutchinson; James Pope; Jacob Bradbury; Michael Austin; Guy Isard; Pengcheng Gur-Ari; Toju Yin; Anselm Duke; Sanjay Levskaya; Sunipa Ghemawat; Henryk Dev; Xavier Michalewski; Vedant Garcia; Kevin Misra; Liam Robinson; Denny Fedus; Daphne Zhou; David Ippolito; Hyeontaek Luan; Barret Lim; Alexander Zoph; Ryan Spiridonov; David Sepassi; Shivani Dohan; Mark Agrawal; Andrew M Omernick; Thanumalayan Dai; Marie Sankaranarayana Pillai; Aitor Pellat; Erica Lewkowycz; Rewon Moreira; Oleksandr Child; Katherine Polozov; Zongwei Lee; Xuezhi Zhou; Brennan Wang; Mark Saeta; Orhan Diaz; Michele Firat; Jason Catasta; Kathy Wei; Douglas Meier-Hellstern; Jeff Eck; Slav Dean; Noah Petrov; Fiedel", "journal": "", "ref_id": "b13", "title": "PaLM: Scaling Language Modeling with Pathways", "year": "2022" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Yunxuan Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma; Albert Webson; Shane Shixiang; Zhuyun Gu; Mirac Dai; Xinyun Suzgun; Aakanksha Chen; Alex Chowdhery; Marie Castro-Ros; Kevin Pellat; Dasha Robinson; Sharan Valter; Gaurav Narang; Adams Mishra; Vincent Yu; Yanping Zhao; Andrew Huang; Hongkun Dai; Slav Yu; Ed H Petrov; Jeff Chi; Jacob Dean; Adam Devlin; Denny Roberts; Quoc V Zhou; Jason Le; Wei", "journal": "", "ref_id": "b14", "title": "Scaling Instruction-Finetuned Language Models", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b15", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019-05" }, { "authors": "I Eiko; Jessica K Fried; Donald J Flake; Robinaugh", "journal": "Nature Reviews Psychology", "ref_id": "b16", "title": "Revisiting the theoretical and methodological foundations of depression measurement", "year": "2022-04" }, { "authors": "Jessica K Eiko I Fried; Donald J Flake; Robinaugh", "journal": "Nature Reviews Psychology", "ref_id": "b17", "title": "Revisiting the theoretical and methodological foundations of depression measurement", "year": "2022" }, { "authors": "Caglar Gulcehre; Orhan Firat; Kelvin Xu; Kyunghyun Cho; Yoshua Bengio", "journal": "Computer Speech & Language", "ref_id": "b18", "title": "On integrating a language model into neural machine translation", "year": "2017" }, { "authors": "Marti A Hearst; Susan T Dumais; Edgar Osuna; John Platt; Bernhard Scholkopf", "journal": "IEEE Intelligent Systems and their applications", "ref_id": "b19", "title": "Support vector machines", "year": "1998" }, { "authors": "Aodhán Hickey", "journal": "Elsevier", "ref_id": "b20", "title": "The rise of wearables: From innovation to implementation", "year": "2021" }, { "authors": "Jeremy F Huckins; Alex W Dasilva; Elin L Hedlund; Eilis I Murphy; Courtney Rogers; Weichen Wang; Mikio Obuchi; Paul E Holtzheimer; Dylan D Wagner; Andrew T Campbell", "journal": "JMIR Mental 
Health", "ref_id": "b21", "title": "Causal Factors of Anxiety and Depression in College Students: Longitudinal Ecological Momentary Assessment and Causal Analysis Using Peter and Clark Momentary Conditional Independence", "year": "2020-06" }, { "authors": "Jeremy F Huckins; Alex W Dasilva; Weichen Wang; Elin Hedlund; Courtney Rogers; K Subigya; Jialing Nepal; Mikio Wu; Eilis I Obuchi; Meghan L Murphy; Dylan D Meyer; Paul E Wagner; Andrew T Holtzheimer; Campbell", "journal": "Journal of Medical Internet Research", "ref_id": "b22", "title": "Mental Health and Behavior of College Students During the Early Phases of the COVID-19 Pandemic: Longitudinal Smartphone and Ecological Momentary Assessment Study", "year": "2020-06" }, { "authors": " Indrakumari; P Poongodi; Suresh; Balamurugan", "journal": "Elsevier", "ref_id": "b23", "title": "The growing role of Internet of Things in healthcare wearables", "year": "2020" }, { "authors": "Nicholas C Jacobson; Yeon Joo; Chung ", "journal": "Sensors", "ref_id": "b24", "title": "Passive Sensing of Prediction of Moment-To-Moment Depressed Mood among Undergraduates with Clinical Levels of Depression Sample Using Smartphones", "year": "2020-06" }, { "authors": "Yao Lavender; Xujin Jiang; Chris Liu; Nima Pour Nejatian; Mustafa Nasir-Moin; Duo Wang; Anas Abidin; Kevin Eaton; Antony Howard; Ilya Riina; Paawan Laufer; Madeline Punjabi; Nora C Miceli; Cordelia Kim; Zane Orillac; Christopher Schnurman; Hannah Livia; David Weiss; Sean Kurland; Yosef Neifert; Douglas Dastagirzada; Kondziolka; T M Alexander; Grace Cheung; Ming Yang; Mona Cao; Anthony B Flores; Yindalon Costa; Kyunghyun Aphinyanaphongs; Eric Cho; Karl Oermann", "journal": "Nature", "ref_id": "b25", "title": "Health system-scale language models are all-purpose prediction engines", "year": "2023-06" }, { "authors": "Jan Kocoń; Igor Cichecki; Oliwier Kaszyca; Mateusz Kochanek; Dominika Szydło; Joanna Baran; Julita Bielaniewicz; Marcin Gruza; Arkadiusz Janz; Kamil Kanclerz", "journal": "Information Fusion", "ref_id": "b26", "title": "ChatGPT: Jack of all trades, master of none", "year": "2023" }, { "authors": "K Kroenke; R L Spitzer; J B W Williams; B Lowe", "journal": "Psychosomatics", "ref_id": "b27", "title": "An Ultra-Brief Screening Scale for Anxiety and Depression: The PHQ-4", "year": "2009-11" }, { "authors": "Bishal Lamichhane", "journal": "", "ref_id": "b28", "title": "Evaluation of ChatGPT for NLP-based Mental Health Applications", "year": "2023" }, { "authors": "Yunxiang Li; Zihan Li; Kai Zhang; Ruilong Dan; Steve Jiang; You Zhang", "journal": "", "ref_id": "b29", "title": "ChatDoctor: A Medical Chat Model Fine-Tuned on a Large Language Model Meta-AI (LLaMA) Using Medical Domain Knowledge", "year": "2023" }, { "authors": "Xin Liu; Daniel Mcduff; Geza Kovacs; Isaac Galatzer-Levy; Jacob Sunshine; Jiening Zhan; Ming-Zher Poh; Shun Liao; Paolo Di Achille; Shwetak Patel", "journal": "", "ref_id": "b30", "title": "Large Language Models are Few-Shot Health Learners", "year": "2023" }, { "authors": "Xin Liu; Daniel Mcduff; Geza Kovacs; Isaac Galatzer-Levy; Jacob Sunshine; Jiening Zhan; Ming-Zher Poh; Shun Liao; Paolo Di Achille; Shwetak Patel", "journal": "", "ref_id": "b31", "title": "Large Language Models are Few-Shot Health Learners", "year": "2023" }, { "authors": "John P Daniel J Martin; M Katherine Garske; Davis", "journal": "Journal of consulting and clinical psychology", "ref_id": "b32", "title": "Relation of the therapeutic alliance with outcome and other variables: a meta-analytic review", 
"year": "2000" }, { "authors": "Julie M Stephen M Mattingly; Pino Gregg; Ayse Elvan Audia; Andrew T Bayraktaroglu; Nitesh V Campbell; Vedant Chawla; Munmun De Das Swain; Sidney Choudhury; K D' Mello; Anind K Dey", "journal": "", "ref_id": "b33", "title": "The Tesserae project: Large-scale, longitudinal, in situ, multimodal sensing of information workers", "year": "2019" }, { "authors": "Jun-Ki Min; Afsaneh Doryab; Jason Wiese; Shahriyar Amini; John Zimmerman; Jason I Hong", "journal": "Association for Computing Machinery", "ref_id": "b34", "title": "Toss \"n\" turn: Smartphone as sleep and sleep quality detector", "year": "2014" }, { "authors": "Shayan Mirjafari; Kizito Masaba; Ted Grover; Weichen Wang; G Pino; Andrew T Audia; Nitesh V Campbell; Vedant Chawla; Munmun De Das Swain; Anind K Choudhury; Dey", "journal": "Proc. ACM Interact. Mob. Wearable Ubiquitous Technol", "ref_id": "b35", "title": "Differentiating higher and lower job performers in the workplace using mobile sensing", "year": "2019" }, { "authors": "Stefanie Nickels; Matthew D Edwards; Sarah F Poole; Dale Winter; Jessica Gronsbell; Bella Rozenkrants; David P Miller; Mathias Fleck; Alan Mclean; Bret Peterson", "journal": "JMIR mental health", "ref_id": "b36", "title": "Toward a mobile platform for real-world digital measurement of depression: User-centered design, data quality, and behavioral and clinical modeling", "year": "2021" }, { "authors": "Harsha Nori; Nicholas King; Scott Mayer Mckinney; Dean Carignan; Eric Horvitz", "journal": "", "ref_id": "b37", "title": "Capabilities of GPT-4 on Medical Challenge Problems", "year": "2023" }, { "authors": "Reham Omar; Omij Mangukiya; Panos Kalnis; Essam Mansour", "journal": "", "ref_id": "b38", "title": "Chatgpt versus traditional question answering for knowledge graphs: Current status and future directions towards knowledge graph chatbots", "year": "2023" }, { "authors": " Openai", "journal": "", "ref_id": "b39", "title": "", "year": "2023" }, { "authors": "Chengwei Qin; Aston Zhang; Zhuosheng Zhang; Jiaao Chen; Michihiro Yasunaga; Diyi Yang", "journal": "", "ref_id": "b40", "title": "Is ChatGPT a general-purpose natural language processing task solver?", "year": "2023" }, { "authors": "Alec Radford; Karthik Narasimhan; Tim Salimans; Ilya Sutskever", "journal": "", "ref_id": "b41", "title": "Improving Language Understanding by Generative Pre-Training", "year": "2018" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b42", "title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer", "year": "2020" }, { "authors": "Joshua Robinson; David Wingate", "journal": "", "ref_id": "b43", "title": "Leveraging Large Language Models for Multiple Choice Question Answering", "year": "2023" }, { "authors": "Sohrab Saeb; Mi Zhang; Christopher J Karr; Stephen M Schueller; Marya E Corden; Konrad P Kording; David C Mohr", "journal": "Journal of Medical Internet Research", "ref_id": "b44", "title": "Mobile phone sensor correlates of depressive symptom severity in daily-life behavior: An exploratory study", "year": "2015" }, { "authors": "Asif Salekin; Jeremy W Eberle; Jeffrey J Glenn; Bethany A Teachman; John A Stankovic", "journal": "Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies", "ref_id": "b45", "title": "A Weakly Supervised Learning Framework For Detecting Social Anxiety And 
Depression", "year": "2018" }, { "authors": "Yasaman S Sefidgar; Woosuk Seo; Kevin S Kuehn; Tim Althoff; Anne Browning; Eve Riskin; Paula S Nurius; Anind K Dey; Jennifer Mankoff", "journal": "Proc. ACM Hum.-Comput. Interact", "ref_id": "b46", "title": "Passively-sensed behavioral correlates of discrimination events in college students", "year": "2019-11" }, { "authors": "Karan Singhal; Tao Tu; Juraj Gottweis; Rory Sayres; Ellery Wulczyn; Le Hou; Kevin Clark; Stephen Pfohl; Heather Cole-Lewis; Darlene Neal; Mike Schaekermann; Amy Wang; Mohamed Amin; Sami Lachgar; Philip Mansfield; Sushant Prakash; Bradley Green; Ewa Dominowska; Blaise Aguera Y Arcas; Nenad Tomasev; Yun Liu; Renee Wong; Christopher Semturs; S Sara Mahdavi; Joelle Barral; Dale Webster; Greg S Corrado; Yossi Matias; Shekoofeh Azizi; Alan Karthikesalingam; Vivek Natarajan", "journal": "", "ref_id": "b47", "title": "Towards Expert-Level Medical Question Answering with Large Language Models", "year": "2023" }, { "authors": "Rohan Taori; Ishaan Gulrajani; Tianyi Zhang; Yann Dubois; Xuechen Li; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "", "ref_id": "b48", "title": "Stanford alpaca: An instruction-following llama model", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurelien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b49", "title": "LLaMA: Open and Efficient Foundation Language Models", "year": "2023" }, { "authors": "Fabian Wahle; Tobias Kowatsch; Elgar Fleisch; Michael Rufer; Steffi Weidt", "journal": "JMIR mHealth and uHealth", "ref_id": "b50", "title": "Mobile Sensing and Support for People With Depression: A Pilot Trial in the Wild", "year": "2016" }, { "authors": "Rui Wang; Min S H Aung; Saeed Abdullah; Rachel Brian; Andrew T Campbell; Tanzeem Choudhury; Marta Hauser; John Kane; Michael Merrill; Emily A Scherer; W S Vincent; Dror Tseng; Ben-Zeev", "journal": "", "ref_id": "b51", "title": "CrossCheck: Toward passive sensing and detection of mental health changes in people with schizophrenia", "year": "2016" }, { "authors": "Rui Wang; Fanglin Chen; Zhenyu Chen; Tianxing Li; Gabriella Harari; Stefanie Tignor; Xia Zhou; Dror Ben-Zeev; Andrew T Campbell", "journal": "ACM", "ref_id": "b52", "title": "StudentLife: Assessing mental health, academic performance and behavioral trends of college students using smartphones", "year": "2014" }, { "authors": "Rui Wang; Gabriella Harari; Peilin Hao; Xia Zhou; Andrew T Campbell", "journal": "", "ref_id": "b53", "title": "SmartGPA: how smartphones can assess and predict academic performance of college students", "year": "2015" }, { "authors": "Rui Wang; Weichen Wang; Alex Dasilva; Jeremy F Huckins; William M Kelley; Todd F Heatherton; Andrew T Campbell", "journal": "Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies", "ref_id": "b54", "title": "Tracking Depression Dynamics in College Students Using Mobile Phone and Wearable Sensing", "year": "2018" }, { "authors": "Xuezhi Wang; Jason Wei; Dale Schuurmans; Quoc Le; Ed Chi; Sharan Narang; Aakanksha Chowdhery; Denny Zhou", "journal": "", "ref_id": "b55", "title": "Selfconsistency improves chain of thought reasoning in language models", "year": "2022" }, { "authors": "Jason Wei; Maarten Bosma; Y Vincent; Kelvin Zhao; Adams Wei Guu; Brian Yu; Nan Lester; Andrew M Du; Dai; V Quoc; Le", "journal": "", 
"ref_id": "b56", "title": "Finetuned Language Models Are Zero-Shot Learners", "year": "2022" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Brian Ichter; Fei Xia; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b57", "title": "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models", "year": "2023" }, { "authors": "Chaoyi Wu; Xiaoman Zhang; Ya Zhang; Yanfeng Wang; Weidi Xie", "journal": "", "ref_id": "b58", "title": "PMC-LLaMA: Further Finetuning LLaMA on Medical Papers", "year": "2023" }, { "authors": "Xuhai Xu; Prerna Chikersal; Janine M Dutcher; Yasaman S Sefidgar; Woosuk Seo; Michael J Tumminia; Daniella K Villalba; Sheldon Cohen; Kasey G Creswell; J David Creswell; Afsaneh Doryab; Paula S Nurius; Eve Riskin; Anind K Dey; Jennifer Mankoff", "journal": "", "ref_id": "b59", "title": "Leveraging Collaborative-Filtering for Personalized Behavior Modeling: A Case Study of Depression Detection among College Students", "year": "2021-03" }, { "authors": "Xuhai Xu; Xin Liu; Han Zhang; Weichen Wang; Subigya Nepal; Yasaman Sefidgar; Woosuk Seo; Kevin S Kuehn; Jeremy F Huckins; Margaret E Morris; Paula S Nurius; Eve A Riskin; Shwetak Patel; Tim Althoff; Andrew Campbell; Anind K Dey; Jennifer Mankoff", "journal": "Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies", "ref_id": "b60", "title": "GLOBEM: Cross-Dataset Generalization of Longitudinal Human Behavior Modeling", "year": "2023" }, { "authors": "Xuhai Xu; Ebrahim Nemati; Korosh Vatanparvar; Viswam Nathan; Tousif Ahmed; Md Mahbubur Rahman; Daniel Mccaffrey; Jilong Kuang; Jun ; Alex Gao", "journal": "", "ref_id": "b61", "title": "Listen2Cough: Leveraging End-to-End Deep Learning Cough Detection Model to Enhance Lung Health Assessment Using Passively Sensed Audio", "year": "2021-03" }, { "authors": "Xuhai Xu; Han Zhang; Yasaman Sefidgar; Yiyi Ren; Xin Liu; Woosuk Seo; Jennifer Brown; Kevin Kuehn; Mike Merrill; Paula Nurius; Shwetak Patel; Tim Althoff; Margaret E Morris; Eve Riskin; Jennifer Mankoff; Anind K Dey", "journal": "", "ref_id": "b62", "title": "GLOBEM Dataset: Multi-Year Datasets for Longitudinal Human Behavior Modeling Generalization", "year": "2022" }, { "authors": "Kailai Yang; Shaoxiong Ji; Tianlin Zhang; Qianqian Xie; Sophia Ananiadou", "journal": "", "ref_id": "b63", "title": "On the Evaluations of ChatGPT and Emotionenhanced Prompting for Mental Health Analysis", "year": "2023" }, { "authors": "Han Zhang; Margaret E Morris; Paula S Nurius; Kelly Mack; Jennifer Brown; Kevin S Kuehn; Yasaman S Sefidgar; Xuhai Xu; Eve A Riskin; Anind K Dey; Jennifer Mankoff", "journal": "ACM Transactions on Accessible Computing", "ref_id": "b64", "title": "Impact of Online Learning in the Context of COVID-19 on Undergraduates with Disabilities and Mental Health Concerns", "year": "2022-07" }, { "authors": "Qihuang Zhong; Liang Ding; Juhua Liu; Bo Du; Dacheng Tao", "journal": "", "ref_id": "b65", "title": "Can chatgpt understand too? a comparative study on chatgpt and fine-tuned bert", "year": "2023" }, { "authors": "Denny Zhou; Nathanael Schärli; Le Hou; Jason Wei; Nathan Scales; Xuezhi Wang; Dale Schuurmans; Claire Cui; Olivier Bousquet; Quoc Le; Ed Chi", "journal": "", "ref_id": "b66", "title": "Least-to-Most Prompting Enables Complex Reasoning in Large Language Models", "year": "2023" } ]
[]
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b0", "b1", "b3", "b4", "b5", "b2", "b1", "b6" ], "table_ref": [], "text": "Recently, deep learning methods have achieved cutting-edge results in various computer vision tasks, including semantic segmentation [1]. However, training deep learning models typically requires large and well-curated annotated datasets due to the millions of trainable parameters involved. The collection of such training data primarily relies on manual annotation, which is often quite costly, especially in the context of semantic segmentation, due to the need for pixel-level precision. Recently, self-supervised learning has emerged as a promising alternative to traditional well-curated labeled data. By extracting meaningful representations from the inherent structure of unlabeled data, it eliminates the reliance on supervised loss functions that require manual annotations. This approach has demonstrated success in a variety of medical imaging applications, including dermatological imaging [2] and radiology scans [3], among others. It is also important to acknowledge that semantic segmentation is intricately interwoven with local texture and global image context dependencies. Numerous studies have shown that simultaneous learning of local and global representations can significantly improve the accuracy of dense predictions [1,2,4]. In a related study, Ahn et al. [5] introduced the SGSCN network, which uses multiple loss functions to group spatially connected pixels with similar features, enabling an iterative learning of pixel features and clustering assignments from a single image. Taher et al. [6] developed the Context-Aware instance Discrimination (CAiD) framework to improve instance discrimination learning in medical images. CAiD extracts detailed and discriminative information from different local contexts in unlabeled medical images. Karimi et al. [3] presented a dual-branch transformer network that captures both global context and local details. This network utilizes self-supervised learning by considering semantic relationships between different scales, ensuring inter-scale consistency, and enforcing spatial stability within each scale for self-supervised content clustering. Another approach [2] aimed to address the lack of local and boundary representations by combining the CNN and vision transformer features. He et al. [7] introduced Geometric Visual Similarity Learning, a method that incorporates topological invariance to measure inter-image similarity and create consistent representations of semantic regions. This raises a central question: how can a model jointly capture local and global consistency and enhance semantic segmentation? To address this challenge, we introduce FuseNet, a novel self-supervised approach to semantic segmentation that aims to achieve a balance between local texture details and global context dependencies. Our contributions include: ➊ A self-supervised approach for semantic segmentation that minimizes the reliance on expensive manual annotations. ➋ Integration of self-supervised learning to simultaneously capture local and global image characteristics. ➌ Introduction of cross-modal fusion, which enhances the model's ability to handle complex scenarios. ➍ Improved edge alignment and spatial consistency between adjacent pixels." 
}, { "figure_ref": [ "fig_0" ], "heading": "PROPOSED METHOD", "publication_ref": [], "table_ref": [], "text": "We present FuseNet (see Figure 1), a dual-stream, self-supervised approach for image segmentation, which obviates the need for manual annotations. In FuseNet, while one stream processes the original image, the other handles its augmented counterpart, fostering data diversity, and enhancing robustness, invariance, and segmentation quality for real-world scenarios. Crucially, our framework facilitates the exchange of information between these two pathways before finally fusing their insights, which contributes to the model's enhanced performance. Our approach also incorporates data-driven loss functions, facilitating effective content clustering." }, { "figure_ref": [ "fig_0" ], "heading": "Network Architecture", "publication_ref": [ "b2", "b7" ], "table_ref": [], "text": "Dual-path architectures have proven their effectiveness in self-supervised learning by leveraging the benefits of dual data views, leading to reduced overfitting and enhanced generalization [3]. They employ both the original and transformed data to enrich feature representations, fostering robustness. Inspired by this architecture, we have developed a dual-stream framework that simultaneously processes the original and augmented data, thereby enhancing adaptability and resilience across diverse scenarios. First, we apply data augmentation to the image (∈ R^{H×W×C}), utilizing techniques such as ColorJitter and GaussianBlur, to create an augmented version of the image. This process is effective because it introduces controlled variations, i.e. color changes and blurring, which help the model learn to be invariant to different transformations while preserving the overall structural and semantic information. Notably, we refrain from employing transformation augmentation techniques that could potentially compromise the quality of the outcomes. Subsequently, both the original and augmented images are simultaneously fed into a shared-weight encoder, which consists of a 3 × 3 convolutional layer followed by batch normalization (BN), a 1 × 1 depth-wise convolution, and another BN layer. This straightforward architecture is designed to facilitate the embedding of the input images into a high-dimensional feature space, enabling the model to capture intricate and localized patterns within the data. The subsequent projection block plays a crucial role in disentangling and refining meaningful, invariant features within the input data. Post-projection batch normalization is employed to standardize feature distributions, effectively alleviating internal covariate shifts and enhancing network generalization and training stability. The projection block is as follows:\nProjection = LN(Linear_2(GELU(Linear_1(x))) + x), (1)\nwhere LN denotes a LayerNorm layer. Next, the PatchEmbed and NormLayer are utilized to segment the input features into smaller patches and normalize these patches, respectively. The normalization is achieved by dividing the patch features by their L2 norms. This process yields two tokenized sequences, denoted as I ∈ R^{(H/p \cdot W/p) × p^2 C} and A ∈ R^{(H/p \cdot W/p) × p^2 C}, where p represents the patch size, set as H/8. These feature maps are then used for cross-modal fusion.\nTo facilitate effective information exchange between the image features and the augmented image features derived from the projection heads, we employ the cross-attention module as illustrated in Figure 1. 
This enhancement strengthens the model's ability to grasp and utilize the shared attributes and dissimilarities between the original and augmented views of the data, fostering a more comprehensive understanding of the data's distribution, cross-modal relationships, and local dependencies. To give greater emphasis to the input image features, we apply a coefficient weight α to scale these features. The cross-attention weight is also shared between the two streams. In the image stream, x_1 represents the original image features and x_2 represents the augmented image features. In the augmented image stream, the roles of x_1 and x_2 are reversed. The detailed implementation of the cross-attention block is depicted in Equation 2:\nQ, K = Proj(LN(x_1)), V = Proj(LN(x_2)), X = [LN(x_1) \| LN(x_2)], E = ρ_q(Q)(ρ_k(K)^T V), T = X + LN(Conv1×1(E)), Output = T + MixFFN(LN(T)), (2)\nwhere Proj refers to a linear projection layer, ρ_q and ρ_k are SoftMax normalization functions, and MixFFN is a feed-forward network adopted from [8].\nFinally, the outputs from both streams are combined by summation, resulting in a soft prediction map P ∈ R^{H×W×K}, where K represents the number of clusters. To obtain the final semantic segmentation map Y of the same dimensions, we apply the ArgMax function to determine the cluster index for each spatial location. During training, our network iteratively minimizes the cross-entropy loss, which quantifies the discrepancy between the soft prediction map and the segmentation map. Equation 3 shows the cross-entropy loss in our framework:\nL_{ce}(P, Y) = -\frac{1}{H × W} \sum_{i=1}^{H×W} \sum_{j=1}^{K} Y_{i,j} \log(P_{i,j}). (3)\nOur approach leverages the cross-entropy loss to learn the cluster distribution by bolstering the network's confidence in grouping similar pixels. However, it faces challenges in modeling local spatial relationships, which can impact performance in merging adjacent clusters. To address this limitation, we introduce two additional regularization terms: the cross-modal fusion loss and the edge refinement loss." }, { "figure_ref": [], "heading": "Cross-Modal Fusion", "publication_ref": [ "b8" ], "table_ref": [], "text": "In addition to the cross-entropy loss, we introduce a cross-modal fusion approach that enhances the integration of information from both original and augmented image data. This approach encourages the model to develop a unified understanding of both augmented images and their corresponding originals, fostering robust learning. Our approach extends the principles of CLIP [9] by substituting textual data with augmented images, introducing novel advantages specific to our model. This adaptation enables our model to acquire intricate visual representations, effectively aligning with the complexity of the data at hand. The controlled variations introduced by augmentations promote robustness, similar to CLIP's invariance to textual variations, which is critical for real-world data with unpredictable transformations. The CLIP loss is as follows:\nLogit = (I · A^T)/T, Target = SoftMax((I · I^T + A · A^T)/(2T)), L_{CLIP} = (L_{ce}(Logit, Target) + L_{ce}(Logit^T, Target^T))/2, (4)\nwhere T is a temperature parameter. The CLIP loss in our framework aims to align the feature representations of the original image with its augmented counterpart. This alignment strengthens the model's ability to comprehend the shared features and differences between these two perspectives of the same data."
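To make Eq. (4) concrete, the following is a minimal PyTorch sketch of the CLIP-style fusion loss, assuming img_feats and aug_feats are pooled, L2-normalized feature matrices from the two streams; the function name and the temperature value are illustrative rather than taken from the FuseNet implementation.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(img_feats: torch.Tensor, aug_feats: torch.Tensor,
                    temperature: float = 0.07) -> torch.Tensor:
    """CLIP-style fusion loss between the two streams, mirroring Eq. (4).

    img_feats, aug_feats: (N, D) pooled features, assumed L2-normalized.
    Requires PyTorch >= 1.10 for cross_entropy with probabilistic targets.
    """
    logits = img_feats @ aug_feats.t() / temperature
    # Soft targets built from intra-stream similarities, as in Eq. (4).
    targets = F.softmax(
        (img_feats @ img_feats.t() + aug_feats @ aug_feats.t()) / (2 * temperature),
        dim=-1,
    )
    loss_i = F.cross_entropy(logits, targets)          # image -> augmented view
    loss_a = F.cross_entropy(logits.t(), targets.t())  # augmented view -> image
    return (loss_i + loss_a) / 2
```

Each image is pulled toward its own augmented view, while the soft targets built from intra-stream similarities keep visually similar images from being pushed apart too aggressively.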
}, { "figure_ref": [], "heading": "Edge Refinement", "publication_ref": [], "table_ref": [], "text": "To improve edge alignment and promote spatial consistency among adjacent pixels, we introduce the edge refinement loss. This loss function aims to minimize the discrepancy between the segmentation map and its downsampled and subsequently upsampled counterpart, which generates an edge map. By minimizing this loss, our edge refinement technique enhances spatial coherence, encouraging the grouping of neighboring pixels with similar visual features. This approach involves downsampling an image by a factor of β and then upsampling it. This allows us to prioritize key objects within the image and then accurately delineate object boundaries by subtracting the upsampled image from the original segmentation map. As a result, this method leads to improved consistency in spatial relationships and more precise object boundary delineation. The edge refinement loss is defined as follows:\nL_{Boundary} = \sum_{i,j} |(Down-Up-Y)_{i,j} - Y_{i,j}|, (5)\nwhere (Down-Up-Y)_{i,j} and Y_{i,j} denote the downsampled-then-upsampled segmentation map and the segmentation map at pixel location (i, j), respectively." }, { "figure_ref": [], "heading": "Joint Objective", "publication_ref": [], "table_ref": [], "text": "The final loss function used in our training process is a combination of three distinct loss terms, as outlined below:\nL_{joint} = λ_1 L_{ce} + λ_2 L_{CLIP} + λ_3 L_{Boundary}, (6)\nwhere the weighting factors λ_1, λ_2, and λ_3 control the relative importance of each loss term." }, { "figure_ref": [], "heading": "EXPERIMENTS 3.1. Experimental Setup", "publication_ref": [ "b4", "b9", "b10", "b2", "b11", "b12", "b4", "b2", "b4", "b2", "b1", "b13" ], "table_ref": [], "text": "Dataset: First, we followed the same strategy outlined in [5] and utilized the PH2 dataset, introduced by Mendonça et al. [10], which comprises 200 RGB images of melanocytic lesions. This dataset encompasses a wide range of lesion types, presenting a challenging real-world problem. We used all 200 samples in our evaluation. Second, we segmented lungs within CT images using the publicly available lung analysis dataset from Kaggle, described in [11], which includes 2D and 3D CT images. We followed the dataset preparation and evaluation approach outlined in [3]. Evaluation Methodology: To assess our approach, we use a set of evaluation metrics, including the Dice (DSC) score, XOR metric, and Hammoud distance (HM). These metrics allow us to comprehensively compare our method to the unsupervised k-means clustering method and recent self-supervised strategies, specifically DeepCluster [12], IIC [13], the spatial guided self-supervised strategy (SGSCN) [5], and MS-Former [3]. Following [5,3,2], we only consider the cluster with the highest overlap with the ground truth map as the target class prediction when evaluating our method. In addition, we optimize the weighting factors λ_1, λ_2, and λ_3 using [14] and set them to 2.5, 0.5, and 0.5, respectively, for both datasets. We set α to 3 and the downsampling factor β to 16." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "Table 1: The performance of the proposed method is compared to the SOTA approaches on the PH 2 and Lung datasets. In Table 1, we present the segmentation results for both the PH 2 and Lung organ segmentation datasets. 
Our method, FuseNet, achieves superior performance on both skin lesion and lung segmentation tasks, as evidenced by the higher DSC scores and lower HM and XOR values. Notably, FuseNet outperforms SGSCN and MS-Former by leveraging several key components. First, we use the CLIP method to model the consistency between two views of the image, harnessing contextual information. Additionally, we introduce an edge refinement loss function that minimizes the disparity between the segmentation map and its downsampled and then upsampled counterpart. This process generates an edge map, which is crucial for separating overlapped boundaries, especially in the case of skin lesions with deformable shapes. Our dual-stream method, guided by CLIP, is adept at modeling local texture details and global context dependencies among image views. This improves clustering, as shown in Table 1. Going beyond quantitative metrics, we also present qualitative results. Figure 2 illustrates the visual segmentation of both datasets, highlighting the effectiveness of our model in improving segmentation by increasing the number of true positives and reducing the number of false positives." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "ABLATION STUDY", "publication_ref": [], "table_ref": [], "text": "To assess the individual impact of the CLIP module and the spatial loss function within our architecture, we conducted a systematic experimental analysis by selectively deactivating these components (see Table 1). Our results show that a modest 0.9% reduction in the DSC score was observed in the PH 2 dataset when the CLIP module was excluded, underscoring its significant contribution to segmentation accuracy. Similarly, removing the edge refinement loss function resulted in a 0.8% decline in DSC performance, emphasizing its crucial role in maintaining spatial coherence. These findings are visually presented in Figure 2, illustrating the consequences of excluding these modules on segmentation results. To further elucidate the influence of our edge refinement loss, we provide visualizations of edge information throughout the training process, demonstrating how this module enhances boundary information, ultimately facilitating effective object separation (see Figure 3)." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "FuseNet excels in challenging medical image segmentation scenarios, substantially improving segmentation quality. It outperforms SOTA methods based on DSC score, HM, and XOR metrics. Visual results highlight FuseNet's ability to increase true positives and reduce false positives, advancing self-supervised medical image analysis and reducing the need for expensive manual annotations." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "* Equal contribution. This work is partially supported by NIH R01-CA246704, R01-CA240639, R03-EB032943, U01-DK127384-02S1, and U01-CA268808." } ]
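As a companion to Eqs. (5) and (6) above, the sketch below shows one way the edge refinement term and the joint objective could be written in PyTorch. It assumes the loss is applied to the soft prediction map so that it stays differentiable, reuses the λ and β values reported in the experimental setup, and otherwise makes no claim to match the authors' implementation; names and the interpolation mode are assumptions.

```python
import torch
import torch.nn.functional as F

def edge_refinement_loss(soft_map: torch.Tensor, beta: int = 16) -> torch.Tensor:
    """Eq. (5): L1 gap between the map and its down-then-up-sampled counterpart.

    soft_map: (N, K, H, W) soft prediction map; beta: downsampling factor.
    """
    h, w = soft_map.shape[-2:]
    down = F.interpolate(soft_map, size=(h // beta, w // beta),
                         mode="bilinear", align_corners=False)
    up = F.interpolate(down, size=(h, w), mode="bilinear", align_corners=False)
    return (up - soft_map).abs().sum()

def joint_loss(l_ce: torch.Tensor, l_clip: torch.Tensor, l_boundary: torch.Tensor,
               lambdas=(2.5, 0.5, 0.5)) -> torch.Tensor:
    """Eq. (6), with the weighting factors reported in the experimental setup."""
    l1, l2, l3 = lambdas
    return l1 * l_ce + l2 * l_clip + l3 * l_boundary
```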
Semantic segmentation, a crucial task in computer vision, often relies on labor-intensive and costly annotated datasets for training. In response to this challenge, we introduce FuseNet, a dual-stream framework for self-supervised semantic segmentation that eliminates the need for manual annotation. FuseNet leverages the shared semantic dependencies between the original and augmented images to create a clustering space, effectively assigning pixels to semantically related clusters, and ultimately generating the segmentation map. Additionally, FuseNet incorporates a cross-modal fusion technique that extends the principles of CLIP by replacing textual data with augmented images. This approach enables the model to learn complex visual representations, enhancing robustness against variations similar to CLIP's text invariance. To further improve edge alignment and spatial consistency between neighboring pixels, we introduce an edge refinement loss. This loss function considers edge information to enhance spatial coherence, facilitating the grouping of nearby pixels with similar visual features. Extensive experiments on skin lesion and lung segmentation datasets demonstrate the effectiveness of our method. Codebase.
FUSENET: SELF-SUPERVISED DUAL-PATH NETWORK FOR MEDICAL IMAGE SEGMENTATION
[ { "figure_caption": "Fig. 1 :1Fig. 1: A general overview of the proposed FuseNet (left) and the cross-attention module (right).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :Fig. 3 :23Fig. 2: Visual comparison of different methods on the PH 2 skin lesion segmentation and Lung datasets (left) and the impact of individual loss functions (right).", "figure_data": "", "figure_id": "fig_1", "figure_label": "23", "figure_type": "figure" } ]
Amirhossein Kazerouni; Sanaz Karimijafarbigloo; Reza Azad; Yury Velichko; Ulas Bagci; Dorit Merhof
[ { "authors": "Reza Azad; Ehsan Khodapanah Aghdam; Amelie Rauland; Yiwei Jia; Atlas Haddadi Avval; Afshin Bozorgpour; Sanaz Karimijafarbigloo; Joseph Paul Cohen; Ehsan Adeli; Dorit Merhof", "journal": "", "ref_id": "b0", "title": "Medical image segmentation review: The success of u-net", "year": "2022" }, { "authors": "Abdulrahman Gharawi; Mohammad D Alahmadi; Lakshmish Ramaswamy", "journal": "Mathematics", "ref_id": "b1", "title": "Self-supervised skin lesion segmentation: An annotation-free approach", "year": "2023" }, { "authors": "Sanaz Karimijafarbigloo; Reza Azad; Amirhossein Kazerouni; Dorit Merhof", "journal": "", "ref_id": "b2", "title": "Ms-former: Multi-scale self-guided transformer for medical image segmentation", "year": "2023" }, { "authors": "Sanaz Karimijafarbigloo; Reza Azad; Amirhossein Kazerouni; Yury Velichko; Ulas Bagci; Dorit Merhof", "journal": "", "ref_id": "b3", "title": "Self-supervised semantic segmentation: Consistency over transformation", "year": "2023" }, { "authors": "Euijoon Ahn; Dagan Feng; Jinman Kim", "journal": "Springer", "ref_id": "b4", "title": "A spatial guided self-supervised clustering network for medical image segmentation", "year": "2021-10-01" }, { "authors": "Mohammad Reza; Hosseinzadeh Taher; Fatemeh Haghighi; Jianming Michael B Gotway; Liang", "journal": "PMLR", "ref_id": "b5", "title": "Caid: Context-aware instance discrimination for self-supervised learning in medical imaging", "year": "2022" }, { "authors": "Yuting He; Guanyu Yang; Rongjun Ge; Yang Chen; Jean-Louis Coatrieux; Boyu Wang; Shuo Li", "journal": "", "ref_id": "b6", "title": "Geometric visual similarity learning in 3d medical image self-supervised pre-training", "year": "2023" }, { "authors": "Xiaohong Huang; Zhifang Deng; Dandan Li; Xueguang Yuan", "journal": "", "ref_id": "b7", "title": "Missformer: An effective medical image segmentation transformer", "year": "2021" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b8", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Teresa Mendonc ¸a; Pedro M Ferreira; Jorge S Marques; André Rs Marcal; Jorge Rozeira", "journal": "IEEE", "ref_id": "b9", "title": "Ph 2-a dermoscopic image database for research and benchmarking", "year": "2013" }, { "authors": "Reza Azad; Maryam Asadi-Aghbolaghi; Mahmood Fathy; Sergio Escalera", "journal": "", "ref_id": "b10", "title": "Bi-directional convlstm u-net with densley connected convolutions", "year": "2019" }, { "authors": "Mathilde Caron; Piotr Bojanowski; Armand Joulin; Matthijs Douze", "journal": "", "ref_id": "b11", "title": "Deep clustering for unsupervised learning of visual features", "year": "2018" }, { "authors": "Xu Ji; Joao F Henriques; Andrea Vedaldi", "journal": "", "ref_id": "b12", "title": "Invariant information clustering for unsupervised image classification and segmentation", "year": "2019" }, { "authors": "Takuya Akiba; Shotaro Sano; Toshihiko Yanase; Takeru Ohta; Masanori Koyama", "journal": "", "ref_id": "b13", "title": "Optuna: A nextgeneration hyperparameter optimization framework", "year": "2019" } ]
[ { "formula_coordinates": [ 2, 325.49, 463.97, 233.5, 9.72 ], "formula_id": "formula_0", "formula_text": "Projection = LN(Linear 2 (GELU(Linear 1 (x)) + x),(1)" }, { "formula_coordinates": [ 2, 315.21, 541.07, 243.78, 27.14 ], "formula_id": "formula_1", "formula_text": "I ∈ R ( H p W p )×p 2 C and A ∈ R ( H p W p )×p 2 C" }, { "formula_coordinates": [ 3, 90.32, 123.74, 207.88, 70.31 ], "formula_id": "formula_2", "formula_text": "Q, K = Proj(LN(x 1 )), V = Proj(LN(x 2 )), X = [LN(x 1 )||LN(x 2 )], E = ρ q (Q)(ρ k (K) T V), T = X + LN(Conv1 × 1(E)), Output = T + MixFFN(LN(T)),(2)" }, { "formula_coordinates": [ 3, 67.68, 358.1, 230.53, 30.32 ], "formula_id": "formula_3", "formula_text": "L ce (P, Y) = - 1 H × W H×W i=1 K j=1 Y i,j log (P i,j ) . (3)" }, { "formula_coordinates": [ 3, 57.42, 663.6, 237.79, 56.1 ], "formula_id": "formula_4", "formula_text": "Logit = (I • A T )/T, Target = SoftMax((I • I T + A • A T )/2T ), L CLIP = L ce (Logit, Target) + L ce (Logit T , Target T ) /2, (4" }, { "formula_coordinates": [ 3, 294.33, 711.06, 3.87, 8.64 ], "formula_id": "formula_5", "formula_text": ")" }, { "formula_coordinates": [ 3, 350.92, 351.2, 208.08, 19.91 ], "formula_id": "formula_6", "formula_text": "L Boundary = i,j (|(Down-Up-Y) i,j -Y i,j | ,(5)" }, { "formula_coordinates": [ 3, 347.43, 466.85, 211.56, 9.81 ], "formula_id": "formula_7", "formula_text": "L joint = λ 1 L ce + λ 2 L CLIP + λ 3 L Boundary ,(6)" } ]
10.18653/v1/2021.conll-1.9
2023-11-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b20", "b5", "b13", "b0", "b25", "b11", "b4", "b28", "b26", "b0", "b9", "b19", "b0", "b0" ], "table_ref": [ "tab_1" ], "text": "One of the most interesting aspects of Large Language Models (LLMs) is that while they are usually trained without explicit linguistic supervision, or knowledge injection, there is a relevant body of work that shows that both linguistic structures and relational knowledge emerge even without the need for fine-tuning (Petroni et al., 2019;Goldberg, 2019;Marvin and Linzen, 2018). This has generated an ongoing discussion on how these models are learning and if the current training objectives and text-only data are enough to cover a wide range of downstream tasks.\nOne of the perspectives that have been used recently to study this phenomenon is color language (Abdou et al., 2021). Color naming is a relevant task (Steels et al., 2005), well understood in physiological (Loreto et al., 2012) and sociocultural terms (Gibson et al., 2017) and has an inherent intent from the communicative point of view (Zaslavsky et al., 2019;Twomey et al., 2021). In a recent work, Abdou et al. (2021) proposed to quantify the alignment, understood as the structural correspondence between a point in the color space (e.g. RGB or CIELAB) and its associated name in natural language represented as the feature vector obtained from a LLM. For such empirical study, they make use of the Color Lexicon of American English (Lindsey and Brown, 2014), based on the Munsell Chart of color chips (Munsell et al., 1915), finding that in general alignment is present across the color spectrum. While this work provides valuable insights on the actual grounding requirements, most of the color descriptions are monolexemic.\nThe above findings sparked our interest in the issue and led us to run preliminary experiments to test to what degree this alignment exists for less pragmatic color descriptions. We observed that such alignment drops significantly in the presence of more complex and subjective ways to describe specific colors, for example, when the color names contain multiple nouns or NPs, as well as terms with other parts-of-speech. To further study these issues, we construct a more challenging test scenario by using and processing data from ColorNames 1 , an online service where users can collaboratively generate (color, color description) pairs. Given its free-form structure, the COLORNAMES dataset represents a rich and heterogeneous corpus of color descriptions. This can be appreciated clearly in Table 1.\nUsing this new, more challenging dataset, we conducted two experiments. The first one complements the work of Abdou et al. (2021) in assessing the inter-space alignment between color and LLM spaces, with two key new additions: (i) we propose a more fine-grained color name segmentation, considering metrics associated to subjectivity and concreteness, and (ii) we adopt an Optimal Transport-based metric to complement the existing alignment methods. For the second experiment, we focus on the representations obtained from the LLMs and their ability to ground color comparatives as a way to structure color descriptions. Critically, we do this without the need of accessing underlying points in color space. Concretely, we assess to what extent the LLM is able to discover a comparative-based relationship between a pair of color names without the need for explicit underlying color information, following a few shot learning configuration. 
For example, what would be the correct comparative associated with a pair of color names (e.g., between blood red wine and funeral roses)? If the model succeeds on that task, it could mean that such relationships are somehow encoded during pretraining, without the need for an explicit color signal.\nThe results of the proposed experiments show that, in general, the alignment scores between spaces on the proposed dataset are low, contrasting with the results reported by Abdou et al. (2021) on the Munsell dataset. This means that the complexity of the color descriptions, exemplified by the subjectivity and concreteness metrics, strongly impacts the perceptual structure the language models achieve. On the other hand, the results of the second experiment on comparative prediction show that all language models are able to perform surprisingly well, even in scenarios of high subjectivity. This discrepancy leads us to think that the models retain a certain structure learned through language, but are not able to translate it into the color modality." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b7", "b15", "b14", "b16", "b17", "b27", "b6", "b18", "b14", "b0" ], "table_ref": [], "text": "Color language, as a medium to study grounding, has been used in several works that mainly try to learn a mapping between descriptions and their associated points in color space: Kawakami et al. (2016), based on a character-level model; Monroe et al. (2016) and McMahan and Stone (2015), which take a color representation as input and generate a natural language description; Monroe et al. (2017), who incorporate contextual information to guide the generation; and Monroe et al. (2018), who tested it in a bilingual setting. Winn and Muresan (2018); Han et al. (2019) proposed to model comparatives between colors. In most of these works, the source of color names comes from Munroe (2010), which compresses the results of an online survey where participants were asked to provide free-form labels in natural language to various RGB samples. This data was subsequently filtered by McMahan and Stone (2015), leading to a total number of samples of over 2 million instances, but with a number of unique color names constrained to only 829. This suggests a reduction in the complexity of the modeling tasks proposed by previous work, as the vocabulary is fairly small and has a homogeneous frequency. In contrast, the empirical study we propose does not have such a constraint, allowing us to work with unique, subjective descriptions that are richer in vocabulary. In terms of using color to understand perceptual structure in LLMs, our direct inspiration is the work by Abdou et al. (2021), where the authors perform experiments to quantify the alignment between points in color space and embeddings obtained by encoding the associated color names with LLMs." }, { "figure_ref": [], "heading": "Experimental Setting", "publication_ref": [ "b2", "b12", "b3", "b10", "b24", "b1" ], "table_ref": [ "tab_1", "tab_3" ], "text": "Data We use data from ColorNames2, which crowdsources (color, color name) pairs. Table 1 presents some examples. The extracted data was filtered to standardize the comparison. Only English sentences were kept, spam entries were removed using predefined rules, and only color names with a maximum of five words were kept. The resulting dataset consists of 953,522 pair instances, with a total vocabulary size of 111,531 tokens. 
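As an illustration of the filtering step just described, the sketch below applies the three stated rules (English only, spam removal, at most five words). The paper does not spell out its exact spam rules or language check, so the regex-based heuristics here are placeholder assumptions rather than the authors' actual pipeline.

```python
import re

MAX_WORDS = 5  # per the filtering rule described above

# Illustrative stand-ins: the real spam rules and English check are not specified.
SPAM_PATTERNS = [re.compile(r"(.)\1{4,}"), re.compile(r"https?://")]

def looks_english(text: str) -> bool:
    # Crude proxy: keep descriptions made of basic Latin letters, digits and a few symbols.
    return bool(re.fullmatch(r"[A-Za-z0-9' \-]+", text))

def keep_pair(color_hex: str, description: str) -> bool:
    desc = description.strip().lower()
    if not desc or len(desc.split()) > MAX_WORDS:
        return False
    if not looks_english(desc):
        return False
    return not any(p.search(desc) for p in SPAM_PATTERNS)

pairs = [("#7b1113", "blood red wine"), ("#aaaaaa", "xxxxxxxxxx spam")]
filtered = [(c, d) for c, d in pairs if keep_pair(c, d)]
print(filtered)
```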
As seen in Table 2, words with the highest frequencies correspond to color words. In terms of POS patterns, the data presents a total of 3,809 combinations, extracted using Spacy3 but the most frequent patterns represent ways to modify nouns, by using an adjective (e.g. dark apple) or a sequence of nouns. We computed concreteness scores for the descriptions based on Brysbaert et al. (2014), which provides a defined set of lemmas with a ranking varying from 1 to 5. For example, red pine brown gets a score of 4.3, while mysterious skyscape gets a score of 1.9. In this sense, we make the assumption that lower concreteness means higher abstractedness. Additionally, subjectivity scores were computed based on TextBlob (Loria et al., 2018), a rule-based approach that provides a score between 0 and 1, where a description like thick and creamy gets a score of 0.47, and mathematically perfect purple gets a 1.0. Figure 3 shows the correspondence between the scores and the expected usage of three sample words, ranging from ugly, a term associated with subjectivity to apple, which is commonly used to represent the reds spectrum. In the case of rich, it could have mixed connotations, which is reflected in its histogram having most frequent values at the center.\nLanguage Models For all experiments, we used three language models, namely, BERT (Devlin et al., 2019), Roberta (Liu et al., 2019), T5 (Raffel et al., 2020), all of them in their large type, based on their availability on HuggingFace 4 . As a control baseline, we use FastText word embeddings (Bojanowski et al., 2017). The selection of such models is based on the need to explicitly extract embeddings from the LLM, which means that in principle only white-box models can be used. This ruled out API-based systems like GPT-3 and other similar. Moreover, even for competitive white-box models such as LLaMa, given the size of our introduced dataset (around million examples), its usage is left for future work. Finally, we note that our selection of LLMs lies within the masked-language modelling domain (MLM). This is a deliberate and critical decision, as it allows for our experiments to be performed in a controlled in-filling setting, limiting what the LLM can output and allowing us to parse their generations automatically. More 4 https://huggingface.co/models up-to-date models are all causal LMs (with a few exceptions), which means that our capacity to control is more limited, as these models cannot in-fill text. Moreover, it has been shown that the output of these models is highly dependent on how they are prompted, usually requiring a huge amount of work into prompt construction in order to control the output, which adds further complications." }, { "figure_ref": [ "fig_1" ], "heading": "Experiments", "publication_ref": [ "b0", "b0", "b8", "b23", "b21", "b0", "b27", "b0" ], "table_ref": [ "tab_5", "tab_7" ], "text": "Experiment I: Inter-space Alignment This first experiment is directly inspired by Abdou et al. (2021). In this case, we want to assess the alignment between color and LM feature spaces. 
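Before the alignment measurements, here is a minimal sketch of how the two description-level scores discussed above can be computed. It assumes a local CSV copy of the Brysbaert et al. (2014) concreteness norms (the file name and column headers are hypothetical) and uses TextBlob's rule-based subjectivity, which is the tool named in the text.

```python
import csv
from statistics import mean
from textblob import TextBlob  # rule-based subjectivity in [0, 1]

# Hypothetical local copy of the Brysbaert et al. (2014) norms; real headers may differ.
def load_concreteness(path="concreteness_ratings.csv"):
    with open(path, newline="", encoding="utf-8") as f:
        return {row["Word"].lower(): float(row["Conc.M"]) for row in csv.DictReader(f)}

def concreteness(description, norms):
    # Average the ratings of the words found in the lexicon; 1 = abstract, 5 = concrete.
    scores = [norms[w] for w in description.lower().split() if w in norms]
    return mean(scores) if scores else None

def subjectivity(description):
    return TextBlob(description).sentiment.subjectivity

norms = load_concreteness()
print(concreteness("red pine brown", norms))
print(subjectivity("mathematically perfect purple"))
```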
For measuring such alignment, we replicated the settings proposed by Abdou et al. (2021) for the case of (i) Linear Mapping (LMap), where, given a set of n (color, color name) pairs, the alignment is measured as the fitness of the regressor $W \in \mathbb{R}^{d_{LM} \times 3}$ that minimizes $\|XW - Y\|_2^2 + \alpha \|W\|_1$, with $\alpha$ a regularization parameter, $X \in \mathbb{R}^{n \times d_{LM}}$ the color name embeddings and $Y \in \mathbb{R}^{n \times 3}$ the vectors coming from the color space, and (ii) Representational Similarity Analysis (RSA) (Kriegeskorte et al., 2008), a non-parametric method whose score is operationalized via the mean Kendall's $\tau$ between both modalities. In addition, we propose to model alignment as an Optimal Transport (OT) (Peyré et al., 2019) problem, where the goal is to find a transport matrix that minimizes the cost of moving all text samples onto their corresponding color samples. We rely on the Gromov-Wasserstein distance (GW) (Peyré et al., 2016), which extends the OT setting from samples to metric spaces. GW finds a mapping $T \in \mathbb{R}^{n \times n}_{+}$ that minimizes the GW cost $\sum_{i,j,k,l} \|C^{\text{TEXT}}_{ik} - C^{\text{COLOR}}_{jl}\|_2^2 \, T_{ij} T_{kl}$ subject to $0 \le T_{ij} \le 1$, $\sum_i T_{ij} = \frac{1}{n}$, and $\sum_j T_{ij} = \frac{1}{n}$, where $C^{\text{TEXT}} = \cos(X, X) \in \mathbb{R}^{n \times n}$ and $C^{\text{COLOR}} = \cos(Y, Y) \in \mathbb{R}^{n \times n}$ are the within-domain similarity matrices. Therefore, $T_{ij}$ denotes the probability of assigning a sample i in the text space to a sample j in the color space. We grouped the examples into uniform segments associated with the subjectivity and concreteness scores, computing alignment scores per segment, per LM. In general, the results show the alignment is low, and it decreases as subjectivity increases. Similarly to Abdou et al. (2021), FastText behaves as a strong baseline; we hypothesize that, given the uniqueness of the descriptions, its n-gram based tokenizer could be more stable than the tokenizers implemented within the LMs. Figure 2 shows the results for all models on all segments using LMap and OT. (We omitted RSA as its behavior is very similar to LMap.)\nExperiment II: Perceptual Structure via Comparative Identification The objective of this experiment is to determine if an LM can structure relationships between color descriptions without accessing the associated points in color space. To that end, we design a task where, given two color descriptions, the LM has to determine the correct comparative that relates both descriptions (e.g., darker or lighter). We first match the dataset provided by Winn and Muresan (2018), which consists of tuples ([reference color points], comparative, [target color points]), against COLORNAMES, by sampling (color, color description) pairs $(c_i, cd_i)$, $(c_j, cd_j)$ and retrieving the comparative that simultaneously minimizes the distance between [reference color points] and $c_i$, and between [target color points] and $c_j$. After this step, we have, for any pair of descriptions in the COLORNAMES dataset, a ranking of the most suitable comparatives, based on explicit grounding. Table 3 provides matched examples.\nWe operationalize the task as few-shot inference. First, we randomly select from the matched dataset K tuples of the form (description_i, comparative, description_j), of which K - 1 are used to construct the labeled part of a prompt, following the template \"description_i is [comparative] than description_j\". The remaining k-th tuple is appended as \"description_i is [MASK] than description_j\", i.e., the comparative has been masked. 
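To make the two inter-space alignment measures of Experiment I concrete, here is a small sketch using scikit-learn for the Lasso-style linear mapping and the POT library for the Gromov-Wasserstein coupling. The regularization strength and the toy data are illustrative assumptions, not the values used in the paper.

```python
import numpy as np
import ot  # POT: Python Optimal Transport
from sklearn.linear_model import Lasso
from sklearn.metrics.pairwise import cosine_similarity

def lmap_alignment(X, Y, alpha=0.1):
    """Fit W minimizing ||XW - Y||_2^2 + alpha * ||W||_1 and return the R^2 fit."""
    reg = Lasso(alpha=alpha, max_iter=10_000)
    reg.fit(X, Y)                      # X: (n, d_LM) text embeddings, Y: (n, 3) color points
    return reg.score(X, Y)

def gw_alignment(X, Y):
    """Gromov-Wasserstein coupling between within-domain cosine-similarity structures."""
    C_text, C_color = cosine_similarity(X), cosine_similarity(Y)
    n = X.shape[0]
    p = q = ot.unif(n)                 # uniform marginals, matching the constraints above
    T = ot.gromov.gromov_wasserstein(C_text, C_color, p, q, loss_fun="square_loss")
    return T                           # T[i, j]: prob. of mapping text sample i to color sample j

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(64, 1024)), rng.normal(size=(64, 3))  # toy stand-ins for real embeddings
print(lmap_alignment(X, Y), gw_alignment(X, Y).shape)
```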
The resulting prompt is passed through the LM and the ranking of the most likely tokens for [MASK] is retrieved. As evaluation metric, we chose the Mean Reciprocal Rank (MRR), as it encodes the position of the correct answer among the raking provided by the LM. We experimented using a total set of 81 comparatives and varying parameter K from 5 to 20. We performed the same uniform segmentation in terms of subjectivity and concreteness. Results in general showed surprisingly good results in terms of MRR, in several cases, LM outputs the correct comparative at position 1. There was a natural decay in performance when K was smaller, but for K > 10 results were consistent across models and segments. Figure 3 presents the results for K = 10, showing uniformity that led us to speculate that for this task, subjectivity may not be relevant as a control factor. From a qualitative point of view, Figure 4 shows the result of constructing a graph based on the comparative relationships correctly inferred by the LM. As it can be seen, (a) there is color coherence in terms of the neighboring relationships, and (b) when sampling paths form the graphs, transitions are consistent. Further investigation of these structures is left for future work. Additionally, trying to assess the impact on the language model selection, we experimented considering ChatGPT (web-based query) and Llama-2 (llama-2-13b-chat) as language models tasked to predict the comparatives. We found that, in several cases (with different prompts) even when using a k-shot setting, these models often created new types of comparatives (e.g. \"SIMILAR IN LIGHTNESS\"). As such new comparatives are not present in the ground truth of our dataset, our evaluation framework becomes increasingly complicated since we would need to further post-process the model outputs. Such a more complex experiment is left for future work. Impact of inner context Finally, differently from (Abdou et al., 2021), as our descriptions are not appended to any additional context at encoding time, we want to assess if their complexity acts as a natural source of context. For example, for the description mustard taxi yellow, we can see how the generic yellow color word is being conditioned by two nouns, mustard and taxi. An analysis of our data showed that out of 900K instances, around 390K (43 %) contain a color word. Based on this, we split the data into two chunks and re-run the two experiments described above. The results show that for the case of the alignment task, the mean R-scores from the regressor associated with the set of descriptions that have and do not have a color word are 0.401 and 0.381 respectively (using BERTlarge). We can see that there is in fact a difference, although the ranges are within the results reported. Moreover, given the full set of (color, color description) pairs, we cluster them using the color representations using k-means. From the resulting set of clusters, we choose the one that groups the yellows, as an example. Within that cluster, we performed a new grouping, this time using the embeddings of the color descriptions. From this, we now obtained 22 subgroups (again, using standard k-means) that are semantically close (inner context) but that are globally constrained by the color spectrum chosen (yellow). We now study the alignment within semantically-close groups and in-between groups. We selected three distinct groups: a group with semantics about taxis, one about bananas and one about sun/sunset/sunrise. 
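A sketch of the few-shot comparative-infilling evaluation described above is given below, using a masked LM from HuggingFace. It is restricted to single-token comparatives and the demonstration tuples and candidate set are illustrative, so it is a simplification of the actual 81-comparative setup rather than the authors' exact evaluation code.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-large-uncased").eval()

def reciprocal_rank(context_tuples, query, candidates):
    # context_tuples: [(desc_i, comparative, desc_j), ...]; query: (desc_i, gold, desc_j)
    demos = " ".join(f"{a} is {c} than {b}." for a, c, b in context_tuples)
    text = f"{demos} {query[0]} is {tok.mask_token} than {query[2]}."
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    cand_ids = [tok.convert_tokens_to_ids(c) for c in candidates]  # single-token candidates only
    scores = logits[0, mask_pos, cand_ids]
    ranking = [candidates[i] for i in scores.argsort(descending=True)]
    return 1.0 / (ranking.index(query[1]) + 1)

demos = [("blood red wine", "darker", "funeral roses")] * 9   # K - 1 = 9 labeled examples
rr = reciprocal_rank(demos, ("mustard taxi yellow", "lighter", "mysterious skyscape"),
                     ["darker", "lighter", "brighter", "deeper"])
print(rr)  # averaging reciprocal ranks over test pairs gives the MRR reported above
```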
We computed the alignment score and the MRR associated to comparative prediction of each group independently, for pairs of groups and for the combination of all of them, as seen in Table 4.\nAs we can see, in general the alignment scores are reasonable for single groups, but as we start combining them, the scores drop. This is expected as in most cases there is a token that becomes an anchor which is slightly modified by the rest of the description. On the other hand, in the prediction of the comparative among pairs of descriptions, we can see that the accuracies (measured with MRR) dropping as we combine the sets, but still remain mostly in the same ballpark. This, while not conclusive evidence, helps us approximate to the notion that alignment can indeed be influenced by the semantics of the descriptions, but it does not seem to play a big role on how the LM structure the information using comparatives." }, { "figure_ref": [], "heading": "Conclusion and Future Work", "publication_ref": [], "table_ref": [], "text": "We studied how LLMs encode perceptual structure in terms of the alignment between color and text embedding spaces and the inference of comparative relationships in a new, challenging dataset that encapsulates elements of real language usage, such abstractedness and subjectivity. The results show that LMs perform in mixed way, which provides additional evidence on the need for actual grounding. In terms of future work, we are considering the need for additional contextual information, which could be attacked by considering color palettes instead of single colors, and also considering a multilingual approach, to cover the sociocultural aspects of color naming, specially in scenarios of low resource translation." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to thank the anonymous reviewers for their insightful and constructive feedback." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "One of the key limitations of the current work is its focus solely on English language, which, in terms of color language, naturally compresses the cultural aspects associated to English-speaking societies. This is clear when we analyze in detail the vocabulary, we can find cultural archetypes that are probably not transferable. In that, there is an inherent challenge on how to learn color naming conventions in a more broad way. For example, for applications related to low resource languages, understanding the use of language for color description could be helpful for anchoring linguistic patterns." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "Our main objective is to understand how language is used to communicate color. This has several applications, for example in e-commerce, such as search, product reviews (where color is an important attribute). While directly our study tries to abstract from specific user information, it is certain that language usage identification could be used for prospecting or targeting individuals from a specific cultural background." } ]
The need for grounding in language understanding is an active research topic. Previous work has suggested that color perception and color language appear as a suitable test bed to empirically study the problem, given their cognitive significance, and has shown that there is considerable alignment between a defined color space and the feature space defined by a language model. To further study this issue, we collect a large-scale source of colors and their descriptions, containing almost 1 million examples, and perform an empirical analysis to compare two kinds of alignments: (i) inter-space, by learning a mapping between embedding space and color space, and (ii) intra-space, by means of prompting comparatives between color descriptions. Our results show that while color space alignment holds for monolexemic, highly pragmatic color descriptions, this alignment drops considerably in the presence of examples that exhibit elements of real linguistic usage such as subjectivity and abstractedness, suggesting that grounding may be required in such cases.
Perceptual Structure in the Absence of Grounding for LLMs: The Impact of Abstractedness and Subjectivity in Color Language
[ { "figure_caption": "Figure 1 :1Figure 1: Joint histograms showing subjectivity and concreteness scores for color descriptions containing the terms ugly, rich and apple, respectively.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Alignment using (a) LMap (b) OT.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :Figure 4 :34Figure 3: Mean Reciprocal Rank (MRR) for comparative inference across language models.", "figure_data": "", "figure_id": "fig_2", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "Selection of samples from the COLORNAMES dataset, showing the richness of color descriptions.", "figure_data": "", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Most common words and most common POS patterns across color descriptions.", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Examples of color descriptions from COLOR-NAMES and their most suitable comparative(Winn and Muresan, 2018) as obtained by our matching procedure.", "figure_data": "", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results for the sample extracted from the yellow spectrum of the dataset.", "figure_data": "", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" } ]
Pablo Loyola; Edison Marrese-Taylor; Andres Hoyos-Idobro
[ { "authors": "Mostafa Abdou; Artur Kulmizev; Daniel Hershcovich; Stella Frank; Ellie Pavlick; Anders Søgaard", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Can language models encode perceptual structure without grounding? a case study in color", "year": "2021" }, { "authors": "Piotr Bojanowski; Edouard Grave; Armand Joulin; Tomas Mikolov", "journal": "Transactions of the association for computational linguistics", "ref_id": "b1", "title": "Enriching word vectors with subword information", "year": "2017" }, { "authors": "Marc Brysbaert; Amy Beth Warriner; Victor Kuperman", "journal": "Behavior research methods", "ref_id": "b2", "title": "Concreteness ratings for 40 thousand generally known english word lemmas", "year": "2014" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Edward Gibson; Richard Futrell; Julian Jara-Ettinger; Kyle Mahowald; Leon Bergen; Sivalogeswaran Ratnasingam; Mitchell Gibson; Steven T Piantadosi; Bevil R Conway", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b4", "title": "Color naming across languages reflects color use", "year": "2017" }, { "authors": "Yoav Goldberg", "journal": "", "ref_id": "b5", "title": "Assessing bert's syntactic abilities", "year": "2019" }, { "authors": "Xudong Han; Philip Schulz; Trevor Cohn", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Grounding learning of modifier dynamics: An application to color naming", "year": "2019" }, { "authors": "Kazuya Kawakami; Chris Dyer; Noah A Bryan R Routledge; Smith", "journal": "", "ref_id": "b7", "title": "Character sequence models for colorful words", "year": "2016" }, { "authors": "Nikolaus Kriegeskorte; Marieke Mur; Peter A Bandettini", "journal": "Frontiers in systems neuroscience", "ref_id": "b8", "title": "Representational similarity analysisconnecting the branches of systems neuroscience", "year": "2008" }, { "authors": "T Delwin; Angela M Lindsey; Brown", "journal": "Journal of Vision", "ref_id": "b9", "title": "The color lexicon of American English", "year": "2014" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b10", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Vittorio Loreto; Animesh Mukherjee; Francesca Tria", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b11", "title": "On the origin of the hierarchy of color names", "year": "2012" }, { "authors": "Steven Loria", "journal": "Release 0", "ref_id": "b12", "title": "Textblob documentation", "year": "2018" }, { "authors": "Rebecca Marvin; Tal Linzen", "journal": "", "ref_id": "b13", "title": "Targeted syntactic evaluation of language models", "year": "2018" }, { "authors": "Brian Mcmahan; Matthew Stone", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b14", "title": "A bayesian model of grounded color semantics", "year": "2015" }, { "authors": "Will Monroe; Noah Goodman; Christopher Potts", "journal": "", "ref_id": "b15", "title": "Learning to generate compositional color descriptions", "year": "2016" }, { "authors": "Will Monroe; Robert Xd Hawkins; Noah D Goodman; 
Christopher Potts", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b16", "title": "Colors in context: A pragmatic neural model for grounded language understanding", "year": "2017" }, { "authors": "Will Monroe; Jennifer Hu; Andrew Jong; Christopher Potts", "journal": "", "ref_id": "b17", "title": "Generating bilingual pragmatic color references", "year": "2018" }, { "authors": "Randall Munroe", "journal": "", "ref_id": "b18", "title": "Color survey results", "year": "2010" }, { "authors": "Albert Henry Munsell; Dorothy Nickerson", "journal": "Wadsworth, Howland & Company, Incorporated", "ref_id": "b19", "title": "Munsell color system", "year": "1915" }, { "authors": "Fabio Petroni; Tim Rocktäschel; Sebastian Riedel; Patrick Lewis; Anton Bakhtin; Yuxiang Wu; Alexander Miller", "journal": "", "ref_id": "b20", "title": "Language models as knowledge bases?", "year": "2019" }, { "authors": "Gabriel Peyré; Marco Cuturi; Justin Solomon", "journal": "", "ref_id": "b21", "title": "Gromov-wasserstein averaging of kernel and distance matrices", "year": "2016" }, { "authors": " Pmlr", "journal": "", "ref_id": "b22", "title": "", "year": "" }, { "authors": "Gabriel Peyré; Marco Cuturi", "journal": "Foundations and Trends® in Machine Learning", "ref_id": "b23", "title": "Computational optimal transport: With applications to data science", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b24", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Luc Steels; Tony Belpaeme", "journal": "Behavioral and brain sciences", "ref_id": "b25", "title": "Coordinating perceptually grounded categories through language: A case study for colour", "year": "2005" }, { "authors": "Gareth Colin R Twomey; David H Roberts; Joshua B Brainard; Plotkin", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b26", "title": "What we talk about when we talk about colors", "year": "2021" }, { "authors": "Olivia Winn; Smaranda Muresan", "journal": "", "ref_id": "b27", "title": "lighter'can still be dark: Modeling comparative color descriptions", "year": "2018" }, { "authors": "Noga Zaslavsky; Charles Kemp; Naftali Tishby; Terry Regier", "journal": "Topics in cognitive science", "ref_id": "b28", "title": "Color naming reflects both perceptual structure and communicative need", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 306.14, 665.71, 218.26, 54.34 ], "formula_id": "formula_0", "formula_text": "+ that minimizes the GW cost, i,j,k,l ∥C TEXT ik -C COLOR jl ∥ 2 2 T ij T kl subject to 0 ≤ T ij ≤ 1, i T ij = 1 n , and j T ij = 1" } ]
2024-03-28
[ { "figure_ref": [ "fig_0", "fig_0", "fig_1", "fig_1", "fig_1", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b2", "b5", "b19", "b19", "b0", "b20", "b7", "b0", "b20", "b7" ], "table_ref": [], "text": "Scene Text Recognition (STR) is a fundamental task in computer vision, with extensive applications in several domains such as autonomous driving [45], augmented reality [30,33], industrial print recognition [28] and visual understanding [26].\nCurrent progress in STR [3,17,20,35] has demonstrated remarkable performance in numerous scenarios.\nHowever, as shown in Figure 1 (a), STR models are sup- posed to perform robustly over diversified scenarios in the real world, where the scene text is hard to recognize because of domain variation, font diversity, shape deformation, etc. As shown in Figure 1 (b), a straightforward solution involves collecting the corresponding data and then fine-tuning the model for the specific scenario [3,17,20]. This process is computationally intensive and requires multiple model copies for diverse scenarios. The development of a comprehensive and reliable STR model that can effectively handle many real-world scenarios remains a significant challenge. Fortunately, plenty of studies [1,6,21,38] have shown that Large Language Models (LLMs) can easily adapt without additional training. This adaptation is achieved by leveraging only a handful of input-label pairs as context (prompting information), a phenomenon known as \"In-Context Learning\" (ICL). The advantages of ICL inspire our interest in implementing it in STR, such that by fetching a few in-context prompts, a single model can be rapidly adapted to various scenarios without fine-tuning.\nHowever, the equipment of ICL in STR still poses challenges under the existing circumstances. Firstly, it is deemed excessively costly to apply Multi-Modal Large Language Models (M-LLMs) with billions of parameters as a scene text recognizer. And the ICL capabilities in regularsized models have been barely explored currently.\nSecondly, it is hard to acquire ICL capabilities for a STR model with current training strategies. Previous studies have observed that sending image-text sequences for training would naturally endow ICL for M-LLMs [1,21,38], while such a phenomenon is hard to achieve in STR. As shown in Figure 2 (a), we generate sequential training data by randomly concatenating scene text samples. This practice fails as the trained model does not exhibit any performance improvement even when provided with in-domain prompts (Figure 2 Based on the above analysis, we propose E 2 STR (Ego-Evolving STR), a paradigm that facilitates adaptation across diverse scenarios in a training-free manner. Specifically, we propose an in-context training strategy, which enables the model to exploit contextual information from the generated context-rich scene text sequences (Figure 2 (b)). The context-rich scene text sequences are formed using our ST-strategy, which involves random Splitting and Transformation of scene text, hence generating a set of \"sub-samples\". The sub-samples are inner-connected in terms of both visual and linguistic aspects. In the inference stage, E 2 STR fetches in-context prompts based on visual similarities, and utilizes the prompts to assist the recognition, shown in Figure 1 (c). 
In practice, it is found that with proper training and inference strategies, ICL capabilities can also be observed in regular-sized STR models (hundreds of millions of parameters).\nFinally, the proposed E 2 STR effectively captures contextual information from the in-context prompts and performs rapid adaptation in various novel scenarios in a trainingfree manner (Please refer to Section 4.2). On common benchmarks, E 2 STR achieves SOTA results, with an average improvement of 0.8% over previous methods and 1.1% over itself without ICL. Most importantly, when evaluated on unseen domains, E 2 STR achieves impressive performance with only a few prompts, even outperforming the fine-tuning results of SOTA methods by 1.2%. Our contributions are summarized below:\n(1) We propose E 2 STR, a robust STR paradigm that can perform rapid adaptation over diverse scenarios in a training-free manner.\n(2) We provide an in-context training strategy for equipping STR models with ICL capabilities, as well as an incontext inference strategy for STR models to leverage contextual information from in-context prompts.\n(3) We demonstrate that ICL capabilities can be effectively incorporated into regular-sized STR models via appropriate training and inference strategies.\n(4) Extensive experiments show that E 2 STR exceeds state-of-the-art performance across diverse benchmarks, even surpassing the fine-tuned approaches in unseen domains." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Scene Text Recognition", "publication_ref": [ "b10", "b4", "b10", "b39", "b1", "b6", "b35" ], "table_ref": [], "text": "Recent years have witnessed extensive studies in STR, which can be generally divided into Language-free methods and Language-aware methods. Language-free STR. Language-free models directly utilize visual features for prediction, without considering the relationship between the characters. In this branch CTC-based [11] methods [5,24] play the most prominent part. They typically consist of a CNN for feature extraction and an RNN for sequential feature processing, which are trained end-to-end with the CTC loss [11]. Other methods like [23,40] focus on treating STR as a character-level segmentation task. The lack of linguistic information limits the application of language-free methods in scenarios with occluded or incomplete characters. Language-aware STR. Language-aware models leverage linguistic information to assist the recognition, typically utilizing an external language model (LM) [10,44] or training internal LMs [2,7,36]. SRN [44] and ABINet [10] feed visual predictions to an external LM for linguistic refinement. The direct application of an external LM without considering visual features leads to possible erroneous correction. On the other hand, methods like PARSeq [3] and MAERec [17] implicitly train an internal LM in an auto-regressive manner, which have achieved decent performance. In this paper we base our model on the languageaware design, training a transformer-based language decoder inner-connected with the vision encoder." }, { "figure_ref": [], "heading": "Multi-Modal In-Context Learning", "publication_ref": [ "b46", "b14", "b0", "b20", "b7" ], "table_ref": [], "text": "Recent large language models (LLMs) [6,46] have demonstrated their excellent few-shot adaptation capabilities. 
By concatenating a few examples with the input as the prompt at inference time, LLMs quickly adapt to novel tasks without parameter updating. This phenomenon introduces a novel learning paradigm termed \"In-Context Learning\". Meanwhile, unlike LLMs, vision-language models (VLMs) struggle to understand complex multi-modal prompts [47]. A large set of approaches [13-15, 34] have been proposed to empower VLMs with multi-modal in-context learning (M-ICL) capabilities, but they typically utilize vision models (like image caption models) to translate images to text [15,34,43], or view the LLM as a scheduler that learns to call vision experts based on a few examples [13]. These approaches do not truly establish a VLM with M-ICL capabilities. Recently, several works [1,21,38] have proposed to train VLMs with sequential multi-modal data, and have achieved great success in prompting VLMs with multi-modal examples. In this paper, we aim to train a scene text recognizer equipped with M-ICL capabilities based on this sequential training paradigm. We demonstrate that the arbitrary concatenation of scene text fails as stated above, which motivates us to generate context-rich scene text sequences." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Preliminary of Multi-Modal In-Context Learning", "publication_ref": [ "b19" ], "table_ref": [], "text": "Multi-modal in-context learning enables M-LLMs to perform quick adaptation to downstream tasks in a training-free manner, hence eliminating the redundant computation and time expenses of fine-tuning. In this subsection, we introduce how to formulate multi-modal in-context learning for addressing the STR task. For a scene text tuple (x, y), where x is the scene image and y is the ground-truth text, the STR task involves generating the label y by maximizing the conditional probability under the classic auto-regressive paradigm as follows: $p(y|x) = \prod_{l=1}^{L} p(y_l | x, y_{<l})$, where $y_l$ is the l-th character in y, $y_{<l}$ is the set of preceding characters, and L is the number of characters in y.\nWhile previous state-of-the-art studies typically need to fine-tune pre-trained models when confronted with novel scenarios [3,17,20], we propose in this study to leverage multi-modal in-context learning to enable STR models to be rapidly adapted across diverse scenarios without fine-tuning. Specifically, we define the probability of generating the target label y for a given image x and the multi-modal context C as follows:\n$p(y|x, C) = \prod_{l=1}^{L} p(y_l \mid \{\underbrace{x^c_1, \cdots, x^c_n}_{\text{vision context}}; x\}, \{\underbrace{y^c_1, \cdots, y^c_n}_{\text{language context}}; y_{<l}\}), \quad (1)$\nwhere the context $C = \{(x^c_1, y^c_1), \cdots, (x^c_n, y^c_n)\}$ is the set of the in-context prompts, $(x^c_i, y^c_i)$ are the scene image and the ground-truth text of the context prompts, and n is the number of context prompts." }, { "figure_ref": [ "fig_4" ], "heading": "Framework Overview and Model Architecture", "publication_ref": [ "b0" ], "table_ref": [], "text": "Our proposed E 2 STR consists of three stages. Firstly, E 2 STR is trained in the standard auto-regressive framework to learn the fundamental STR ability. Secondly, as shown in the top of Figure 3, E 2 STR is further trained based on our proposed In-Context Training paradigm. In this stage E 2 STR learns to understand the connection between different samples, allowing it to profit from in-context prompts. Finally, as shown in the bottom of Figure 3, E 2 STR fetches in-context prompts based on visual similarity during inference, allowing the test sample to absorb context information.\nAs shown in the top of Figure 3, the model architecture of E 2 STR consists of a vision encoder and a language decoder. The vision encoder receives image inputs and the language decoder processes text inputs in an auto-regressive manner. 
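As a concrete illustration of how a fixed number of learned query tokens can keep the vision sequence length constant when several prompt images are encoded (the mechanism described in the next paragraph), here is a minimal PyTorch sketch. The dimensionality, number of queries and module layout are assumptions, not the published E 2 STR configuration.

```python
import torch
import torch.nn as nn

class QueryResampler(nn.Module):
    """Compress a variable-length vision token sequence into a fixed number of tokens."""
    def __init__(self, dim=1024, num_queries=32, num_heads=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)  # learned query tokens
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vision_tokens):                 # (B, N_vis, dim), N_vis grows with #images
        q = self.queries.unsqueeze(0).expand(vision_tokens.size(0), -1, -1)
        out, _ = self.attn(q, vision_tokens, vision_tokens)  # cross-attend queries -> vision tokens
        return self.norm(out)                          # (B, num_queries, dim), fixed length

feats = torch.randn(2, 3 * 197, 1024)                  # e.g. 3 prompt images of 197 ViT tokens each
print(QueryResampler()(feats).shape)                   # torch.Size([2, 32, 1024])
```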
Following [1], a set of cross attention layers are utilized to bridge the output tokens of the vision encoder and the language decoder. Under the ICL framework, the vision encoder receives numerous images as input. To control the length of the vision token sequence, a fixed number of query tokens are learned by performing cross attention against the output tokens of the vision encoder." }, { "figure_ref": [], "heading": "Training Strategy", "publication_ref": [], "table_ref": [], "text": "Our training process is split into two stages: vanilla STR training and in-context STR training." }, { "figure_ref": [], "heading": "Vanilla Scene Text Recognition Training", "publication_ref": [], "table_ref": [], "text": "The first training phase seeks to provide E 2 STR with the fundamental skills in STR. For a scene text tuple (x, y), the input to the vision encoder is x and the initial input to the language decoder is a start token </s>. The training in this phase makes use of the next-token prediction loss:\n$\mathcal{L} = \mathbb{E}_{(x,y) \sim D} \left[ - \sum_{l=1}^{L} \log p(y_l | y_{<l}, x) \right], \quad (2)$\nwhere D is the training set." }, { "figure_ref": [ "fig_4", "fig_6" ], "heading": "In-Context Training", "publication_ref": [ "b0", "b3" ], "table_ref": [], "text": "The objective of the in-context training phase is to equip E 2 STR with the capability of In-Context Learning. As depicted in the top of Figure 3, the model is trained with context-rich scene text sequences as stated before. In these sequences, we interleave a placeholder </i> in the text for each image. This serves to make the language decoder distinguish between different samples, following [1]. In this stage, we propose two strategies to generate context-rich scene text sequences: the Split Strategy and the Transform Strategy (the ST strategy). The Split Strategy. As shown in Figure 4 (a), when presented with a training tuple (x, y), we split the sample, hence generating a set of \"sub-samples\". It is evident that the sub-samples exhibit a strong connection to the original training sample. Furthermore, the sub-samples themselves demonstrate interconnectivity as they overlap with one another. Next, we proceed to concatenate the sub-samples with (x, y) and additional randomly selected samples to form a context-rich sample sequence. We randomly shuffle the whole sequence before generating the actual input text (i.e., interleaving the </i> token into the text sequence).\nIn practice, to accurately split the training samples, we synthesize 600k scene text images based on [4] and record the accurate bounding boxes of every single character. Our subsequent experiments show that the synthesized data does not change E 2 STR's non-context text recognition ability, but the Split Strategy based on them equips E 2 STR with a strong capability of in-context learning. The Transform Strategy. As shown in Figure 4 (b), given a training tuple (x, y) (whether with character-wise bounding boxes or not), we perform data augmentation (a set of image transformations, e.g., color/direction transformations) on x. In this way, we also generate a set of sub-samples with the same label but different image patterns from the original sample. In practice, as depicted in Figure 4 (c), we hybridize the above strategies. The training set is formed by concatenating the synthesized data and the original training data used in the first training phase. For the synthesized data with character-wise bounding boxes, both the Split Strategy and the Transform Strategy are utilized. For the original training data, only the Transform Strategy is implemented.\nFinally, after generating the sample sequence (X, Y), where X is the image sequence and Y is the text sequence, X is fed into the vision encoder, while Y is processed by the language decoder under the auto-regressive framework. The loss function is formulated as:\n$\mathcal{L}_{(X,Y)} = - \sum_{l=1}^{L} \log p(Y_l | Y_{<l}, X_{\le l}), \quad (3)$\nwhere $X_{\le l}$ is the set of image tokens preceding token $Y_l$ in the input sequence." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "In-Context Inference", "publication_ref": [], "table_ref": [], "text": "The In-Context Learning ability is acquired by our E 2 STR model through the above two-stage training approach. As shown in the bottom of Figure 3, when presented with a test image x, the framework selects N samples $\{(x^c_i, y^c_i)\}_{i=1}^{N}$ from an in-context pool $D^c$. The selected samples have the highest visual similarities to x in the latent space. Specifically, we calculate the image embedding I of x by averaging the visual token sequence Encoder(x). The in-context prompts are then formed by choosing N samples from $D^c$, where the image embeddings of these samples have the top-N highest cosine similarity with I, i.e.,\n$\mathcal{I} = \operatorname{argTopN}_{i \in 1, 2, \cdots, |D^c|} \frac{I^{T} I^c_i}{\|I\|_2 \|I^c_i\|_2}, \quad (4)$\nwhere $\mathcal{I}$ is the index set of the top-N similar samples in $D^c$, and $I^c_i$ is the image embedding of the i-th sample in $D^c$. The in-context prompts are then defined as:\n$E = \{(x^c_i, y^c_i) \mid i \in \mathcal{I}\}. \quad (5)$\nAs shown in the bottom of Figure 3, E is concatenated with the test sample x and our in-context prediction is formulated as p(y|E, x). In practice, the in-context pool $D^c$ retains solely the output tokens generated by the vision encoder, resulting in a highly efficient selection process. Furthermore, because the in-context pool is tiny and we do straight inference without training, the extra consumption is kept to a minimum (please refer to Section 4.3). However, by fetching in-context prompts and exploiting in-context information, E 2 STR-ICL achieves an average word accuracy of 91.33%, which is 1.08% higher than E 2 STR-base and 0.83% higher than MAERec. Please note that this improvement is automatic and training-free. Specifically, on the six traditional STR benchmarks (i.e., IIIT, SVT, IC13, IC15, SVTP, and CT80), which have nearly reached saturation in recent years [17], E 2 STR still pushes the performance limit from 97.02% to 97.74%, leading to a 24% error rate decrease. On the 6 larger and harder STR benchmarks (i.e., COCO Text, CTW, TT, HOST, and WOST), E 2 STR-ICL outperforms MAERec by 0.94%." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Results on Cross Domain Scenarios", "publication_ref": [ "b19", "b19" ], "table_ref": [ "tab_3", "tab_3" ], "text": "We compare with SOTA methods on cross domain benchmarks. Two novel scenarios are introduced: the industrial scenario (MPSC and EIST) and the handwriting scenario (IAM). In each dataset, only 100 training samples are provided. For E 2 STR-ICL we simply use the training samples as the in-context pool. (Figure 5. Comparison with the fine-tuned models. We report the average performance on three cross-domain datasets. Please note that ABINet [10], SATRN [20] and MAERec [17] are fine-tuned with the in-domain data, while our E 2 STR-ICL is training-free.) We compare the training-free results in Table 2 and the fine-tuning results in Figure 5.\nAs we can see, on both industrial and handwriting scenarios our E 2 STR-ICL reaches SOTA performance. As shown in Table 2, under the training-free constraint E 2 STR-ICL reaches an average performance of 78.17%, which is 4.69% higher than E 2 STR-base and 4.03% higher than the SOTA method MAERec. 
Specifically, on EIST and IAM the application of ICL brings a huge improvement of 7.11% and 4.59%, which demonstrates the extraordinary adaptation ability of E 2 STR-ICL.\nWe further compare the fine-tuned methods and our E 2 STR-ICL. We fine-tune ABINet [10], SATRN [20] and MAERec [17] with the same data preserved in the incontext pool. As shown in Figure 5, E 2 STR-ICL outperforms MAERec by 1.16% even if the latter is fine-tuned with in-domain data, which is an exciting result given that E 2 STR-ICL requires no parameter updating. In a word, our E 2 STR can be rapidly implemented in a training-free manner in various novel scenarios and even achieves better per- formance than the fine-tuned SOTA methods." }, { "figure_ref": [ "fig_7", "fig_7" ], "heading": "Results on Hard Case Rectification", "publication_ref": [], "table_ref": [], "text": "We demonstrate the rectification ability of E 2 STR, which can handle hard cases in STR conveniently and effectively, in a training-free manner. Specifically, we define \"hard cases\" as the scene text samples that are wrongly recognized by both E 2 STR-base and the SOTA method MAERec.\nA small number of hard cases are then annotated, and we study how the model can benefit from the annotated hard cases and decrease the error rate of the rest hard cases. how the test sample learns from context. Shown in Figure 8, we select one context prompt for the test sample, and study the model pays attention to which region of the context image. This is achieved by collecting the attention maps between the language tokens and the image features. As we can see, when the language tokens pay close attention to the misrecognized image region, they also focus on the context image region which has similar patterns. For example, on the last row of Figure 8, E 2 STR misrecognized the test image as \"simplest\" without context. By providing a context prompt \"Display\", one language token focuses on the \"la\" region of both images, which have similar image patterns." }, { "figure_ref": [], "heading": "Shown in", "publication_ref": [], "table_ref": [], "text": "Finally, E 2 STR rectified the misrecognized \"e\" to \"a\" with the help of context ground-truth \"la\" of the focused region." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "There are two limitations in our study. Firstly, there is a very slim chance that E 2 STR-ICL erroneously rectifies predictions due to misleading prompts (please refer to supplementary materials). Additionally, our model still could not recognize characters that are not included in the lexicon." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose E " }, { "figure_ref": [ "fig_9" ], "heading": "Visualization", "publication_ref": [], "table_ref": [], "text": "We provide more examples of the cross attention visualization in Figure 11. " }, { "figure_ref": [], "heading": "Non-Context Prediction In-Context Prompt In-Context Prediction", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Multi-modal In-Context Learning Makes an Ego-evolving Scene Text Recognizer", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Model Architecture", "publication_ref": [ "b0" ], "table_ref": [], "text": "Figure 9 presents the detailed model architecture of E 2 STR. 
We follow the paradigm established by Flamingo [1], where we perform cross attention between the vision outputs and the language outputs in each language model layer. The language outputs serve as queries and the vision outputs serve as keys and values. The detailed configures of the vision encoder and the language decoder are summarized in Table 7. For fair comparison, we provide MAERec [17] with the same language decoder with E 2 STR-ICL (We name this modification as MAERec † ). The comparison between MAERec † and E 2 STR is shown in Table 8. Table 10 presents the inference time change brought by different sizes of the in-context pool. As we can see, when expanding the pool size by 4 times (i.e., from 100 to 500), the inference time is only increased by 0.07 times (i.e., from 0.094 to 0.101). As a result, our E 2 STR-ICL is highly scalable in terms of both in-context pool size and the number of in-context prompts." }, { "figure_ref": [], "heading": "Model Scalability", "publication_ref": [], "table_ref": [], "text": "Pool Size 100 200 300 400 500 Inference Time (s) 0.094 0.096 0.097 0.099 0.101 " }, { "figure_ref": [], "heading": "Model Stability", "publication_ref": [], "table_ref": [], "text": "Table 11 presents how the performance change when varying the domains of the in-context pool. As we can see, our E 2 STR-ICL is stable to the change of the context prompts. On all three benchmarks, out-of-domain in-context pools still improve the performance, though the improvement is lower than in-domain in-context pools. Nevertheless, there still exists a very slim chance that E 2 STR-ICL erroneously rectifies predictions due to misleading prompts. Shown in Figure 10, when certain areas of the prompt image is highly" } ]
Scene text recognition (STR) in the wild frequently encounters challenges when coping with domain variations, font diversity, shape deformations, etc. A straightforward solution is performing model fine-tuning tailored to a specific scenario, but it is computationally intensive and requires multiple model copies for various scenarios. Recent studies indicate that large language models (LLMs) can learn from a few demonstration examples in a trainingfree manner, termed "In-Context Learning" (ICL). Nevertheless, applying LLMs as a text recognizer is unacceptably resource-consuming. Moreover, our pilot experiments on LLMs show that ICL fails in STR, mainly attributed to the insufficient incorporation of contextual information from diverse samples in the training stage. To this end, we introduce E 2 STR, a STR model trained with context-rich scene text sequences, where the sequences are generated via our proposed in-context training strategy. E 2 STR demonstrates that a regular-sized model is sufficient to achieve effective ICL capabilities in STR. Extensive experiments show that E 2 STR exhibits remarkable training-free adaptation in various scenarios and outperforms even the fine-tuned stateof-the-art approaches on public benchmarks. The code is released at https://github.com/bytedance/E2STR.
Multi-modal In-Context Learning Makes an Ego-evolving Scene Text Recognizer
[ { "figure_caption": "Figure 1 .1Figure 1. Demonstration of real-world scene text scenarios and the adaptation pipeline. (a) Diversified scenarios of scene text in the real world. (b) The adaptation pipeline of current methods. They typically have to fine-tune upon a trained STR model with the training set, under a specific scenario. (c) The adaptation pipeline of our proposed E 2 STR. Our method automatically selects in-context prompts and performs training-free adaptation when faced with novel scenarios. Blue characters denote ambiguous scene text that is easily misrecognized.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Our pilot experiments. (a) The randomly concatenated scene text sequence. (b) Our proposed context-rich scene text sequence. (c) By training an STR model based on the randomly concatenated scene text sequence, we evaluate the model on three cross-domain datasets.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "(c)). The major cause of this failure is the lack of context in the generated scene text sequences during the training phase. The arbitrary concatenation of scene text does not provide any contextual information (i.e., sample connections) between different samples (Figure 2 (a)). Consequently, the model lacks the ability to effectively use information derived from in-context prompts(Figure 2 (c)), which implies that in-context training is essentially important for the effective implementation of ICL in STR.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 , E 232STR is further trained based on our proposed In-Context Training paradigm. In this stage E 2 STR learns to understand the connection between different samples, allowing it to profit from in-context prompts. Finally, as shown in the bottom of Figure 3, E 2 STR fetches in-context prompts based on visual similarity during inference, allowing the test sample to absorb context information.", "figure_data": "", "figure_id": "fig_3", "figure_label": "32", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Pipeline of our E 2 STR. Top: E 2 STR is trained with our in-context training strategy to obtain the ICL capability. Down: During inference, E2 STR selects in-context prompts based on a kNN strategy, then the test sample grasps context information from the prompts to assist the recognition. Specifically, the ambiguous character \"a\" in the test sample is easily misrecognized as \"q\". With the vision-language context produced by the in-context prompts (i.e., \"a\" in the first in-context prompt), E 2 STR rectifies the result. Note that in practice the in-context pool maintains image tokens and thus does not need to go through the vision encoder.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Illustration of the split strategy, the transform strategy, and how we hybrid them in practice.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Cross attention visualization between the language tokens and the vision tokens. Left: Non-context prediction of E 2 STR. Error characters are marked in red. Right: In-context prediction of E 2 STR-ICL, where only one in-context prompt is selected. 
We visualize how the language tokens attend to the prompt image and the test image.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Table 12 .12Figure 10. Examples of erroneous rectification brought by misleading prompts.", "figure_data": "", "figure_id": "fig_8", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 11 .11Figure 11. More examples of the cross attention visualization.", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "2 ", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "and record the accurate bounding boxes of every single character. Our subsequent experiments show that the synthesized data does not change E 2 STR's non-context text recognition ability, but the Split Strategy based on them equips E 2 STR with a strong capability of in-context learning. The Transform Strategy. As shown in Figure4(b), given a training tuple (x, y) (whether with character-wise bounding boxes or not), we perform data augmentation (a set of image transformations, e.g., color/direction transformations) on x. In this way, we also generate a set of sub-samples with the same label but different image patterns from the original sample.In practice, as depicted in Figure4(c), we hybrid the above strategies. The training set is formed by concatenating the synthesized data and the original training data used in the first training phase. For the synthesized data with character-wise bounding boxes, both the Split Strategy and the Transform Strategy are utilized. For the original training data, only the Transform Strategy is implemented.", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Results on common benchmarks. All methods are trained on the same dataset except for PARSeq. *: PARSeq is trained on its self-collected real-world dataset and we directly quote the results from its original paper. Red and blue values denote the best and the secondary performance. E 2 STR-base refers to non-context inference.", "figure_data": "4.1. Experimental SetupImplementation Details. Following MAERec [17], wechoose Vision Transformer [9] pre-trained under the MAE[16] framework as the vision encoder. The default languagedecoder is set as OPT-125M [46]. We use the cosine learn-ing rate scheduler without warm-up and the AdamW op-timizer with a weight decay of 0.01. We train our modelfor 10 epochs with an initial learning rate of 1e-4 duringthe first training stage, and 5 epochs with an initial learningrate of 5e-6 during the second in-context training stage. Thetraining batch size is 64 for the first stage and 8 for the sec-ond stage. During inference for E un-der various publicly available benchmarks, including Regu-lar Benchmarks IIIT5k [29], SVT [37], IC13 [18], IrregularBenchmarks IC15 [19], SVTP [31], CUTE80 (CT80) [32],COCO Text (COCO) [39], CTW [25], Total Text (TT) [8],Occluded Benchmarks OST (HOST and WOST) [41] andartistic benchmark WordArt [42]. In cross domain scenariosthe evaluated datasets including the metal-surface bench-mark MPSC [12] and the handwriting benchmark IAM [27],as well as a more difficult real-world industrial text recog-nition dataset EIST (Enhanced Industrial Scene Text) col-lected by us. EIST is collected from the real-world in-dustrial scenario, which contains 200 training samples and8000 test samples. 
We use Word Accuracy[17] as the eval-uation metric for all compared methods.4.2. Main Results4.2.1 Results on Common BenchmarksTable 1 presents the performance of E 2 STR on com-mon benchmarks. We evaluate E 2 STR on 12 commonlyused STR benchmarks and compare with SOTA meth-ods. E 2 STR-base refers to non-context prediction withoutprompts. For E", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results on cross domain scenarios. Three datasets under two unseen domains are evaluated. All approaches are evaluated in a training-free manner.", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results on hard case rectification. \"Hard Cases\" are test samples misrecognized by both MAERec [17] and our E 2 STRbase. By providing annotation of a small part of the hard cases, we compare the performance increase in the rest test samples between the fine-tuned MAERec and our E 2 STR-ICL.", "figure_data": "COCOHOSTWordArtannotation rate 10%20%10%20%10%20%MAERec [17]000000w/ fine-tuning0.821.671.031.721.342.23E 2 STR-base000000E 2 STR-ICL10.12 12.92 12. 43 13.76 26.22 32.02Training TaskWord AccuracyVT TS SSNon-Context In-Context✓69.6926.82✓✓69.8075.60✓✓69.6673.09✓✓✓69.6676.77", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation of our proposed training strategies, where VT, TS, and SS refer to vanilla STR training, the Transform Strategy, and the Split Strategy. The experiment is performed on EIST.", "figure_data": "", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "", "figure_data": ", we perform experiments on COCO Text,HOST, and WordArt. As we can see, by annotating 10%to 20% of the hard cases, E 2 STR-ICL decreases the errorrate of the rest hard cases by up to 32%. This improve-ment is achieved by simply putting the annotated hard casesinto the in-context pool, without any hassle of re-trainingthe model. By comparison, by fine-tuning on the annotatedhard cases, MAERec only decreases the error rate by up to2.23%, completely incomparable to our E 2 STR-ICL. As aresult, E 2 STR-ICL can rapidly learn from hard cases andimprove the performance in a training-free manner, whileSOTA methods like MAERec can hardly benefit from hardsamples even with fine-tuning.4.3. Ablation StudiesImpact of Split-and-Transform Training Strategies. Weperform an experiment to show the effectiveness of our pro-posed Split Strategy and Transform Strategy. Shown in", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "the vanilla STR training brings a word accuracy of 69.69%, but the model cannot understand context information, and the performance even severely decreases to 26.82% when provided with in-context prompts. The application of the Transform Strategy and the Split Strategy in the second training stage does not increase the non-context performance (concerning that the synthesized data is typically weaker than the real-world data used in the vanilla training stage), but the model learns to profit from context, and the performance is improved to 75.60% and 73.09% respectively when provided with in-context prompts. Finally, the hybrid of the above two strategies further enhances the ICL ability, and the performance reaches 76.77%. Figure 7. The performance change brought by different sizes of the in-context pool. 
The X-axis is the size of the in-context pool and the Y-axis is the word accuracy results.", "figure_data": "Impact of Nearest Neighbor In-Context Prompt Selec-tion. In Section 3.4 we propose to select samples mostsimilar to the test image in the latent space based on thekNN strategy. Here we demonstrate the effectiveness ofthis strategy by comparing the performance to Random Se-lection, i.e., randomly selecting in-context prompts fromthe in-context pool. Shown in Figure 6, on all three eval-uated datasets, random selection can improve the perfor-mance of non-context prediction by a small margin, butis far from comparing with kNN selection. Specifically,on EIST random selection improves the performance ofnon-context from 69.66% to 70.65%, while kNN selectionreaches 76.77% word accuracy under the same condition.Impact of In-Context Pool Size. We next study the impactof varying the size of the in-context pool. Shown in Figure7, we perform experiments on IAM, EIST, and MPSC, byvarying the number of samples maintained in the in-contextpool. As we can see, in general, the larger in-context poolbrings about better performance, and this improvement ef-fect weakens as the pool continually expands. To be spe-cific, on IAM the word accuracy is increased from 69.51%to 74.10% (4.59% improvement) when the pool size is 100,while it only increases the performance from 74.10% to75.50% (1.40% improvement) when the pool is expandedwith another 100 samples. The above fact implies that a", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The performance change brought by the different number of in-context prompts.", "figure_data": "MAERec [17] E 2 STR-base E 2 STR-ICLInference Time (s)0.0920.0710.094", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparison of the mean inference time of each test sample. All results are reported under the same hardware environment. Impact of the Number of In-Context prompts. We analyze the influence of the number of in-context prompts. Shown in Table5, the experiment is performed on HOST, ToTal Text, and WordArt. Similar to the in-context pool size, the increase in the number of in-context prompts also generally boosts the performance of E 2 STR-ICL. However, as we can see, one to two in-context prompts are adequate for improving the performance by a large margin, and the further increase of in-context prompts brings about a limited improvement. This phenomenon is possibly caused by the fact that usually only a few characters are wrongly recognized for a bad case, which can be rectified by the context information from one or two in-context prompts. Computational Complexity. We experimentally compare the inference speed of E 2 STR and MAERec[17]. Shown in Table6, the inference speed of E 2 STR-ICL is on par with MAERec. Compared to E 2 STR-base, the in-context prompts of E 2 STR-ICL bring extra consumption, but this leads to a limited inference time increase (i.e., from 0.071 to 0.094). It makes sense since we only maintain the visual tokens in the in-context pool and directly feed the visual tokens of the selected prompts to the language model. Visualization and Further Analysis. We further study", "figure_data": "Non-Context PredictionIn-Context PromptIn-Context Prediction", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "2 STR, an ego-evolving scene text recognizer equipped with in-context learning capabilities. 
Through our proposed in-context training strategy incorporating context-rich scene text sequences, E 2 STR performs rapid adaptation across diverse scenarios without additional fine-tuning. Extensive experiments demonstrate that E 2 STR not only achieves SOTA performance on com-mon STR benchmarks but also outperforms even the approaches that have been fine-tuned specifically for crossdomain scenarios. The model's ability to easily and effectively handle difficult text cases further underscores its potential as a unified text recognizer. Overall, this research represents a significant step toward efficient and highly adaptive text recognition models well-suited for diverse real-world applications. and Baobao Chang. Mmicl: Empowering vision-language model with multi-modal in-context learning. arXiv preprint arXiv:2309.07915, 2023. 3 base 81.22 69.78 69.62 73.54 ICL 82.06 70.95 71.00 74.67 ST 131.2 base 81.26 69.66 69.51 73.48 ICL 83.64 76.77 74.10 78.17", "figure_data": "Training GPU HoursMPSC EIST IAM AVGkNN415.6", "figure_id": "tab_10", "figure_label": "", "figure_type": "table" } ]
Zhen Zhao; Jingqun Tang; Chunhui Lin; Binghong Wu; Can Huang; Hao Liu; Xin Tan; Zhizhong Zhang; Yuan Xie
[ { "authors": "Jean-Baptiste Alayrac; Jeff Donahue; Pauline Luc; Antoine Miech; Iain Barr; Yana Hasson; Karel Lenc; Arthur Mensch; Katherine Millican; Malcolm Reynolds", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b0", "title": "Flamingo: a visual language model for few-shot learning", "year": "2022" }, { "authors": "Jeonghun Baek; Geewook Kim; Junyeop Lee; Sungrae Park; Dongyoon Han; Sangdoo Yun; Seong Joon Oh; Hwalsuk Lee", "journal": "", "ref_id": "b1", "title": "What is wrong with scene text recognition model comparisons? dataset and model analysis", "year": "2019" }, { "authors": "Darwin Bautista; Rowel Atienza", "journal": "Springer", "ref_id": "b2", "title": "Scene text recognition with permuted autoregressive sequence models", "year": "2022" }, { "authors": " Belval", "journal": "", "ref_id": "b3", "title": "Generator", "year": "" }, { "authors": "Fedor Borisyuk; Albert Gordo; Viswanath Sivakumar", "journal": "", "ref_id": "b4", "title": "Rosetta: Large scale system for text detection and recognition in images", "year": "2018" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b5", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Zhanzhan Cheng; Fan Bai; Yunlu Xu; Gang Zheng; Shiliang Pu; Shuigeng Zhou", "journal": "", "ref_id": "b6", "title": "Focusing attention: Towards accurate text recognition in natural images", "year": "2017" }, { "authors": "Kheng Chee; ' Ch; Chee Ng; Chan Seng", "journal": "IEEE", "ref_id": "b7", "title": "Total-text: A comprehensive dataset for scene text detection and recognition", "year": "2017" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b8", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Shancheng Fang; Hongtao Xie; Yuxin Wang; Zhendong Mao; Yongdong Zhang", "journal": "", "ref_id": "b9", "title": "Read like humans: Autonomous, bidirectional and iterative language modeling for scene text recognition", "year": "2021" }, { "authors": "Alex Graves; Santiago Fernández; Faustino Gomez; Jürgen Schmidhuber", "journal": "", "ref_id": "b10", "title": "Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks", "year": "2006" }, { "authors": "Tongkun Guan; Chaochen Gu; Changsheng Lu; Jingzheng Tu; Qi Feng; Kaijie Wu; Xinping Guan", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b11", "title": "Industrial scene text detection with refined feature-attentive network", "year": "2022" }, { "authors": "Tanmay Gupta; Aniruddha Kembhavi", "journal": "", "ref_id": "b12", "title": "Visual programming: Compositional visual reasoning without training", "year": "2023" }, { "authors": "Jiabang He; Lei Wang; Yi Hu; Ning Liu; Hui Liu; Xing Xu; Heng Tao Shen", "journal": "", "ref_id": "b13", "title": "Icl-d3ie: In-context learning with diverse demonstrations updating for document information extraction", "year": "2023" }, { "authors": "Jiabang He; Lei Wang; Yi Hu; Ning Liu; Hui Liu; Xing Xu; Heng Tao Shen", "journal": "", "ref_id": "b14", "title": "Icl-d3ie: In-context learning with diverse 
demonstrations updating for document information extraction", "year": "" }, { "authors": "Kaiming He; Xinlei Chen; Saining Xie; Yanghao Li; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b15", "title": "Masked autoencoders are scalable vision learners", "year": "2022" }, { "authors": "Qing Jiang; Jiapeng Wang; Dezhi Peng; Chongyu Liu; Lianwen Jin", "journal": "", "ref_id": "b16", "title": "Revisiting scene text recognition: A data perspective", "year": "2008" }, { "authors": "Dimosthenis Karatzas; Faisal Shafait; Seiichi Uchida; Masakazu Iwamura; Lluis Gomez I Bigorda; Sergi Robles Mestre; Joan Mas; David Fernandez Mota; Jon Almazan Almazan; Lluis Pere De; Las Heras", "journal": "IEEE", "ref_id": "b17", "title": "Icdar 2013 robust reading competition", "year": "2013" }, { "authors": "Dimosthenis Karatzas; Lluis Gomez-Bigorda; Anguelos Nicolaou; Suman Ghosh; Andrew Bagdanov; Masakazu Iwamura; Jiri Matas; Lukas Neumann; Vijay Ramaseshan Chandrasekhar; Shijian Lu", "journal": "IEEE", "ref_id": "b18", "title": "Icdar 2015 competition on robust reading", "year": "2015" }, { "authors": "Junyeop Lee; Sungrae Park; Jeonghun Baek; Joon Seong; Seonghyeon Oh; Hwalsuk Kim; Lee", "journal": "", "ref_id": "b19", "title": "On recognizing texts of arbitrary shapes with 2d self-attention", "year": "2020" }, { "authors": "Bo Li; Yuanhan Zhang; Liangyu Chen; Jinghao Wang; Jingkang Yang; Ziwei Liu", "journal": "", "ref_id": "b20", "title": "Otter: A multi-modal model with in-context instruction tuning", "year": "2023" }, { "authors": "Hui Li; Peng Wang; Chunhua Shen; Guyu Zhang", "journal": "", "ref_id": "b21", "title": "Show, attend and read: A simple and strong baseline for irregular text recognition", "year": "2019" }, { "authors": "Minghui Liao; Jian Zhang; Zhaoyi Wan; Fengming Xie; Jiajun Liang; Pengyuan Lyu; Cong Yao; Xiang Bai", "journal": "", "ref_id": "b22", "title": "Scene text recognition from two-dimensional perspective", "year": "2019" }, { "authors": "Wei Liu; Chaofeng Chen; Kwan-Yee K Wong; Zhizhong Su; Junyu Han", "journal": "", "ref_id": "b23", "title": "Star-net: a spatial attention residue network for scene text recognition", "year": "2016" }, { "authors": "Yuliang Liu; Lianwen Jin; Shuaitao Zhang; Canjie Luo; Sheng Zhang", "journal": "Pattern Recognition", "ref_id": "b24", "title": "Curved scene text detection via transverse and longitudinal sequence connection", "year": "2019" }, { "authors": "Mengkai Ma; Qiu-Feng Wang; Shan Huang; Shen Huang; Yannis Goulermas; Kaizhu Huang", "journal": "Neurocomputing", "ref_id": "b25", "title": "Residual attentionbased multi-scale script identification in scene text images", "year": "2021" }, { "authors": "U-V Marti; Horst Bunke", "journal": "International Journal on Document Analysis and Recognition", "ref_id": "b26", "title": "The iam-database: an english sentence database for offline handwriting recognition", "year": "2002" }, { "authors": "Qiang Mei; Qinyou Hu; Chun Yang; Hailin Zheng; Zhisheng Hu", "journal": "IEEE Access", "ref_id": "b27", "title": "Port recommendation system for alternative container port destinations using a novel neural languagebased algorithm", "year": "2020" }, { "authors": "Anand Mishra; Karteek Alahari; Jawahar", "journal": "IEEE", "ref_id": "b28", "title": "Top-down and bottom-up cues for scene text recognition", "year": "2012" }, { "authors": "Ouali Imene; Mohamed Ben Halima; Ali", "journal": "Procedia Computer Science", "ref_id": "b29", "title": "Augmented reality for scene text recognition, visualization and 
reading to assist visually impaired people", "year": "2022" }, { "authors": "Trung Quy Phan; Palaiahnakote Shivakumara; Shangxuan Tian; Chew Lim; Tan ", "journal": "", "ref_id": "b30", "title": "Recognizing text with perspective distortion in natural scenes", "year": "2013" }, { "authors": "Anhar Risnumawan; Palaiahankote Shivakumara; Chee Seng Chan; Chew Lim; Tan ", "journal": "Expert Systems with Applications", "ref_id": "b31", "title": "A robust arbitrary text detection system for natural scene images", "year": "2014" }, { "authors": "Abdul Khader; Jilani Saudagar; Habeebvulla Mohammad", "journal": "Journal of Statistics and Management Systems", "ref_id": "b32", "title": "Augmented reality mobile application for arabic text extraction, recognition and translation", "year": "2018" }, { "authors": "Zhenwei Shao; Zhou Yu; Meng Wang; Jun Yu", "journal": "", "ref_id": "b33", "title": "Prompting large language models with answer heuristics for knowledge-based visual question answering", "year": "2023" }, { "authors": "Fenfen Sheng; Zhineng Chen; Bo Xu", "journal": "IEEE", "ref_id": "b34", "title": "Nrtr: A norecurrence sequence-to-sequence model for scene text recognition", "year": "2019" }, { "authors": "Baoguang Shi; Mingkun Yang; Xinggang Wang; Pengyuan Lyu; Cong Yao; Xiang Bai", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b35", "title": "Aster: An attentional scene text recognizer with flexible rectification", "year": "2018" }, { "authors": "Cunzhao Shi; Chunheng Wang; Baihua Xiao; Song Gao; Jinlong Hu", "journal": "Pattern Recognition", "ref_id": "b36", "title": "End-to-end scene text recognition using treestructured models", "year": "2014" }, { "authors": "Maria Tsimpoukelli; Jacob L Menick; Serkan Cabi; Oriol Eslami; Felix Vinyals; Hill", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b37", "title": "Multimodal few-shot learning with frozen language models", "year": "2021" }, { "authors": "Andreas Veit; Tomas Matera; Lukas Neumann; Jiri Matas; Serge Belongie", "journal": "", "ref_id": "b38", "title": "Coco-text: Dataset and benchmark for text detection and recognition in natural images", "year": "" }, { "authors": "Zhaoyi Wan; Minghang He; Haoran Chen; Xiang Bai; Cong Yao", "journal": "", "ref_id": "b39", "title": "Textscanner: Reading characters in order for robust scene text recognition", "year": "2020" }, { "authors": "Yuxin Wang; Hongtao Xie; Shancheng Fang; Jing Wang; Shenggao Zhu; Yongdong Zhang", "journal": "", "ref_id": "b40", "title": "From two to one: A new scene text recognizer with visual language modeling network", "year": "2021" }, { "authors": "Xudong Xie; Ling Fu; Zhifei Zhang; Zhaowen Wang; Xiang Bai", "journal": "Springer", "ref_id": "b41", "title": "Toward understanding wordart: Corner-guided transformer for scene text recognition", "year": "2022" }, { "authors": "Zhengyuan Yang; Zhe Gan; Jianfeng Wang; Xiaowei Hu; Yumao Lu; Zicheng Liu; Lijuan Wang", "journal": "", "ref_id": "b42", "title": "An empirical study of gpt-3 for few-shot knowledge-based vqa", "year": "2022" }, { "authors": "Deli Yu; Xuan Li; Chengquan Zhang; Tao Liu; Junyu Han; Jingtuo Liu; Errui Ding", "journal": "", "ref_id": "b43", "title": "Towards accurate scene text recognition with semantic reasoning networks", "year": "2020" }, { "authors": "Chongsheng Zhang; Yuefeng Tao; Kai Du; Weiping Ding; Bin Wang; Ji Liu; Wei Wang", "journal": "IEEE Transactions on Artificial Intelligence", "ref_id": "b44", "title": "Character-level 
street view text spotting based on deep multisegmentation network for smarter autonomous driving", "year": "2021" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona Diab; Xian Li; Xi Victoria Lin", "journal": "", "ref_id": "b45", "title": "Opt: Open pre-trained transformer language models", "year": "2022" }, { "authors": "Haozhe Zhao; Zefan Cai; Shuzheng Si; Xiaojian Ma; Kaikai An; Liang Chen; Zixuan Liu; Sheng Wang; Wenjuan Han", "journal": "", "ref_id": "b46", "title": "", "year": "" } ]
[ { "formula_coordinates": [ 3, 308.86, 224.64, 245.36, 45.72 ], "formula_id": "formula_0", "formula_text": "p(y|x, C) = L l=1 p(y l |{x c 1 , • • • , x c n vision context ; x}, {y c 1 , • • • , y c n language context ; y <l }),(1)" }, { "formula_coordinates": [ 3, 308.86, 271.78, 236.25, 22.49 ], "formula_id": "formula_1", "formula_text": "where the context C = {(x c 1 , y c 1 ), • • • , (x c n , y c n )} is the set of the in-context prompts," }, { "formula_coordinates": [ 4, 84.38, 653.22, 198.11, 30.55 ], "formula_id": "formula_2", "formula_text": "L = E (x,y)∼D - L l=1 log p(y l |y <l , x) , (2" }, { "formula_coordinates": [ 4, 282.49, 663.95, 3.87, 8.64 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 5, 89.5, 383.5, 196.87, 30.55 ], "formula_id": "formula_4", "formula_text": "L (X,Y ) = - L l=1 log p(Y l |Y <l , X ≤l ),(3)" }, { "formula_coordinates": [ 5, 253.79, 511.84, 32.07, 12.33 ], "formula_id": "formula_5", "formula_text": "y c i )} N i=1" }, { "formula_coordinates": [ 5, 109.49, 617.93, 173, 26.52 ], "formula_id": "formula_6", "formula_text": "I = argTopN i∈1,2,••• ,|D c | I T I c i ∥I∥ 2 ∥I c i ∥ 2 , (4" }, { "formula_coordinates": [ 5, 282.49, 627.32, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 5, 122.1, 702.12, 164.26, 12.69 ], "formula_id": "formula_8", "formula_text": "E = {(x c i , y c i )|i ∈ I}.(5)" } ]
2023-11-22
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b5", "b16", "b22", "b16", "b41", "b5", "b34", "b42", "b20", "b40", "b16", "b21", "b21", "b23", "b28", "b0", "b27", "b5", "b34", "b1", "b42", "b37", "b14", "b27", "b39", "b14", "b10", "b32", "b36", "b6", "b42", "b14", "b1", "b1", "b11", "b4", "b2", "b15", "b1" ], "table_ref": [], "text": "A major and ongoing thrust of research on Recommendation Systems (RSs) is to leverage side information. Side information, a.k.a, auxiliary data, which refers to non-feedback data associated with users and items [6,17,23], is available in various forms. For example, most RSs provide rich demographic profiles of users [17,42], e.g., gender, location, etc. Numerous item features [6,35,43] can be accumulated on an E-commerce RS, including the item's category, brand, price, textual descriptions, display images, and so on. Users of RSs also generate a wealth of data on item reviews [21,41]. Due to the sparsity of user feedback data, side information has shown great potential in enhancing recommendation performance by complementing user feedback [17,22], especially on cold-start users and items [22]. Furthermore, since side information (e.g., product reviews) explains detailed reasons why a user favors an item, it has been extensively exploited to develop explainable RSs [24,29].\nExisting methods of utilizing side information can be roughly classified into three types: (1) extract and combine the feature from side information by direct addition [28] or an attention mechanism [6,35]; (2) utilize an additional model, e.g., a graph [43] or a hypergraph [38], to capture the relationships between users and items in the side information; (3) incorporate side information as a source of supervision signal, e.g., multi-task learning [15].\nHowever, existing methods can not generalize well to different domains and/or side information. To demonstrate this, we select four recent studies that incorporate side information at feature (i.e., directly summing features [28] and combining features with attention [40]), model (i.e., FREEDOM 5 [43]), and signal (i.e., SPEX [15]) level, and apply them on two backbones, including the commonly adopted LightGCN [11] and the state-of-the-art SimRec [33]. Then, we implement these methods to perform POI recommendations by utilizing social network information on the Gowalla [37] and FourSquare [7] datasets. As shown in Figure 1, the inclusion of various side information in most cases reduces the performance of the backbone model. For example, FREEDOM was designed to utilize item textual and visual descriptions and had gained a 15.65% increase over LightGCN on Amazon Sports dataset as per [43]. But as shown in Figure 1, FREEDOM still can't match SimRec without side information, and even using SimRec as its backbone, FREEDOM is still inferior to SimRec itself. SPEX was designed to utilize social network information to improve social recommendation and had gained a 13.48% increase on the Weibo dataset as reported in [15]. Unfortunately, it has led to a significant performance decrease in backbone Sim-Rec. Furthermore, existing methods with side information are inferior to SimRec, which is based solely on feedback data without side information. 
Therefore, it is critical to develop a generalized model that can utilize various side information to enhance recommendation performance across domains.\nWe argue that the fundamental reason for the malfunction of existing methods is that the side information directly participates in the learning of recommendation. The volume of side information is larger than the number of users/items and the user feedback data. Thus, treating side information as features can overshadow the crucial ID features. For methods that model side information separately, the imbalance of instances results in poorer performance of the recommendation model. Furthermore, some side information, such as the users' social network, is indirectly involved in the recommendation. It is risky to adopt side information as supervision signals because the noisy signals can cause negative transfer in multitasking.\nRecently, pre-training-fine-tuning has achieved great success and has become a standard paradigm in many areas, e.g., Natural Language Processing [2]. The paradigm first learns representations from raw data in a self-supervision manner and then fine-tunes the representations on the downstream tasks. The goal is to benefit target tasks with knowledge acquired in pre-training and mitigate negative transfer.\nIn this paper, we propose to learn user-and item-representations by pretraining on the side information and fine-tuning on the feedback data. We focus on solving three challenges with regard to the generalization of pre-training on side information.\nC1: diversity of format. Common side information includes text sequences (e.g., product reviews), numerical features (e.g., Longitude and Latitude of a POI), categorical features (e.g., item brand), and relation data (e.g., social networks). Previous pre-training methods are based on either sequence models [2] or graph neural networks [12], and they are natural for modeling one format but sub-optimal for others. How to develop a unified pertaining framework to accommodate heterogeneous side information?\nC2: diversity of semantics. Side information describes users and items from multi-levels in a context different from RSs. For example, product reviews provide fine-grained user sentiment on specific items, while social networks express more coarse-grained user preferences (e.g., linked friends may have similar tastes). Prior studies often tailor pre-training tasks to different side information, e.g., link prediction task in the social network learns node semantics from a global perspective of user collaboration [5], contrastive learning tasks with domainspecific data augmentations [3] learns item semantics from a local perspective of textual descriptions, and these pre-training tasks are not robust across domains. How to design the pre-training tasks to integrate complementary information from multiple levels to adapt to diverse side information?\nC3: diversity of correlation. Side information measures similarity/dissimilarity over pairs, triples, or more objects, and these multi-faceted and/or high-order relationships are important. For example, a user's preference is affected by his/her social circle connecting multiple friends (i.e., multi-faceted) and his/her friend's friend (i.e., high-order). Former study emphasizes pair-wise relationships [16] and overlooks the high-order and multi-faceted relationship. 
How to derive a pre-training model that captures complex high-order and multi-faceted relationships as well as the pairwise relationship?\nIn this paper, we propose GENET, meaning Generalized hypErgraph pretraiNing on sidE informaTion. To address C1, GENET is based on a hypergraph, which is suitable to represent heterogeneous side information. To address C2, GENET presents three pre-training tasks, i.e., hyperlink prediction, global contrastive, and local contrastive, to reveal semantic relationships among users/items at different levels and combine fine-grained and coarse-grained representations. To address C3, GENET presents a novel strategy to corrupt the positive sample in the hyperlink prediction task, to increase the robustness of pertaining tasks while preserving the high-order and multi-faceted relation.\nTo verify the generalization of GENET, we perform extensive experiments on two recommendation tasks, i.e., TOP-N recommendation and sequential recommendation, on three different datasets with various side information, including social networks, POI locations, product reviews, and item brands. We compare GENET with 25 existing methods. The experimental results demonstrate that GENET can improve the SOTA SimRec by 5% -15% in terms of NDCG@10 on TopN recommendations, 3% -38% on sequential recommendations. GENET is significantly powerful in cold-start recommendations and increases NDCG@10 of cold-start users by at least 175% on all datasets.\nIn summary, the main contributions are three-fold. (1) A unified model GENET is presented to unleash the power of side information and enhance recommendation performance across different domains. (2) To the best of our knowledge, GENET is also the first pre-training framework on hypergraphs, which encodes high-order and multi-faceted (i.e., beyond pairwise) relationships with selfsupervision. (3) Extensive experiments (i.e., on different real-world datasets, recommendation tasks, and side information) demonstrate that the proposed GENET significantly outperforms the SOTA competitor." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "We briefly introduce relevant studies on recommendation models with side information, pre-training, and hypergraphs." }, { "figure_ref": [], "heading": "Recommendation Models with Side Information", "publication_ref": [ "b5", "b14", "b22", "b35", "b42", "b16", "b5", "b42", "b35", "b14" ], "table_ref": [], "text": "Using side information to improve recommendation has been extensively studied [6,15,23,36,43]. This section discusses three methods of improving recommendation systems using side information. The first method involves combining features from side information, exemplified by HIRE [17] and FDSA [6] which integrate these details in different ways. The second approach employs additional models, like graph models, to capture relationships between users and items, with FREEDOM [43] and Flashback [36] constructing various item graphs. The third method uses side information as a supervisory signal, as seen in SPEX [15] which incorporates social network data through multitask learning. Although these methods show improvements, they may fall short in representing heterogeneous side information and in generalization capabilities." 
}, { "figure_ref": [], "heading": "Recommendation Models with Pre-training", "publication_ref": [ "b32", "b24", "b22", "b41" ], "table_ref": [], "text": "Pre-training in recommendation models is mainly divided into two categories: one based on user-item feedback data and the other on side information. In the first category, models like SimRec [33] and BERT4Rec [25] enhance robustness and efficiency by pre-training on user-item feedback. The second category, exemplified by Graph-Flashback [23] and S 3 -rec [42], employs various pre-training strategies to learn representations from side information, enriching user and item profiles. These methods overlook the diversity of data formats and lose high-order information." }, { "figure_ref": [], "heading": "Recommendation models based on Hypergraph", "publication_ref": [ "b26", "b37" ], "table_ref": [], "text": "Recent studies increasingly focus on using hypergraphs for recommendations. For instance, HyperRec [27] employ hypergraph convolution networks to capture short-term correlations in sequence recommendation tasks and combine prediction tasks with contrastive tasks on hypergraphs. MHCN [38] introduces a multi-channel hypergraph convolutional network to enhance social recommendations. We identify a potential risk in previous methods: the direct use of side information in training recommendation tasks, leading to negative transfer effects.\nOur method leverages pre-trained representations on a hypergraph, integrating diverse side information like sequences and graphs. This strategy alleviates the above issues, may prevent negative transfer effects, and offers a versatile solution for different side information types." }, { "figure_ref": [], "heading": "Pre-training on Side Information", "publication_ref": [], "table_ref": [], "text": "To pre-train on side information, we first construct a hypergraph on the side information (Section 3.1), perform propagation on the hypergraph to obtain node embeddings for users and items (Section 3.2) and optimize the node embeddings via a set of pre-training tasks (Section 3.3 to Section 3.4). A node x i and a hyperedge e p are incident if x i ∈ e p . The incidence matrix H ∈ {0, 1} |X |×|E| describes the structure of G, where h(x i , e p ) = 1 if x i and e p are incident. Two nodes x i , x j are adjacent if there exists at least one hyperedge e p that x i ∈ e p , x j ∈ e p ." }, { "figure_ref": [], "heading": "Construction of Hypergraphs", "publication_ref": [], "table_ref": [], "text": "In constructing hypergraphs (G), we focus on four types of side information: social networks, POI locations, product reviews, and item brands. These encompass various data formats such as relational data, categorical and numerical features, and text sequences, suitable for common recommendation scenarios like POI and e-commerce recommendations.\n- \n#! # \" ## # $ #% #& ) ) → * HSCL GENET-P \"! \"$ \"\" inter&intra neg * → ) #! #\" ## #$ # % #& \"! \" $ \"\"" }, { "figure_ref": [], "heading": "Hypergraph", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2" ], "heading": "Node Representation via Hypergraph Convolution", "publication_ref": [], "table_ref": [], "text": "Input of GENET is a hypergraph G, a sparse matrix of the initial node embeddings\nX 0 ∈ R |X |×|X |\n, where each node is represented by one-hot encoding, the degree matrix of hyperedges D e and the degree matrix of D v . 
LHG Encoder Inspired by the general Hypergraph Neural Network HGNN + [10], we derive the node embeddings by introducing an edge embedding vector for each hyperedge and propagating edge embeddings to incident nodes. As shown in Figure 3(a), the propagation is performed from nodes to hyperedges and from hyperedges to nodes, as shown below:\nE = D e -1 H ⊤ X 0 Θ 0 , X = D v -1 HIE. (1\n)\nwhere E is the edges embedding matrix. Θ is the trainable matrix. X is the nodes embedding matrix. I is an identity matrix. Noted that our encoder is Light HyperGraph encoding (LHG), because different from HGNN + , we remove the non-linear activation function, which may affect the aggregation of information in the hypergraph." }, { "figure_ref": [], "heading": "Hyperlink Prediction", "publication_ref": [ "b17", "b18", "b25" ], "table_ref": [], "text": "One pre-training task on G is hyperlink prediction, i.e., to predict if two nodes are adjacent given a hyperedge. The motivation for using hyperlink prediction is that adjacent nodes in a hyperedge are more likely to be mutually related. We phrase this property as mutual connectivity.\nMutual connectivity highlights the importance of predicting adjacent nodes. We propose the hyperlink prediction task, where given a hyperedge e h , an anchor node x i which is incident with e h , the positive node x j which is also incident with e h is discriminated from a negative node x k which is not incident with e h . Furthermore, we corrupt the representation of the positive sample x j to increase the difficulty of discrimination and the robustness of pre-training tasks.\nExisting graph pre-training methods corrupt node representations by cutting down some edges and creating a subgraph, and then using a graph encoder on the subgraph [18,19] to derive a representation of the target node. This strategy is infeasible in hypergraphs because hyperedges are indescomposable [26]. If we cut down the adjacency between the positive node and the anchor node, the hyperedge is broken, the meaning of the hyperedge is incomplete, and highorder/multi-faceted information is discarded.\nTo avoid disrupting the high-order/multi-faceted information, we propose to corrupt the positive sample by combining direct node perturbation (i.e., NP) and incidence matrix perturbation (IMP).\nNode perturbation randomly draws a node embedding x g j from:\nx g j = N (x j , λI),(2)\nwhere N is the Gaussian distribution with the original node representation as its mean, I is the identity matrix, and λ is the covariance scalar.\nIncidence matrix perturbation. To avoid information leakage, i.e., the node embedding may contain information about the condition hyperedge e h , we remove the connection and rewrite the incidence matrix by letting h(x j , e h ) = 0. The corrupted incidence matrix is denoted as Ĥ. We then propagate the filtered hyperedges back to nodes by:\nx a j = (D v -1 ĤWE) j ,(3)\nwhere (•) j represents the j-th row elements. Then we merge x g i and x a i and obtain the corrupted node representation x j , which is shown below:\nx j = x g j + x a j .(4)\nGiven the hyperedge e h , the anchor node x i , the positive node x j and the negative node x k . We use the ranking loss L P as the training objective of the hyperlink prediction task (GENET-P), which is shown below:\nL P = - i,j,k σ(x i • x j -x i • x k ),(5)\nwhere σ(•) is the sigmoid function." 
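The light hypergraph propagation of Eq. (1), the corrupted-positive construction of Eqs. (2)-(4), and the ranking loss of Eq. (5) can be sketched directly in PyTorch. The snippet below is a simplified illustration under stated assumptions: degrees are clamped to avoid division by zero, the trainable hyperedge weight matrix W of Eq. (3) is treated as the identity, and the function names are ours rather than the paper's code.

```python
import torch

def lhg_propagate(H, X0, Theta):
    """Light hypergraph propagation (Eq. 1): nodes -> hyperedges -> nodes,
    with no non-linear activation.
    H     : (N, M) binary incidence matrix
    X0    : (N, d_in) initial (one-hot) node features
    Theta : (d_in, d) trainable projection
    """
    De_inv = torch.diag(1.0 / H.sum(dim=0).clamp(min=1.0))   # D_e^{-1}
    Dv_inv = torch.diag(1.0 / H.sum(dim=1).clamp(min=1.0))   # D_v^{-1}
    E = De_inv @ H.t() @ X0 @ Theta                          # hyperedge embeddings
    X = Dv_inv @ H @ E                                       # node embeddings
    return X, E, Dv_inv

def corrupted_positive(X, E, H, Dv_inv, j, e_h, lam=0.1):
    """Corrupt positive node j w.r.t. hyperedge e_h (Eqs. 2-4):
    Gaussian node perturbation (NP) + incidence matrix perturbation (IMP)."""
    x_g = X[j] + (lam ** 0.5) * torch.randn_like(X[j])   # sample from N(x_j, lam*I)
    H_hat = H.clone()
    H_hat[j, e_h] = 0.0                                  # set h(x_j, e_h) = 0
    x_a = (Dv_inv @ H_hat @ E)[j]                        # propagate filtered edges back
    return x_g + x_a                                     # corrupted representation

def hyperlink_loss(x_anchor, x_pos, x_neg):
    """Ranking objective of Eq. (5): score the (corrupted) positive above the negative."""
    return -torch.sigmoid(x_anchor @ x_pos - x_anchor @ x_neg)

# Toy usage: 6 nodes, 3 hyperedges, 8-dim embeddings; nodes 0/1/2 share hyperedge 0.
H = torch.zeros(6, 3)
H[0, 0] = H[1, 0] = H[2, 0] = 1.0
H[2, 1] = H[3, 1] = 1.0
H[4, 2] = H[5, 2] = 1.0
X, E, Dv_inv = lhg_propagate(H, torch.eye(6), torch.randn(6, 8))
x_tilde = corrupted_positive(X, E, H, Dv_inv, j=1, e_h=0)
loss = hyperlink_loss(X[0], x_tilde, X[5])   # anchor 0, positive 1, negative 5
```

The default noise intensity `lam=0.1` follows the implementation details reported later in the experiments section.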
}, { "figure_ref": [], "heading": "Hypergraph contrastive learning", "publication_ref": [], "table_ref": [], "text": "Furthermore, as label information is unavailable during the pre-training stage, we introduce Hypergraph Self-Contrastive Learning (HSCL) to better the global information of the hypergraph and the local variability among nodes within a hyperedge.\nFirst, we propose self-contrastive inter-hyperedges. Formally, given a node set X in a batch, we treat any node embedding x i as the anchor, the positive sample is the anchor's augmented embedding x a i from the augmented hypergraph Ĝ by Equation 3, and negative samples are other node embeddings x j in X . We denote the inter-hyperedge contrastive loss as L inter .\nL inter = xi∈X -log exp (sim(x i , x a i )/τ ) xj ∈X exp (sim(x i , x j )/τ ) , (6\n)\nwhere τ is a temperature parameter. Second, we propose intra-hyperedge self-contrastive learning. Formally, given a hyperedge set E in a batch, we sample a node set X h from each hyperedge e h ∈ E and |X h | = K. Given an anchor node x i ∈ X h , its positive sample x a i is obtained by Equation 3, and its negative samples are other nodes embedding x j ∈ e h . We denote the intra-hyperedge contrastive loss as L intra .\nL intra = 1 |E||X h | e h ∈E x i ∈X h -log exp (sim(xi, x a i )/τ )) x j ∈X h exp (sim(xi, xj)/τ )) , (7\n)\nwhere τ is a temperature parameter. The overall loss in the pre-training stage includes the objective loss in hyperlink prediction L P and the hypergraph self-contrastive from global and local levels, as shown below\nL P re = L P + β 1 L intra + β 2 L inter .(8)\nwhere β 1 and β 2 are hyper-parameters." }, { "figure_ref": [], "heading": "Fine-tuning on User Feedback", "publication_ref": [], "table_ref": [], "text": "In recommendation systems, TOP-N recommendation and sequence recommendation are two of the most common recommendation tasks. GENET designs a simple yet effective downstream fine-tuning approach for both tasks." }, { "figure_ref": [ "fig_2" ], "heading": "Top-N recommendation", "publication_ref": [], "table_ref": [], "text": "In the pre-training phase, users and items are isolated, and there is no direct connection between them. Therefore, as shown in Figure 3(b), in the fine-tuning stage, we first establish connections between users and items. Hypergraph updating. We update the incidence matrix according to the interaction between users and items. Formally, if user u i is connected to hyperedge e p and item v j is connected to hyperedge e q , and u i interacts with v j (e.g., buy or click), we set h(u i , e q ) = 1 and h(v j , e p ) = 1.\nThe node embedding X is updated by:\nX = X + D v -1 HWE. (9\n)\nwhere E is obtained in the pre-training stage and W is the parameters obtained in the pre-training stage, H is the updated hypergraph.\nLightGCN fine-tuning. To comprehensively understand the relationship between users and items, we build a user-item bipartite graph A and employ the classical LightGCN to finetune the representations\nu i , v j = LightGCN ( ǔi , vj , A). (10\n)\nwhere ǔi , and vj are the user embedding and the item embedding in X, respectively.\nThen, we introduce L T op as the training objective in the fine-tuning stage\nL T op = - i,j,k σ(u i v j -u i v k ). (11\n)\nwhere the user i interacts with item j and not interacts with item k, σ(•) is the sigmoid function." 
}, { "figure_ref": [], "heading": "Sequential recommendation", "publication_ref": [ "b0", "b0" ], "table_ref": [], "text": "Unlike the TOP-N recommendation, sequential recommendation focuses on not only long-term user intent but also short-term user intent to make item predictions for the next moment. We believe that LightGCN only captures the long-term user intent, therefore we extra employ GRU [1] to capture short-term user intent. Formally, given a user i 's interaction sequence on most recent s items,V i = {v t-s+1 , • • • , v t }, our model GENET learns the user's short-term intents at moment t by GRU [1] h\nt i = GRU (V i ), ut i = u i + h t i . (12\n)\nwhere h t i is the hidden state of user i at time t. The user representation ut i at time t is an ensemble of short-term user intent and long-term user preference.\nThen, We introduce the sequential recommendation loss L Seq .\nL Seq = - i,j,k σ( ut i v j -ut i v k ). (13\n)\nwhere the user i interacts with item j and not interacts with item k, σ(•) is the sigmoid function." }, { "figure_ref": [], "heading": "EXPERIMENT", "publication_ref": [], "table_ref": [], "text": "In this section, we conduct experiments to study the following questions:\nRQ1: Can GENET generalize well to different side information? RQ2: How does each component in GENET contribute to the overall performance? RQ3: Is GENET suitable for pre-training hypergraphs? RQ4: Does GENET alleviate the users and items cold start problems? " }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b36", "b6", "b19", "b9", "b19", "b14", "b0", "b29", "b10", "b31", "b33", "b1", "b32", "b2", "b14", "b16", "b42", "b35", "b22", "b16", "b14", "b7", "b9", "b35", "b42", "b0", "b12", "b30", "b24", "b3", "b1", "b35", "b2", "b41", "b22", "b8", "b9", "b13" ], "table_ref": [ "tab_1" ], "text": "Datasets: Our experiments are conducted on three real-world datasets: Gowalla [37], Foursquare [7], and Books [20]. We evaluate GENET on these datasets because they are evaluated frequently and cover a wide range of common types of side information in recommendation systems, including item brand, item category, item review, user social networks, and POI geo-location. These datasets include timestamped feedback. Thus, they can be used for Top-N recommendations and sequential recommendations. Statistics of the datasets are shown in Table 1.\nEvaluation Metrics We evaluated the TOP-N recommendation performance and Sequential recommendation performance of all models on three datasets using two widely-used metrics, N@K(NDCG@K) and R@K(Recall@K), where K= [10,20]. We adopt the widely used leave-one-out evaluation method, similar to SPEX [15].\nCompetitors. We compare GENET to an extensive list of competitors. We classify based on whether competitors use side information and pre-training. For the TOP-N task, the competitors are (1).without side information and pre-training: NGCF [30], LightGCN [11],SGL [32], HCCF [34], (2). with pretraining but without side information: SimRec [33], (3). with side information but without pre-training: SPEX [15], HIRE [17], FREEDOM [43], Flashback [36], HGNN + [10], (4). with side information and pre-training: Graph-Flashback [23]. There are three categories in (3): Feature [17], Signal [15] and Model [8,10,36,43].\nFor the Sequential task, the competitors are (1). without side information and pre-training: SASRec [13], ContrastVAE [31], BRET4Rec [25], CBiT [4], (2). with side information but without pre-training: Flashback [36],HGNN + [10], (3). 
with side information and pre-training: S 3 -Rec [42],Graph-Flashback [23]. More details about competitors can be found in the related works 2.1 section.\nImplementations. The proposed GENET framework is implemented in Pytorch library and Deep HyperGraph [9,10]. We adopt the Adam [14] optimizer in both the pre-training and fine-tuning stages. In the pre-training stage, the embedding sizes of the nodes and hyperedges are both 64, the batch size is 4096, the learning rate is 0.0005, and the epochs are 500. The noise intensity λ is 0.1. The hypergraph contrastive learning hyperparameter β 1 and β 2 are 0.005 and 0.01, respectively. For the updated hypergraph, we only use the training dataset. In the fine-tuning stage, The epochs are 10, and the learning rate is 0.0005. During the first three epochs, we amplify the learning rate by ten times. Models are in the same setting for fairness. The purpose is to expedite the transfer of the pre-trained universal representations to downstream tasks. The LightGCN Table 2. Comparative performance for TOP-N recommendation, bold fonts for best results, underlined scores for the second best results." }, { "figure_ref": [], "heading": "Gowalla", "publication_ref": [ "b38", "b42" ], "table_ref": [], "text": "Foursquare Book N@10 N@20 R@10 R@20 N@10 N@20 R@10 R@20 N@10 N@20 R@10 R@ layer number K is 2. To potentially address oversmoothing, we also explore the possibility of incorporating contrastive learning and denoising techniques [39,43]." }, { "figure_ref": [], "heading": "Generalization of GENET", "publication_ref": [], "table_ref": [], "text": "To answer RQ1, we evaluate GENET on three datasets (Gowalla, Foursquare, Books) for two recommendation tasks (Top-N and sequential recommendation), showing superior performance across all metrics and tasks. For instance, in Top-N recommendation, it increased NDCG@10 by up to 14.98% and Recall@10 by up to 10.66%. In sequential recommendation, the increase was up to 37.88% for NDCG@10 and 19.58% for Recall@10. Existing methods demonstrated weaker generalization, especially across different datasets. GENET consistently achieved a recommendation performance of over 0.6000 for NDCG@10 across all datasets and tasks, indicating strong generalization capability. Moreover, pre-training positively impacted recommendation performance. Models like SimRec and Graph-Flashback significantly improved after pre-training. S 3 -rec, by incorporating pretraining into SASRec, achieved improvements of 119.56% and 131.28% in Recall@10 and NDCG@10, respectively, on the Gowalla dataset. Overall, GENET demonstrated robust generalization across three datasets and two tasks, outperforming 25 competitors in various side information scenarios." }, { "figure_ref": [ "fig_3" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "To answer RQ2, we assessed the effectiveness of various variants of GENET by removing components such as the node perturbation method NP, incidence ma- N@10 N@20 R@10 R@20 N@10 N@20 R@10 R@20 N@10 N@20 R@10 R@20 4(a), the results indicated that each component is crucial for performance. On the Gowalla dataset, removing NP(i.e.,\"w/o\" NP), IMP(i.e.,\"w/o\" IMP), and HSCL(i.e.,\"w/o\" HSCL) led to declines in Recall@10 and NDCG@10 by 0.0124 and 0.0185, 0.0181 and 0.0332, 0.0103 and 0.0196, respectively. Particularly, removing IMP had the most significant impact as it is a core component of the pre-training stage. 
Its removal left only NP for node perturbation, oversimplifying the link prediction task and affecting the model's ability to learn intrinsic relationships and higher-order dependencies within the data." }, { "figure_ref": [], "heading": "Pre-training on Hypergraphs", "publication_ref": [], "table_ref": [], "text": "To study whether the proposed pre-training tasks are suitable for pre-training on hypergraphs (RQ3), we studied the suitability of pre-training tasks for hypergraphs (RQ3), comparing the pre-training phase (GENET-P) of GENET with two common graph-based pretext tasks: Link Prediction (LP) and Node Feature Reconstruction (NR). We utilized two popular graph pre-training strategies: Contrastive and Generative. Consequently, we obtained four pre-training tasks: LP-C, LP-G, NR-C, and NR-G. As a baseline, we also implemented random initialization of node embeddings. These pre-training methods were evaluated by directly computing user-item similarity for recommendation (No-tuning) and performance after fine-tuning with downstream models, including Recall@10 and NDCG@10, which are shown in 4(b)(c).\nBased on the results, we draw the following conclusions: (1) GENET-P outperforms all other pre-training tasks on hypergraphs. For instance, on the FourSquare dataset, it increases Recall@10 by 28.24% and NDCG@1 by 30.86% compared to the best competitor (i.e., LP-C). ( 2) GENET-P is already very powerful without fine-tuning on feedback data, especially on the BOOK dataset, where all competitors' Recall@10 is under 0.5379 (0.2792 in NDCG@10), and GENET-P's " }, { "figure_ref": [ "fig_3" ], "heading": "Cold-start", "publication_ref": [], "table_ref": [], "text": "To answer RQ4, we evaluate the performance of GENET in scenarios where it encounters user cold-start and item cold-start situations.\nTo create a user cold-start scenario, we select the last 1% of users based on their interaction counts and designate them as cold-start users. We remove these cold-start users from the training dataset and retain only cold-start users in the test dataset. We create an item cold-start scenario in the same manner.\nAs shown in Figure 4(d), GENET demonstrates a remarkable capability to deal with cold-start users and items. It has a 321.50% to 869.53% improvement in NDCG@10 and a 258.98% to 538.89% improvement in Recall@10 compared with SimRec in the users cold-start scenario, and it has a 33.33% to 4069.35% improvement in Recall@10 and 152.11% to 5191.67% improvement in NDCG@10 in the items cold-start scenario. This highlights the effectiveness of the hypergraph constructed from social networks and user reviews in accurately modeling users' preferences and the effectiveness of the hypergraph constructed from POI location and item features in accurately modeling items' properties." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this work, we proposed a novel framework called Generalized hypErgraph pretraiNing on sidE informaTion(GENET). It aims to enhance recommendation performances by integrating side information. GENET is based on pretrainingfinetuning, where heterogeneous side information is constructed as a unified hypergraph. We propose novel pre-training tasks tailored explicitly for hypergraphs, which can effectively capture high-order and multi-faceted relationships in hypergraphs. The finetuning is conducted with simple and straightforward approaches. Extensive experimental results show that GENET has excellent generalization. 
It outperforms SOTA competitors on two recommendation tasks on three public datasets with varying side information." } ]
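For the sequential task described in the sections above, the fine-tuning head combines the long-term user embedding with a GRU over the user's most recent item embeddings and trains with the pairwise ranking loss of Eqs. (12)-(13). A minimal sketch follows; the module name, the single-layer GRU configuration, and the batch shapes are assumptions (only the 64-dimensional embedding size follows the stated setup).

```python
import torch
import torch.nn as nn

class SequentialHead(nn.Module):
    """Short-term intent via a GRU over recent items, added to the long-term
    user embedding (Eq. 12)."""

    def __init__(self, dim=64):
        super().__init__()
        self.gru = nn.GRU(input_size=dim, hidden_size=dim, batch_first=True)

    def forward(self, user_emb, recent_item_emb):
        # user_emb        : (B, D)    long-term user representation u_i
        # recent_item_emb : (B, s, D) embeddings of the s most recently interacted items
        _, h_t = self.gru(recent_item_emb)       # h_t : (1, B, D), last hidden state
        return user_emb + h_t.squeeze(0)         # \hat{u}_i^t

def seq_ranking_loss(u_hat, pos_items, neg_items):
    """Pairwise ranking objective of Eq. (13)."""
    diff = (u_hat * pos_items).sum(dim=-1) - (u_hat * neg_items).sum(dim=-1)
    return -torch.sigmoid(diff).sum()

# Toy usage: batch of 8 users, the last s = 5 items each, random embeddings.
head = SequentialHead(dim=64)
u_hat = head(torch.randn(8, 64), torch.randn(8, 5, 64))
loss = seq_ranking_loss(u_hat, torch.randn(8, 64), torch.randn(8, 64))
```

The Top-N variant follows the same pattern but omits the GRU and instead refines the pre-trained embeddings with LightGCN over the user-item bipartite graph, as in Eqs. (9)-(11).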
Recommendation with side information has drawn significant research interest because of its potential to mitigate the sparsity of user feedback. However, existing models struggle to generalize across diverse domains and types of side information. In particular, three challenges remain unaddressed: (1) the diverse formats of side information, ranging from text sequences to categorical, numerical, and relational data; (2) the diverse semantics of side information, which describes items and users at multiple levels and in a context different from that of recommendation systems; and (3) the diverse correlations in side information, which measure similarity over multiple objects beyond pairwise relations. In this paper, we introduce GENET (Generalized hypErgraph pretraiNing on sidE informaTion), which pre-trains user and item representations on feedback-irrelevant side information and fine-tunes them on user feedback data. GENET leverages pre-training to prevent side information from overshadowing critical ID features and feedback signals, and it employs a hypergraph framework to accommodate diverse types of side information. During pre-training, GENET combines a hyperlink-prediction task with self-supervised contrastive tasks to capture fine-grained semantics at both the local and the global level. It further introduces a strategy that perturbs positive samples while preserving high-order relations, improving pre-training robustness. Extensive experiments demonstrate that GENET generalizes well, outperforming the SOTA method by up to 38% on TOP-N and sequential recommendation tasks across datasets with different side information.
GENET: Unleashing the Power of Side Information for Recommendation via Hypergraph Pre-training
[ { "figure_caption": "Fig. 1 .1Fig. 1. NDCG@10 on the Gowalla and FourSquare dataset by incorporating side information at feature, model, and signal level upon backbone models LightGCN and SimRec", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Different types of side information and the construction of hypergraphs", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. The framework of GENET pre-training phase and fine-tuning stage.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Performances of GENET variants, different pre-training tasks and cold start.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Social Network Hypergraph: Constructs hyperedges to represent social circles, capturing high-order and multifaceted social connections. For example, a hyperedge represents David's social circle, connecting individuals like Victor and Alice if they are friends of David. -POI Location Hypergraph: Utilizes geographical data (longitude and latitude) to segment POIs into regions. POIs are clustered into regions using k-means clustering, and a hyperedge is constructed to represent each region.", "figure_data": "-Product Review Hypergraph: Builds hyperedges based on brand-awaresentiments. We analyze sentiments of reviews under a brand to connect userswho exhibit brand loyalty.-Item Feature Hypergraph: Handles item-related metadata, such as brandsand categories. Hyperedges are constructed for each brand and category. Anitem is connected to the corresponding hyperedges if it belongs to a specificbrand or category.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Statistics of the datasets", "figure_data": "Dataset#User #Item #Interactions #Relationship#Side Information #DensityGowalla10,671 48,5871,195,53293,260 Friendship,POI location0.23%Foursquare 22,065 9,891182,04479,704 Friendship,POI location0.08%Books28,898 85,1883,233,028160,448,633 Product review, category0.13%", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparative performance on Sequential recommendation", "figure_data": "GowallaFoursquareBook", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" } ]
Yang Li; Qi'ao Zhao; Chen Lin; Zhenjie Zhang; Xiaomin Zhu
[ { "authors": "J Chung; C Gulcehre; K Cho; Y Bengio", "journal": "", "ref_id": "b0", "title": "Empirical evaluation of gated recurrent neural networks on sequence modeling", "year": "2014" }, { "authors": "J Devlin; M W Chang; K Lee; K Toutanova", "journal": "HAACL-HLT", "ref_id": "b1", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "X Dong; X Zhan; Y Wu; Y Wei; M C Kampffmeyer; X Wei; M Lu; Y Wang; X Liang", "journal": "", "ref_id": "b2", "title": "M5product: Self-harmonized contrastive learning for e-commercial multi-modal pretraining", "year": "2022" }, { "authors": "H Du; H Shi; P Zhao; D Wang; V S Sheng; Y Liu; G Liu; L Zhao", "journal": "", "ref_id": "b3", "title": "Contrastive learning with bidirectional transformers for sequential recommendation", "year": "2022" }, { "authors": "A El-Kishky; T Markovich; S Park; C Verma; B Kim; R Eskander; Y Malkov; F Portman; S Samaniego; Y Xiao", "journal": "", "ref_id": "b4", "title": "Twhin: Embedding the twitter heterogeneous information network for personalized recommendation", "year": "2022" }, { "authors": "Y Fang; L Si", "journal": "", "ref_id": "b5", "title": "Matrix co-factorization for recommendation with rich side information and implicit feedback", "year": "2011" }, { "authors": "J Feng; Y Li; C Zhang; F Sun; F Meng; A Guo; D Jin", "journal": "", "ref_id": "b6", "title": "Deepmove: Predicting human mobility with attentional recurrent networks", "year": "2018" }, { "authors": "Y Feng; H You; Z Zhang; R Ji; Y Gao", "journal": "", "ref_id": "b7", "title": "Hypergraph neural networks", "year": "2019" }, { "authors": "Y Feng; H You; Z Zhang; R Ji; Y Gao", "journal": "", "ref_id": "b8", "title": "Hypergraph neural networks", "year": "2019" }, { "authors": "Y Gao; Y Feng; S Ji; R Ji", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b9", "title": "Hgnn + : General hypergraph neural networks", "year": "2022" }, { "authors": "X He; K Deng; X Wang; Y Li; Y Zhang; M Wang", "journal": "", "ref_id": "b10", "title": "Lightgcn: Simplifying and powering graph convolution network for recommendation", "year": "2020" }, { "authors": "Z Hu; Y Dong; K Wang; K W Chang; Y Sun", "journal": "", "ref_id": "b11", "title": "Gpt-gnn: Generative pretraining of graph neural networks", "year": "2020" }, { "authors": "W C Kang; J Mcauley", "journal": "IEEE", "ref_id": "b12", "title": "Self-attentive sequential recommendation", "year": "2018" }, { "authors": "D Kinga; J B Adam", "journal": "", "ref_id": "b13", "title": "A method for stochastic optimization", "year": "2015" }, { "authors": "H Li; L Li; G Xv; C Lin; K Li; B Jiang", "journal": "ACM Transactions on Information Systems (TOIS)", "ref_id": "b14", "title": "Spex: A generic framework for enhancing neural social recommendation", "year": "2021" }, { "authors": "X Li; H Chen", "journal": "Decision Support Systems", "ref_id": "b15", "title": "Recommendation as link prediction in bipartite graphs: A graph kernel-based machine learning approach", "year": "2013" }, { "authors": "T Liu; Z Wang; J Tang; S Yang; G Y Huang; Z Liu", "journal": "", "ref_id": "b16", "title": "Recommender systems with heterogeneous side information", "year": "2019" }, { "authors": "Z Liu; X Yu; Y Fang; X Zhang", "journal": "", "ref_id": "b17", "title": "Graphprompt: Unifying pre-training and downstream tasks for graph neural networks", "year": "2023" }, { "authors": "Y Lu; X Jiang; Y Fang; C Shi", "journal": "", "ref_id": 
"b18", "title": "Learning to pre-train graph neural networks", "year": "2021" }, { "authors": "J Mcauley; C Targett; Q Shi; Van Den; A Hengel", "journal": "", "ref_id": "b19", "title": "Image-based recommendations on styles and substitutes", "year": "2015" }, { "authors": "X Ning; G Karypis", "journal": "", "ref_id": "b20", "title": "Sparse linear methods with side information for top-n recommendations", "year": "2012" }, { "authors": "A Pfadler; H Zhao; J Wang; L Wang; P Huang; D L Lee", "journal": "IEEE", "ref_id": "b21", "title": "Billion-scale recommendation with heterogeneous side information at taobao", "year": "2020" }, { "authors": "X Rao; L Chen; Y Liu; S Shang; B Yao; P Han", "journal": "", "ref_id": "b22", "title": "Graph-flashback network for next location recommendation", "year": "2022" }, { "authors": "R Shimizu; M Matsutani; M Goto", "journal": "Knowledge-Based Systems", "ref_id": "b23", "title": "An explainable recommendation framework based on an improved knowledge graph attention network with massive volumes of side information", "year": "2022" }, { "authors": "F Sun; J Liu; J Wu; C Pei; X Lin; W Ou; P Jiang", "journal": "", "ref_id": "b24", "title": "Bert4rec: Sequential recommendation with bidirectional encoder representations from transformer", "year": "2019" }, { "authors": "K Tu; P Cui; X Wang; F Wang; W Zhu", "journal": "", "ref_id": "b25", "title": "Structural deep embedding for hyper-networks", "year": "2018" }, { "authors": "J Wang; K Ding; L Hong; H Liu; J Caverlee", "journal": "", "ref_id": "b26", "title": "Next-item recommendation with sequential hypergraphs", "year": "2020" }, { "authors": "J Wang; P Huang; H Zhao; Z Zhang; B Zhao; D L Lee", "journal": "", "ref_id": "b27", "title": "Billion-scale commodity embedding for e-commerce recommendation in alibaba", "year": "2018" }, { "authors": "X Wang; X He; F Feng; L Nie; T S Chua", "journal": "", "ref_id": "b28", "title": "Tem: Tree-enhanced embedding model for explainable recommendation", "year": "2018" }, { "authors": "X Wang; X He; M Wang; F Feng; T S Chua", "journal": "", "ref_id": "b29", "title": "Neural graph collaborative filtering", "year": "2019" }, { "authors": "Y Wang; H Zhang; Z Liu; L Yang; P S Yu", "journal": "", "ref_id": "b30", "title": "Contrastvae: Contrastive variational autoencoder for sequential recommendation", "year": "2022" }, { "authors": "J Wu; X Wang; F Feng; X He; L Chen; J Lian; X Xie", "journal": "", "ref_id": "b31", "title": "Self-supervised graph learning for recommendation", "year": "2021" }, { "authors": "L Xia; C Huang; J Shi; Y Xu", "journal": "", "ref_id": "b32", "title": "Graph-less collaborative filtering", "year": "2023" }, { "authors": "L Xia; C Huang; Y Xu; J Zhao; D Yin; J Huang", "journal": "", "ref_id": "b33", "title": "Hypergraph contrastive collaborative filtering", "year": "2022" }, { "authors": "Y Xie; P Zhou; S Kim", "journal": "", "ref_id": "b34", "title": "Decoupled side information fusion for sequential recommendation", "year": "2022" }, { "authors": "D Yang; B Fankhauser; P Rosso; P Cudre-Mauroux", "journal": "", "ref_id": "b35", "title": "Location prediction over sparse user mobility traces using rnns", "year": "2020" }, { "authors": "H Yin; B Cui; L Chen; Z Hu; C Zhang", "journal": "ACM Transactions on Knowledge Discovery from Data (TKDD)", "ref_id": "b36", "title": "Modeling location-based user rating profiles for personalized recommendation", "year": "2015" }, { "authors": "J Yu; H Yin; J Li; Q Wang; N Q V Hung; X Zhang", "journal": "", "ref_id": "b37", 
"title": "Self-supervised multichannel hypergraph convolutional network for social recommendation", "year": "2021" }, { "authors": "J Yu; H Yin; X Xia; T Chen; L Cui; Q V H Nguyen", "journal": "", "ref_id": "b38", "title": "Are graph augmentations necessary? simple graph contrastive learning for recommendation", "year": "2022" }, { "authors": "T Zhang; P Zhao; Y Liu; V S Sheng; J Xu; D Wang; G Liu; X Zhou", "journal": "", "ref_id": "b39", "title": "Feature-level deeper self-attention network for sequential recommendation", "year": "2019" }, { "authors": "F Zhao; M Xiao; Y Guo", "journal": "", "ref_id": "b40", "title": "Predictive collaborative filtering with side information", "year": "2016" }, { "authors": "K Zhou; H Wang; W X Zhao; Y Zhu; S Wang; F Zhang; Z Wang; J R Wen", "journal": "", "ref_id": "b41", "title": "S3-rec: Self-supervised learning for sequential recommendation with mutual information maximization", "year": "2020" }, { "authors": "X Zhou", "journal": "", "ref_id": "b42", "title": "A tale of two graphs: Freezing and denoising graph structures for multimodal recommendation", "year": "2022" } ]
[ { "formula_coordinates": [ 7, 203.24, 121.91, 170.83, 115.47 ], "formula_id": "formula_0", "formula_text": "#! # \" ## # $ #% #& ) ) → * HSCL GENET-P \"! \"$ \"\" inter&intra neg * → ) #! #\" ## #$ # % #& \"! \" $ \"\"" }, { "formula_coordinates": [ 7, 134.77, 347.79, 61.68, 10.87 ], "formula_id": "formula_1", "formula_text": "X 0 ∈ R |X |×|X |" }, { "formula_coordinates": [ 7, 263.05, 441.25, 213.3, 30.47 ], "formula_id": "formula_2", "formula_text": "E = D e -1 H ⊤ X 0 Θ 0 , X = D v -1 HIE. (1" }, { "formula_coordinates": [ 7, 476.35, 453.04, 4.24, 8.74 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 8, 273.2, 293.52, 207.39, 13.68 ], "formula_id": "formula_4", "formula_text": "x g j = N (x j , λI),(2)" }, { "formula_coordinates": [ 8, 262.68, 410.76, 217.92, 14.34 ], "formula_id": "formula_5", "formula_text": "x a j = (D v -1 ĤWE) j ,(3)" }, { "formula_coordinates": [ 8, 277.67, 469.45, 202.92, 13.68 ], "formula_id": "formula_6", "formula_text": "x j = x g j + x a j .(4)" }, { "formula_coordinates": [ 8, 239.61, 535.77, 240.99, 22.21 ], "formula_id": "formula_7", "formula_text": "L P = - i,j,k σ(x i • x j -x i • x k ),(5)" }, { "formula_coordinates": [ 9, 204.49, 186.96, 271.86, 28.38 ], "formula_id": "formula_8", "formula_text": "L inter = xi∈X -log exp (sim(x i , x a i )/τ ) xj ∈X exp (sim(x i , x j )/τ ) , (6" }, { "formula_coordinates": [ 9, 476.35, 195.27, 4.24, 8.74 ], "formula_id": "formula_9", "formula_text": ")" }, { "formula_coordinates": [ 9, 180.75, 300.64, 295.92, 26.58 ], "formula_id": "formula_10", "formula_text": "L intra = 1 |E||X h | e h ∈E x i ∈X h -log exp (sim(xi, x a i )/τ )) x j ∈X h exp (sim(xi, xj)/τ )) , (7" }, { "formula_coordinates": [ 9, 476.67, 308.21, 3.93, 7.86 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 9, 233.92, 393.37, 246.67, 11.72 ], "formula_id": "formula_12", "formula_text": "L P re = L P + β 1 L intra + β 2 L inter .(8)" }, { "formula_coordinates": [ 9, 256.19, 653.34, 220.16, 12.46 ], "formula_id": "formula_13", "formula_text": "X = X + D v -1 HWE. (9" }, { "formula_coordinates": [ 9, 476.35, 656.12, 4.24, 8.74 ], "formula_id": "formula_14", "formula_text": ")" }, { "formula_coordinates": [ 10, 240.45, 185.97, 235.72, 9.79 ], "formula_id": "formula_15", "formula_text": "u i , v j = LightGCN ( ǔi , vj , A). (10" }, { "formula_coordinates": [ 10, 476.16, 186.11, 4.43, 8.74 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 10, 242.49, 247.97, 233.68, 22.21 ], "formula_id": "formula_17", "formula_text": "L T op = - i,j,k σ(u i v j -u i v k ). (11" }, { "formula_coordinates": [ 10, 476.16, 250.04, 4.43, 8.74 ], "formula_id": "formula_18", "formula_text": ")" }, { "formula_coordinates": [ 10, 276.25, 431.26, 199.92, 28.67 ], "formula_id": "formula_19", "formula_text": "t i = GRU (V i ), ut i = u i + h t i . (12" }, { "formula_coordinates": [ 10, 476.16, 441.4, 4.43, 8.74 ], "formula_id": "formula_20", "formula_text": ")" }, { "formula_coordinates": [ 10, 242.73, 508.77, 233.44, 22.21 ], "formula_id": "formula_21", "formula_text": "L Seq = - i,j,k σ( ut i v j -ut i v k ). (13" }, { "formula_coordinates": [ 10, 476.16, 510.84, 4.43, 8.74 ], "formula_id": "formula_22", "formula_text": ")" } ]
2023-11-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b35", "b32", "b17", "b14", "b23", "b21", "b27", "b36" ], "table_ref": [], "text": "Large Language Models (LLMs) have exhibited remarkable capabilities, with popular models such as ChatGPT 1 and GPT4 (OpenAI, 2023) showcasing their potentials in a variety of NLP tasks (Zhao et al., 2023;Mohamadi et al., 2023). However, these LLMs often suffer from extensive parameter requirements and associated computational demands, limiting their practicality and scalability for real-world applications. Parameter-Efficient Fine-Tuning (PEFT) addresses the challenges by reducing the number of parameters required for effective fine-tuning without compromising the model performance. Notable PEFT approaches include LoRA (Hu et al., 2022), adapter tuning (Houlsby et al., 2019), prefix-tuning (Li and Liang, 2021), prompt-tuning (Lester et al., 2021), P-tuning (Liu et al., 2022), BitFit (Zaken et al., 2022) and others.\nDespite the advancement, we observe that existing PEFT approaches for LLMs present several limitations that hinder their effectiveness and practicality. Most PEFT methods are proposed in the BERT era primarily for encoder-based models, which are not tailored specifically for LLMs. Detailed PEFT implementations are mostly agnostic without considering the decoder-only architectures and the algorithmic characteristics of mainstream LLMs, for example, the requirements of Reinforcement Learning from Human Feedback (RLHF)-based fine-tuning (Ouyang et al., 2022). Thus, there is an urgent need for better PEFT that enables more effective learning of LLMs.\nIn this position paper, we advocate for the development of PEFT techniques specifically tailored for LLMs. We briefly review current states of development in the field. Based on our empirical study, we show that in general LoRA-based approaches are more suitable for LLMs; yet there are no uniform algorithmic designs for all the settings. In addition, we discuss complicated learning strategies that are not supported by current PEFT methods, such as more efficient distributed PEFT, PEFT that support RLHF training for better human alignment, PEFT combines with various model compression techniques (such as distillation and quantization), and PEFT for multi-modal LLMs. We hope that our research can stimulate research for better PEFT techniques, especially for LLMs." }, { "figure_ref": [], "heading": "Literature Review", "publication_ref": [ "b37", "b31", "b8", "b57", "b19", "b2", "b33", "b42", "b58", "b54", "b0", "b36", "b9", "b17", "b45", "b14", "b47", "b23", "b27", "b22", "b21", "b28", "b46", "b40", "b11", "b17", "b56", "b18", "b3", "b49", "b13", "b7", "b20", "b24" ], "table_ref": [], "text": "A Brief Overview of LLMs. Before the LLM wave, Pre-trained Language Models (PLMs) have gained significant attention due to their abilities to learn contextual representations (Qiu et al., 2020;Min et al., 2021). One prominent example is BERT (Devlin et al., 2019), which leverages the encoder-only architecture and has been adopted in language understanding tasks. Since the launch of ChatGPT, a variety of LLMs have been released. 
Popular open LLMs include LLaMA (Touvron et al., 2023a), LLaMA 2 (Touvron et al., 2023b), OPT (Zhang et al., 2022), OPT-IML (Iyer et al., 2022), GPT-NeoX (Black et al., 2022), BLOOM (Scao et al., 2022), BLOOMZ (Muennighoff et al., 2023), Galactica (Taylor et al., 2022), CPM-2 (Zhang et al., 2021), GLM (Zeng et al., 2023), Pythia (Biderman et al., 2023), and many others, to name a few. For model training, the three-stage process of \"pre-training, supervised finetuning (SFT) and RLHF\" put forward by (Ouyang et al., 2022) is widely accepted by the community. It can be easily seen that training LLMs requires numerous computational resources. Therefore, the huge computational and financial costs naturally call for the development of PEFT for LLMs. General PEFT Methods. PEFT is a type of finetuning method that reduces the number of learnable parameters of PLMs (not specifically for LLMs) while preserving good performance, which is also referred to as Delta Tuning (Ding et al., 2023). BitFit (Zaken et al., 2022) is a simple sparse finetuning method where only the bias parameters are tuned. LoRA (Hu et al., 2022) leverages low-rank approximation to the update matrices (i.e., parameters) at each model layer, which can be applied to various PLMs. Following the work of LoRA, AdaLoRA (Zhang et al., 2023a) is proposed to incorporate adaptive budget allocation into the choices of LoRA ranks for different matrices. DyLoRA (Valipour et al., 2023) further employs a dynamic search-free technique for rank selection. Adapters (Houlsby et al., 2019) are small neural network modules integrated into original transformer blocks, which are learned to capture new knowledge for downstream tasks. AdaMix (Wang et al., 2022) learns a mixture of multiple adapters for PEFT. Prefix-tuning (Li and Liang, 2021) adds a sequence of prefixes, represented as trainable continuous embeddings, to each transformer layer to specifically capture task-specific information. Adaptive Prefix-tuning (Zhang et al., 2023c) extends Prefix-tuning to make the lengths of prefixes more adaptive to tasks. P-tuning v2 (Liu et al., 2022) is a similar approach that shows layerwise prompt vectors are also beneficial for solving language understanding tasks. Prefix Propagation (Li et al., 2023) explores prefix-tuning for longer input sequences. In contrast to continuous vectors, prompt-tuning employs trainable prompt vectors (Lester et al., 2021;Liu et al., 2021;Wang et al., 2021;Xu et al., 2023b) or discrete textual descriptions (Shin et al., 2020;Gao et al., 2021) at the input layer to model task-level knowledge. We refer readers to the survey (Liu et al., 2023b) for a more detailed review. PEFT Methods for LLMs. It is worth noting that the above methods are not tailored to LLMs. Thus, we further summarize how these PEFT techniques are applied. To the best of our knowledge, LoRA (Hu et al., 2022) is one of the most widely applied methods due to its simplicity in design and uniformity in application scenarios. Apart from LoRA, LLaMA-Adapter (Zhang et al., 2023b) is proposed to insert adapter networks into LLMs with zero-initialized attention. In the open-source community, PEFT2 is also the name of a project that provides the implementations of several PEFT methods on LLMs, serving as a useful tool for further research into the subject. OpenDelta (Hu et al., 2023) focuses on the quick adaptation of LLMs. A few works focus on evaluating PEFT on text generation tasks (Chen et al., 2022;Xu et al., 2022), but the evaluations are not conducted for LLMs.
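Since LoRA is the recurring building block in this review, a minimal sketch of the idea may be useful: the pretrained weight matrix is frozen, and only a rank-r update B·A is trained. The PyTorch module below is an illustrative implementation for exposition only; it is not the code of the original LoRA release or of the open-source PEFT library, and the class name LoRALinear is chosen here for clarity.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update."""

    def __init__(self, base_linear: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base_linear
        self.base.weight.requires_grad_(False)      # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)

        in_f, out_f = base_linear.in_features, base_linear.out_features
        self.lora_A = nn.Parameter(torch.randn(r, in_f) * 0.01)  # down-projection
        self.lora_B = nn.Parameter(torch.zeros(out_f, r))        # up-projection, zero init
        self.scaling = alpha / r

    def forward(self, x):
        # Frozen path + low-rank update; at init the update is zero, so behavior is unchanged.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling
```

Only lora_A and lora_B are updated during fine-tuning, i.e., roughly 2·r·d parameters per d×d weight instead of d², which is why the rank r becomes the main knob examined in the empirical analysis that follows.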
Another thread of works combines model quantization with PEFT, which maps model parameters from floating-point numbers to integers (Gholami et al., 2021). QLoRA (Dettmers et al., 2023) quantizes an LLM to 4-bit, and then leverages a small set of LoRA weights to avoid performance degradation. Alpha Tuning (Kwon et al., 2022) and QA-LoRA (Xu et al., 2023a) are quantization-aware adaptation methods for LLMs. AWQ (Lin et al., 2023) significantly reduces the model quantization error by protecting 1% of the salient weights of the LLM." }, { "figure_ref": [], "heading": "Analysis and Research Directions", "publication_ref": [], "table_ref": [], "text": "We analyze the performance of PEFT on LLMs and suggest several directions for future research." }, { "figure_ref": [], "heading": "Empirical Analysis", "publication_ref": [ "b34", "b12", "b5", "b38", "b21", "b23", "b17" ], "table_ref": [ "tab_0" ], "text": "Before presenting research directions, we conduct a brief empirical analysis on the effectiveness of PEFT over LLMs. Without loss of generality, we evaluate the performance of a popular LLM, i.e., Llama-2-7b-chat3, over two text generation tasks (E2E (Novikova et al., 2017) and WebNLG (Gardent et al., 2017)) and two more challenging tasks: math problems (GSM8K (Cobbe et al., 2021)) and question answering (CoQA (Reddy et al., 2019)).
Detailed dataset statistics and experimental settings can be found in the appendix.
In Table 1, we report the testing performance of standard fine-tuning and three popular PEFT methods, namely, prompt-tuning (Lester et al., 2021), prefix-tuning (Li and Liang, 2021) and LoRA (Hu et al., 2022). Results show that LoRA and full finetuning exhibit similar performance in generative tasks, with minimal differences in the quality of generated content. For math problems and QA tasks, they display a slight variance in accuracy, whereas other PEFT methods perform inadequately. In Table 2, we study the effectiveness of LoRA on a larger model scale based on Llama-2 and Vicuna4. The results indicate that for simple text generation, there is a minor enhancement as the model size increases from 7B to 13B. However, there is no discernible difference in the generation quality of specific questions after manual checking. Conversely, for more intricate math problems, we observe a significant improvement in accuracy with the increase in model parameters.
We further observe that different LoRA ranks have varying degrees of performance impact. To investigate this further, we conduct tests on the same dataset with different data volumes (randomly sampled 5%, 10%, and the entire dataset) using different LoRA ranks. As shown in Figure 1, for smaller datasets, a lower LoRA rank yields optimal results, and increasing the LoRA rank actually leads to a decline in performance. Therefore, a lower LoRA rank can achieve satisfactory performance while also saving training resource costs." }, { "figure_ref": [], "heading": "Lessons Learned for Future Research", "publication_ref": [], "table_ref": [], "text": "From the experiments, it is seen that LoRA-style PEFT methods achieve better performance for LLMs. Yet, there is no \"free lunch\" for all learning settings, particularly for different tasks and data volumes. In addition, the trained LoRA modules with large ranks may still be over-parameterized for some cases. We suggest the following possible directions for future research. i) Task-adaptive LoRA methods can be developed to search for more suitable ranks based on task difficulty and data volumes.
ii) More compact low-rank structures can be employed to decompose the parameter matrices, which speeds up the training process and avoids overfitting simultaneously. iii) Combining LoRA-style approaches with better prompt designs for LLMs may also result in better performance." }, { "figure_ref": [], "heading": "Other Research Directions", "publication_ref": [ "b36", "b7", "b16", "b30", "b41", "b48", "b6", "b52", "b16", "b6", "b10" ], "table_ref": [], "text": "Large-scale Training. As observed from the experimental results, the performance of LoRA is highly related to the number of trainable parameters (controlled by the LoRA rank). For LLMs with 100B parameters or more (such as GPT-4 (OpenAI, 2023)), even tuning only 1% of the parameters leads to huge computational costs. In addition, the model checkpoints must be partitioned as they do not fit in a single GPU. Thus, the parameters of LoRA modules are also distributed according to the model partition strategies during training. The parameter values should be communicated frequently across GPUs and machines during the training process. To the best of our knowledge, there are no comprehensive studies or publicly available frameworks that address the problems of large-scale, distributed LoRA training for ultra-large models effectively.
In addition, the auto-regressive language modeling (next token prediction) objective is not the only learning task during the LLM training process. For better alignment with human values, RLHF (Ouyang et al., 2022) is often leveraged to fine-tune the LLMs based on reinforcement learning coached by a reward model. This process is more computationally expensive due to the involvement of both the supervised fine-tuned and RLHF-based fine-tuned LLM checkpoints, together with a reward model that expresses human preferences. Compared to simple fine-tuning, RLHF requires the computational graphs and weights of these additional models to be loaded into the GPU memory during training, which significantly lowers the GPU memory space available for training the LLM itself. We suggest that further study on PEFT-style RLHF training is of great value to save computational resources and benefit the NLP community for deeper research into how to apply RLHF more easily. PEFT with Model Compression. For application developers, it is more important to deploy LLMs online for real-time inference. Hence, compressing LLMs to smaller sizes is critical, in order to save GPU memory and speed up the inference process. In the literature, several types of approaches have been proposed to compress the models, such as knowledge distillation, model quantization and pruning. Take quantization as an example. In QLoRA (Dettmers et al., 2023), the underlying LLM is quantized to 4-bit first and then tuned using LoRA over a small but high-quality dataset. The work (Hsieh et al., 2023) distills LLMs by extracting rationales as additional supervision from larger models for training small models, yet the parameters of small models need to be fully fine-tuned to ensure high performance. LLM-Pruner (Ma et al., 2023) leverages structural pruning for LLMs to selectively remove non-critical structures based on the gradients learned during training. A similar work, Wanda (Sun et al., 2023), prunes the weights with the smallest magnitudes multiplied by the corresponding input activations, in order to bring parameter sparsity to large models. We suggest that the research on LLM compression with PEFT is vital for online deployment and highly insufficient in its current state.
For example, it is possible to obtain a smaller model by PEFT-applied distillation. This benefits institutions or developers for whom fully finetuning even smaller models (with around 7B parameters) is computationally prohibitive. PEFT for Multi-modal LLMs. LLMs are not only about text. By feeding the output representations of visual encoders (or encoders for other modalities) into LLM backbones, multi-modal LLMs, including NExt-GPT (Wu et al., 2023), Instruct-BLIP (Dai et al., 2023), mPLUG-Owl (Ye et al., 2023), LLaVa (Liu et al., 2023a), MiniGPT (Chen et al., 2023) and many others, can be trained and deployed to tackle multi-modal tasks by instruction following. In multi-modal LLMs, unifying the representations of different modalities into the same semantic space is crucial for multi-modal understanding and generation. For instance, Instruct-BLIP (Dai et al., 2023) leverages a Q-Former to extract instruction-aware visual features as the input to a frozen LLM. However, without training the LLM, it obtains no new knowledge on how to solve multi-modal tasks. We believe that PEFT can act as the \"bridge\" to achieve cross-modal communication by slightly tuning existing LLMs, which effectively prevents the catastrophic forgetting of uni-modal knowledge. Other Topics. In addition to the above-mentioned topics, there are other topics that are worth exploring. Strategies such as adaptive learning rates and regularization methods specifically designed for PEFT can further accelerate and stabilize the training process. Apart from RLHF, examining parameter-efficient ways to address ethical considerations, such as knowledge security, fairness or bias mitigation (Fan et al., 2023), can contribute to the development of more reliable and unbiased LLMs. Due to space limitations, we do not elaborate." }, { "figure_ref": [], "heading": "Concluding Remarks", "publication_ref": [], "table_ref": [], "text": "In this position paper, we have highlighted the pressing need for better PEFT methods tailored for LLMs, underscoring the importance of addressing the challenges and open issues in PEFT and encompassing the exploration of novel efficient PEFT architectures, PEFT for different learning settings, and PEFT for multi-modal LLMs. By addressing these challenges, we can pave the way for more efficient and accessible PEFT techniques that are more practical for real-world applications." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "This paper is a position paper and does not present any specific new methodologies or approaches that could be employed to tackle the identified challenges. The limitations and research directions mentioned in the paper are based on the authors' perspectives and may not encompass the entire scope of issues related to PEFT for LLMs." }, { "figure_ref": [], "heading": "A Datasets and Experimental Settings", "publication_ref": [ "b34", "b12", "b5", "b38", "b29" ], "table_ref": [ "tab_2", "tab_3" ], "text": "Datasets. We evaluate the results on two standard neural generation datasets for the table-to-text task: E2E (Novikova et al., 2017) and WebNLG (Gardent et al., 2017), one math problem reasoning dataset: GSM8K (Cobbe et al., 2021), and one QA dataset: CoQA (Reddy et al., 2019).
Specifically, the E2E dataset contains approximately 50K examples featuring 8 distinct fields. It includes multiple test references for each source table and has an average output length of 22.9.
We employ the official evaluation script, which provides metrics such as BLEU, METEOR, ROUGE-L and CIDEr for assessment. The WebNLG dataset consists of 22K examples where the input x consists of sequences of (subject, property, object) triples. The average output length is 22.5. The training and validation splits encompass input descriptions of entities from 9 distinct DBpedia categories, such as Monuments. The test split is divided into two sections: the first half contains categories observed in the training data, while the second half includes 5 unseen categories for extrapolation evaluation. For evaluation, we also utilize the official evaluation script. GSM8K presents a challenging arithmetic reasoning task that language models frequently find difficult to tackle. CoQA is a challenging task that measures a model's ability to understand a text passage and answer a series of related questions. Experimental Settings. The experiments are conducted on a Linux server with two NVIDIA A100-80G GPUs. We choose Llama-2-7b-chat as the default LLM. In addition, Llama-2-13b-chat, together with the 7B and 13B versions of the Vicuna models, is leveraged for study.
Hyper-parameter Settings. At training time, we use AdamW (Loshchilov and Hutter, 2019) as the optimizer, and set its hyper-parameters (β1, β2) to (0.9, 0.98). The hyper-parameters we tune include the number of epochs, the batch size, the learning rate, and the sequence length. Hyper-parameter details are shown in Table 3 and Table 4. " } ]
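To make the hyper-parameter settings above (AdamW with (β1, β2) = (0.9, 0.98), learning rate 3e-6, weight decay 0.01, and up to 10 epochs) concrete, here is a hedged sketch of a plain-PyTorch fine-tuning loop over the trainable (e.g., LoRA) parameters. The data loader and model are assumed to follow the common causal-LM convention where each batch contains labels and the forward pass returns an object with a .loss attribute; none of the helper names below come from the authors' scripts.

```python
import torch

def build_optimizer(model, lr=3e-6, betas=(0.9, 0.98), weight_decay=0.01):
    """AdamW over the trainable (e.g., LoRA) parameters, mirroring Tables 3-4."""
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.AdamW(trainable, lr=lr, betas=betas, weight_decay=weight_decay)

def finetune(model, data_loader, num_epochs=10):
    """Minimal fine-tuning loop; assumes batches already contain input ids and labels."""
    optimizer = build_optimizer(model)
    model.train()
    for _ in range(num_epochs):
        for batch in data_loader:
            loss = model(**batch).loss   # causal-LM loss computed inside the model
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
```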
This paper delves into the pressing need for better Parameter-Efficient Fine-Tuning (PEFT) for Large Language Models (LLMs). While LLMs possess remarkable capabilities, their extensive parameter requirements and associated computational demands hinder their practicality and scalability for real-world applications. Our position paper highlights the current state of the field and the necessity of further study of the topic, and recognizes significant challenges and open issues that must be addressed to fully harness the powerful abilities of LLMs. These challenges encompass novel efficient PEFT architectures, PEFT for different learning settings, PEFT combined with model compression techniques, and the exploration of PEFT for multi-modal LLMs. By presenting this position paper, we aim to stimulate further research and foster discussions surrounding more efficient and accessible PEFT for LLMs.
Towards Better Parameter-Efficient Fine-Tuning for Large Language Models: A Position Paper
[ { "figure_caption": "and Performance of PEFT methods and FT over multiple generation tasks. Note: FT (full fine-tuning), Prompt (prompt-tuning), Prefix (prefix-tuning).", "figure_data": "MetricFTLoRAPrompt PrefixDataset: E2E (Text Generation)BLEU-10.5460 0.50000.44760.4552BLEU-20.3956 0.34860.29940.3252METEOR0.3448 0.32650.28160.2952ROUGE-L 0.3918 0.35690.31530.3312CIDEr0.9502 0.76460.51630.6003Dataset: WebNLG (Text Generation)BLEU-10.3025 0.32170.23520.2587BLEU-20.2109 0.21730.19430.2005METEOR0.2014 0.19920.16980.1754ROUGE-L 0.3045 0.28810.23980.2465CIDEr0.6207 0.50290.34650.4186Dataset: GSM8K (Math Problem)Accuracy0.2382 0.21542 0.15643 0.17454Dataset: CoQA (Question Answering)EM0.6089 0.59760.51930.5339F10.7004 0.69580.59300.6264", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "LoRA performance with different model sizes.", "figure_data": "MetricLlama-2 Llama-2 Vicuna Vicuna(7B)(13B)(7B)(13B)Dataset: E2E (Text Generation)BLEU-10.50000.50280.50380.5066BLEU-20.34860.35220.35260.3545METEOR0.32650.32280.32380.3247ROUGE-L 0.35690.35590.35580.3560CIDEr0.76460.79860.77830.8106Dataset: GSM8K (Math Problem)Accuracy0.23820.38350.16410.22730.50 0.51 0.52 0.53 BLUE Score5% E2E 10% E2E 100% E2E0.494816 LoRA Rank 3264 128Figure 1: The impact of data volume (5%, 10%, 100%of the E2E dataset) with different LoRA ranks.", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Hyper-parameter settings for individual datasets.", "figure_data": "DatasetEpoch Sequence LengthE2E10256WebNLG10256GSM8K10512CoQA52048ValueLearning Rate3e-6AdamW (β1, β2) (0.9, 0.98)Dropout0.1Weight Decay0.01Batch Size48", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Hyper-parameter settings for all datasets.", "figure_data": "", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" } ]
Chengyu Wang; Junbing Yan; Wei Zhang; Jun Huang
[ { "authors": "Stella Biderman; Hailey Schoelkopf; Quentin Gregory Anthony; Herbie Bradley; O' Kyle; Eric Brien; Mohammad Hallahan; Shivanshu Aflah Khan; Purohit; Edward Usvsn Sai Prashanth; Aviya Raff; Lintang Skowron; Oskar Sutawika; Van Der Wal", "journal": "", "ref_id": "b0", "title": "Pythia: A suite for analyzing large language models across training and scaling", "year": "2023" }, { "authors": " Pmlr", "journal": "", "ref_id": "b1", "title": "", "year": "" }, { "authors": "Sid Black; Stella Biderman; Eric Hallahan; Quentin Anthony; Leo Gao; Laurence Golding; Horace He; Connor Leahy; Kyle Mcdonell; Jason Phang; Michael Pieler; Shivanshu Usvsn Sai Prashanth; Laria Purohit; Jonathan Reynolds; Ben Tow; Samuel Wang; Weinbach", "journal": "", "ref_id": "b2", "title": "Gpt-neox-20b: An open-source autoregressive language model", "year": "2022" }, { "authors": "Guanzheng Chen; Fangyu Liu; Zaiqiao Meng; Shangsong Liang", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Revisiting parameterefficient tuning: Are we really there yet?", "year": "2022" }, { "authors": "Jun Chen; Deyao Zhu; Xiaoqian Shen; Xiang Li; Zechun Liu; Pengchuan Zhang; Raghuraman Krishnamoorthi; Vikas Chandra; Yunyang Xiong; Mohamed Elhoseiny", "journal": "", "ref_id": "b4", "title": "Minigpt-v2: large language model as a unified interface for vision-language multi-task learning", "year": "2023" }, { "authors": "Karl Cobbe; Vineet Kosaraju; Mohammad Bavarian; Mark Chen; Heewoo Jun; Lukasz Kaiser; Matthias Plappert; Jerry Tworek; Jacob Hilton; Reiichiro Nakano; Christopher Hesse; John Schulman", "journal": "", "ref_id": "b5", "title": "Training verifiers to solve math word problems", "year": "2021" }, { "authors": "Wenliang Dai; Junnan Li; Dongxu Li; Anthony Meng; Huat Tiong; Junqi Zhao; Weisheng Wang; Boyang Li; Pascale Fung; Steven C H Hoi", "journal": "", "ref_id": "b6", "title": "Instructblip: Towards general-purpose visionlanguage models with instruction tuning", "year": "2023" }, { "authors": "Tim Dettmers; Artidoro Pagnoni; Ari Holtzman; Luke Zettlemoyer", "journal": "", "ref_id": "b7", "title": "Qlora: Efficient finetuning of quantized llms", "year": "2023" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Ning Ding; Yujia Qin; Guang Yang; Fuchao Wei; Zonghan Yang; Yusheng Su; Shengding Hu; Yulin Chen; Chi-Min Chan; Weize Chen; Jing Yi; Weilin Zhao; Xiaozhi Wang; Zhiyuan Liu; Hai-Tao Zheng; Jianfei Chen; Yang Liu; Jie Tang; Juanzi Li; Maosong Sun", "journal": "Nat. Mac. 
Intell", "ref_id": "b9", "title": "Parameter-efficient fine-tuning of largescale pre-trained language models", "year": "2023" }, { "authors": "Mingyuan Fan; Cen Chen; Chengyu Wang; Jun Huang", "journal": "", "ref_id": "b10", "title": "On the trustworthiness landscape of state-of-the-art generative models: A comprehensive survey", "year": "2023" }, { "authors": "Tianyu Gao; Adam Fisch; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Making pre-trained language models better few-shot learners", "year": "2021" }, { "authors": "Claire Gardent; Anastasia Shimorina; Shashi Narayan; Laura Perez-Beltrachini", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Creating training corpora for NLG micro-planners", "year": "2017" }, { "authors": "Amir Gholami; Sehoon Kim; Zhen Dong; Zhewei Yao; Michael W Mahoney; Kurt Keutzer", "journal": "", "ref_id": "b13", "title": "A survey of quantization methods for efficient neural network inference", "year": "2021" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "", "ref_id": "b14", "title": "Parameter-efficient transfer learning for NLP", "year": "2019" }, { "authors": " Pmlr", "journal": "", "ref_id": "b15", "title": "", "year": "" }, { "authors": "Cheng-Yu Hsieh; Chun-Liang Li; Chih-Kuan Yeh; Hootan Nakhost; Yasuhisa Fujii; Alex Ratner; Ranjay Krishna; Chen-Yu Lee; Tomas Pfister", "journal": "", "ref_id": "b16", "title": "Distilling step-by-step! outperforming larger language models with less training data and smaller model sizes", "year": "2023" }, { "authors": "Edward J Hu; Yelong Shen; Phillip Wallis; Zeyuan Allen-Zhu; Yuanzhi Li; Shean Wang; Lu Wang; Weizhu Chen", "journal": "", "ref_id": "b17", "title": "Lora: Low-rank adaptation of large language models", "year": "2022" }, { "authors": "Shengding Hu; Ning Ding; Weilin Zhao; Xingtai Lv; Zhen Zhang; Zhiyuan Liu; Maosong Sun", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Opendelta: A plug-and-play library for parameterefficient adaptation of pre-trained models", "year": "2023-07-10" }, { "authors": "Srinivasan Iyer; Xi Victoria Lin; Ramakanth Pasunuru; Todor Mihaylov; Daniel Simig; Ping Yu; Kurt Shuster; Tianlu Wang; Qing Liu; Punit Singh Koura; Xian Li; Brian O' Horo; Gabriel Pereyra; Jeff Wang; Christopher Dewan; Asli Celikyilmaz; Luke Zettlemoyer; Ves Stoyanov", "journal": "", "ref_id": "b19", "title": "OPT-IML: scaling language model instruction meta learning through the lens of generalization", "year": "2022" }, { "authors": "Se Jung Kwon; Jeonghoon Kim; Jeongin Bae; Min Kang; Jin-Hwa Yoo; Baeseong Kim; Byeongwook Park; Jung-Woo Kim; Nako Ha; Dongsoo Sung; Lee", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Alphatuning: Quantization-aware parameterefficient adaptation of large-scale pre-trained language models", "year": "2022-12-07" }, { "authors": "Brian Lester; Rami Al-Rfou; Noah Constant", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "The power of scale for parameter-efficient prompt tuning", "year": "2021" }, { "authors": "Jonathan Li; Will Aitken; Rohan Bhambhoria; Xiaodan Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Prefix propagation: Parameterefficient tuning for long sequences", "year": "2023" }, { "authors": "Lisa Xiang; Percy Li; Liang", 
"journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Prefix-tuning: Optimizing continuous prompts for generation", "year": "2021" }, { "authors": "Ji Lin; Jiaming Tang; Haotian Tang; Shang Yang; Xingyu Dang; Song Han", "journal": "", "ref_id": "b24", "title": "AWQ: activationaware weight quantization for LLM compression and acceleration", "year": "2023" }, { "authors": "Haotian Liu; Chunyuan Li; Qingyang Wu; Yong Jae Lee", "journal": "", "ref_id": "b25", "title": "Visual instruction tuning", "year": "2023" }, { "authors": "Pengfei Liu; Weizhe Yuan; Jinlan Fu; Zhengbao Jiang; Hiroaki Hayashi; Graham Neubig", "journal": "ACM Comput. Surv", "ref_id": "b26", "title": "Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing", "year": "2023" }, { "authors": "Xiao Liu; Kaixuan Ji; Yicheng Fu; Weng Tam; Zhengxiao Du; Zhilin Yang; Jie Tang", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "P-tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks", "year": "2022" }, { "authors": "Xiao Liu; Yanan Zheng; Zhengxiao Du; Ming Ding; Yujie Qian; Zhilin Yang; Jie Tang", "journal": "", "ref_id": "b28", "title": "GPT understands, too", "year": "2021" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b29", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "Xinyin Ma; Gongfan Fang; Xinchao Wang", "journal": "", "ref_id": "b30", "title": "Llm-pruner: On the structural pruning of large language models", "year": "2023" }, { "authors": "Bonan Min; Hayley Ross; Elior Sulem; Amir Pouran; Ben Veyseh; Thien Huu Nguyen; Oscar Sainz; Eneko Agirre; Ilana Heintz; Dan Roth", "journal": "", "ref_id": "b31", "title": "Recent advances in natural language processing via large pre-trained language models: A survey", "year": "2021" }, { "authors": "Salman Mohamadi; Ghulam Mujtaba; Ngan Le; Gianfranco Doretto; Donald A Adjeroh", "journal": "", "ref_id": "b32", "title": "Chatgpt in the age of generative AI and large language models: A concise survey", "year": "2023" }, { "authors": "Niklas Muennighoff; Thomas Wang; Lintang Sutawika; Adam Roberts; Stella Biderman; Teven Le Scao; M Saiful; Sheng Bari; Zheng Xin Shen; Hailey Yong; Xiangru Schoelkopf; Dragomir Tang; Alham Radev; Khalid Fikri Aji; Samuel Almubarak; Zaid Albanie; Albert Alyafeai; Edward Webson; Colin Raff; Raffel", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Crosslingual generalization through multitask finetuning", "year": "2023" }, { "authors": "Jekaterina Novikova; Ondrej Dusek; Verena Rieser", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "The E2E dataset: New challenges for end-toend generation", "year": "2017" }, { "authors": " Openai", "journal": "", "ref_id": "b35", "title": "GPT-4 technical report", "year": "2023" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray; John Schulman; Jacob Hilton; Fraser Kelton; Luke Miller; Maddie Simens; Amanda Askell; Peter Welinder; Paul F Christiano; Jan Leike; Ryan Lowe", "journal": "", "ref_id": "b36", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Xipeng Qiu; Tianxiang Sun; Yige Xu; Yunfan Shao; Ning Dai; Xuanjing Huang", "journal": "", "ref_id": "b37", "title": "Pre-trained models for 
natural language processing: A survey", "year": "2020" }, { "authors": "Siva Reddy; Danqi Chen; Christopher D Manning", "journal": "Trans. Assoc. Comput. Linguistics", "ref_id": "b38", "title": "Coqa: A conversational question answering challenge", "year": "2019" }, { "authors": "Le Teven; Angela Scao; Christopher Fan; Ellie Akiki; Suzana Pavlick; Daniel Ilic; Roman Hesslow; Alexandra Castagné; François Sasha Luccioni; Matthias Yvon; Jonathan Gallé; Alexander M Tow; Stella Rush; Albert Biderman; Pawan Webson; Thomas Sasanka Ammanamanchi; Benoît Wang; Niklas Sagot; Albert Muennighoff; Olatunji Villanova Del Moral; Rachel Ruwase; Stas Bawden; Angelina Bekman; Iz Mcmillan-Major; Huu Beltagy; Lucile Nguyen; Samson Saulnier; Pedro Ortiz Tan; Victor Suarez; Hugo Sanh; Yacine Laurençon; Julien Jernite; Margaret Launay; Colin Mitchell; Aaron Raffel; Adi Gokaslan; Aitor Simhi; Alham Soroa; Amit Fikri Aji; Anna Alfassy; Ariel Kreisberg Rogers; Canwen Nitzav; Chenghao Xu; Chris Mou; Christopher Emezue; Colin Klamm; Leong; David Daniel Van Strien; Ifeoluwa Adelani", "journal": "", "ref_id": "b39", "title": "BLOOM: A 176b-parameter open-access multilingual language model", "year": "2022" }, { "authors": "Taylor Shin; Yasaman Razeghi; Robert L Logan; I V ; Eric Wallace; Sameer Singh", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Autoprompt: Eliciting knowledge from language models with automatically generated prompts", "year": "2020" }, { "authors": "Mingjie Sun; Zhuang Liu; Anna Bair; J Zico Kolter", "journal": "", "ref_id": "b41", "title": "A simple and effective pruning approach for large language models", "year": "2023" }, { "authors": "Ross Taylor; Marcin Kardas; Guillem Cucurull; Thomas Scialom; Anthony Hartshorn; Elvis Saravia; Andrew Poulton; Viktor Kerkez; Robert Stojnic", "journal": "", "ref_id": "b42", "title": "Galactica: A large language model for science", "year": "2022" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurélien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b43", "title": "Llama: Open and efficient language models", "year": "2023" }, { "authors": "Hugo Touvron; Louis Martin; Kevin Stone; Peter Albert; Amjad Almahairi; Yasmine Babaei; Nikolay Bashlykov; Soumya Batra; Prajjwal Bhargava; Shruti Bhosale; Dan Bikel; Lukas Blecher; Cristian Canton-Ferrer; Moya Chen; Guillem Cucurull; David Esiobu; Jude Fernandes; Jeremy Fu; Wenyin Fu; Brian Fuller; Cynthia Gao; Vedanuj Goswami; Naman Goyal; Anthony Hartshorn; Saghar Hosseini; Rui Hou; Hakan Inan; Marcin Kardas; Viktor Kerkez; Madian Khabsa; Isabel Kloumann; Artem Korenev; Punit Singh Koura; Marie-Anne Lachaux; Thibaut Lavril; Jenya Lee; Diana Liskovich; Yinghai Lu; Yuning Mao; Xavier Martinet; Todor Mihaylov; Pushkar Mishra; Igor Molybog; Yixin Nie; Andrew Poulton; Jeremy Reizenstein; Rashi Rungta; Kalyan Saladi; Alan Schelten; Ruan Silva; Eric Michael Smith; Ranjan Subramanian; Ellen Xiaoqing; Binh Tan; Ross Tang; Adina Taylor; Jian Williams; Puxin Xiang Kuan; Zheng Xu; Iliyan Yan; Yuchen Zarov; Angela Zhang; Melanie Fan; Sharan Kambadur; Aurélien Narang; Robert Rodriguez; Sergey Stojnic; Thomas Edunov; Scialom", "journal": "", "ref_id": "b44", "title": "Llama 2: Open foundation and fine-tuned chat models", "year": "2023" }, { "authors": "Mojtaba Valipour; Mehdi Rezagholizadeh; Ivan Kobyzev; Ali Ghodsi", "journal": 
"Association for Computational Linguistics", "ref_id": "b45", "title": "Dylora: Parameterefficient tuning of pre-trained models using dynamic search-free low-rank adaptation", "year": "2023" }, { "authors": "Chengyu Wang; Jianing Wang; Minghui Qiu; Jun Huang; Ming Gao", "journal": "Association for Computational Linguistics", "ref_id": "b46", "title": "Transprompt: Towards an automatic transferable prompting framework for few-shot text classification", "year": "2021" }, { "authors": "Yaqing Wang; Sahaj Agarwal; Subhabrata Mukherjee; Xiaodong Liu; Jing Gao; Ahmed Hassan Awadallah; Jianfeng Gao", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Adamix: Mixture-ofadaptations for parameter-efficient model tuning", "year": "2022" }, { "authors": "Shengqiong Wu; Hao Fei; Leigang Qu; Wei Ji; Tat-Seng Chua", "journal": "", "ref_id": "b48", "title": "Next-gpt: Any-to-any multimodal LLM", "year": "2023" }, { "authors": "Peng Xu; Mostofa Patwary; Shrimai Prabhumoye; Virginia Adams; Ryan Prenger; Wei Ping; Nayeon Lee; Mohammad Shoeybi; Bryan Catanzaro", "journal": "Association for Computational Linguistics", "ref_id": "b49", "title": "Evaluating parameter efficient learning for generation", "year": "2022" }, { "authors": "Yuhui Xu; Lingxi Xie; Xiaotao Gu; Xin Chen; Heng Chang; Hengheng Zhang; Zhengsu Chen; Xiaopeng Zhang; Qi Tian", "journal": "", "ref_id": "b50", "title": "Qa-lora: Quantizationaware low-rank adaptation of large language models", "year": "2023" }, { "authors": "Ziyun Xu; Chengyu Wang; Minghui Qiu; Fuli Luo; Runxin Xu; Songfang Huang; Jun Huang", "journal": "ACM", "ref_id": "b51", "title": "Making pre-trained language models end-to-end fewshot learners with contrastive prompt tuning", "year": "2023" }, { "authors": "Qinghao Ye; Haiyang Xu; Guohai Xu; Jiabo Ye; Ming Yan; Yiyang Zhou; Junyang Wang; Anwen Hu; Pengcheng Shi; Yaya Shi; Chenliang Li; Yuanhong Xu; Hehong Chen; Junfeng Tian; Qian Qi; Ji Zhang; Fei Huang", "journal": "", "ref_id": "b52", "title": "mplug-owl: Modularization empowers large language models with multimodality", "year": "2023" }, { "authors": "Elad Ben Zaken; Yoav Goldberg; Shauli Ravfogel", "journal": "Association for Computational Linguistics", "ref_id": "b53", "title": "Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models", "year": "2022" }, { "authors": "Aohan Zeng; Xiao Liu; Zhengxiao Du; Zihan Wang; Hanyu Lai; Ming Ding; Zhuoyi Yang; Yifan Xu; Wendi Zheng; Xiao Xia; Weng Lam Tam; Zixuan Ma; Yufei Xue; Jidong Zhai; Wenguang Chen; Zhiyuan Liu; Peng Zhang; Yuxiao Dong; Jie Tang", "journal": "", "ref_id": "b54", "title": "GLM-130B: an open bilingual pre-trained model", "year": "2023" }, { "authors": "Qingru Zhang; Minshuo Chen; Alexander Bukharin; Pengcheng He; Yu Cheng; Weizhu Chen; Tuo Zhao", "journal": "", "ref_id": "b55", "title": "Adaptive budget allocation for parameter-efficient fine-tuning", "year": "2023" }, { "authors": "Renrui Zhang; Jiaming Han; Aojun Zhou; Xiangfei Hu; Shilin Yan; Pan Lu; Hongsheng Li; Peng Gao; Yu Qiao", "journal": "", "ref_id": "b56", "title": "Llama-adapter: Efficient fine-tuning of language models with zero-init attention", "year": "2023" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona T Diab; Xian Li; Xi Victoria Lin; Todor Mihaylov; Myle Ott; Sam Shleifer; Kurt Shuster; Daniel Simig; Punit Singh Koura; Anjali Sridhar; Tianlu Wang; Luke Zettlemoyer", "journal": "", "ref_id": "b57", 
"title": "OPT: open pre-trained transformer language models", "year": "2022" }, { "authors": "Zhengyan Zhang; Yuxian Gu; Xu Han; Shengqi Chen; Chaojun Xiao; Zhenbo Sun; Yuan Yao; Fanchao Qi; Jian Guan; Pei Ke; Yanzheng Cai; Guoyang Zeng; Zhixing Tan; Zhiyuan Liu; Minlie Huang; Wentao Han; Yang Liu; Xiaoyan Zhu; Maosong Sun", "journal": "AI Open", "ref_id": "b58", "title": "CPM-2: large-scale cost-effective pre-trained language models", "year": "2021" }, { "authors": "Zhenru Zhang; Chuanqi Tan; Haiyang Xu; Chengyu Wang; Jun Huang; Songfang Huang", "journal": "Association for Computational Linguistics", "ref_id": "b59", "title": "Towards adaptive prefix tuning for parameter-efficient language model fine-tuning", "year": "2023" }, { "authors": "Kun Wayne Xin Zhao; Junyi Zhou; Tianyi Li; Xiaolei Tang; Yupeng Wang; Yingqian Hou; Beichen Min; Junjie Zhang; Zican Zhang; Yifan Dong; Chen Du; Yushuo Yang; Zhipeng Chen; Jinhao Chen; Ruiyang Jiang; Yifan Ren; Xinyu Li; Zikang Tang; Peiyu Liu; Jian-Yun Liu; Ji-Rong Nie; Wen", "journal": "", "ref_id": "b60", "title": "A survey of large language models", "year": "2023" } ]
[]
2023-11-22
[ { "figure_ref": [ "fig_0", "fig_1" ], "heading": "INTRODUCTION", "publication_ref": [ "b19", "b15", "b25", "b6", "b29", "b0", "b23", "b3", "b27", "b45", "b5", "b44", "b17", "b20", "b41", "b21", "b22", "b4", "b13", "b12" ], "table_ref": [], "text": "Aerial object detection focuses on identifying objects of interest, such as vehicles and airplanes, on the ground within aerial images and determining their categories. With the increasing availability of aerial imagery, this field has become a specific yet highly active area within computer vision (Ren et al., 2015;Lin et al., 2017;Tian et al., 2019;Ding et al., 2019;Xie et al., 2021).\nNevertheless, obtaining high-quality bounding box annotations demands significant human resources. Weakly supervised object detection (Bilen & Vedaldi, 2016;Tang et al., 2017;2018;Chen et al., 2020;Wan et al., 2018;Zhou et al., 2016;Diba et al., 2017;Zhang et al., 2018) has emerged as a solution, replacing bounding box annotations with more affordable image-level annotations. However, due to the absence of precise location information and challenges in distinguishing densely packed objects, image-level supervised methods exhibit limited performance in complex scenarios. In recent times, point-based annotations have gained widespread usage across various tasks, including object detection (Papadopoulos et al., 2017;Ren et al., 2020), localization (Yu et al., 2022;Ribera et al., 2019;Song et al., 2021), instance segmentation (Cheng et al., 2022), and action localization (Lee & Byun, 2021).\nOne intriguing question naturally arises: Can weakly supervised learning for oriented object detection be achieved solely using point annotations instead of rotated bounding box annotations? We explore this question using a mask proposal generator (e.g., SAM (Kirillov et al., 2023) as employed in this paper). One straightforward approach is to choose the mask with the highest associated score as the object. Following this, we apply the minimum bounding rectangle method to transform it into rotated bounding box annotations, which serves as our baseline.\nHowever, due to the lack of intra-class homogeneity, ambiguity arises between companion scores and the best-performing mask (the one with the highest Intersection over Union with the ground truth). This ambiguity leads to difficulties in selecting the best-performing mask based on companion scores, as illustrated in Fig. 1. In this paper, we introduce an architecture inspired by Multiple Instance Learning (MIL). By extracting mask proposal information during the Inspector Module, the network becomes proficient in classifying specific objects by aggregating information from annotated points across the entire dataset. This results in a semantic score that enhances the assessment of proposal masks. Additionally, we introduce the Constrainer Module, which takes into account the alignment between marked points and the center of gravity of a mask, providing offset penalties. After aggregating all these assessments, the best mask is selected using the new assessment criteria, and it is used to generate a rotated circumscribed bounding box via the Symmetry Axis Estimation (SAE) Module, as illustrated in Fig. 2. Our main contributions are as follows:\n1) Proposing of the P2RBox Network: We introduce the P2RBox network, which is a method based on point annotation and a mask generator for achieving point-supervised rotated object detection. 
To the best of our knowledge, this marks the first attempt to train a rotated object detector using point supervision. By combining with Oriented R-CNN, P2RBox achieves 62.26% on DOTA-v1.0 test dataset.\n2) High-Quality Mask Selection: Utilizing the Inspector Module, we introduce a semantic score for the masks, combined with the Constrainer Module, leading to the development of a comprehensive filtering approach. This, in turn, enhances the quality of the selected mask proposals from the mask generator and ultimately improves the quality of rotated bounding box annotations.\n3) Mask-to-Rotated Box Conversion: We design the SAE module based on the spectral theorem for symmetric matrices to convert the best mask proposal into rotated bounding boxes, enhancing the effectiveness of rotated object detection." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b15", "b6", "b29", "b18", "b30", "b12", "b11", "b14", "b7", "b2", "b19", "b41", "b10", "b12", "b2" ], "table_ref": [], "text": "RBox-supervised Oriented Object Detection. Notable approaches in this field include Rotated RetinaNet (Lin et al., 2017) with anchor-based methods, Rotated FCOS (Tian et al., 2019) using anchor-free techniques, and two-stage detectors like RoI Transformer (Ding et al., 2019), Oriented R-CNN (Xie et al., 2021), and ReDet (Han et al., 2021b). Performance enhancements have been seen with methods like R3Det (Yang et al., 2021b) and S2A-Net (Han et al., 2021a), leveraging alignment features. Most of these approaches use direct angle regression, but this can face challenges due to the periodic nature of angles, leading to strategies like modulated losses (Yang et al., 2019a;Qian et al., 2021), angle coders (Yang & Yan, 2020;Yang et al., 2021a;Yu & Da, 2023), and Gaussianbased losses (Yang et al., 2021c;d;2022b;c). RepPoint-based approaches (Yang et al., 2019b;Hou et al., 2023;Li et al., 2022) provide alternative solutions for oriented object detection by predicting a set of sample points defining the object's spatial extent.\nHBox-supervised oriented object detection. While the oriented bounding box can be derived from the segmentation mask, employing the HBox-Mask-RBox pipeline can be less efficient in terms of cost. A pioneering approach, H2RBox (Yang et al., 2022a), bypasses the segmentation step and directly detects RBoxes from HBox annotations. By leveraging HBox annotations for the same object in various orientations, the geometric constraints narrow down the possible angles for the object, making the detection more efficient. Additionally, the integration of a self-supervised branch in H2RBox helps filter out undesirable results, establishing an HBox-to-RBox paradigm. In an extended version, H2RBox-v2 (Yu et al., 2023), a new self-supervised branch further enhances the precision of object angle learning, resulting in improved performance.\nPoint-supervised object detection. Point-level annotation, a recent advancement, is efficient, taking about 1.87 seconds per image on the VOC dataset (Everingham et al., 2010), comparable to image-level annotation (1.5 seconds per image) and much less than bounding box annotation (34.5 seconds per image), especially rotated bounding box annotation. However, the time for point-level annotation may increase with more objects in the image. 
P2BNet (Chen et al., 2022) uses a coarseto-fine strategy, enhancing IoU with ground-truth by predicting pseudo boxes using point annotations and Faster R-CNN (Ren et al., 2015).\nIn a related context, (Yu et al., 2022) explores object localization with coarse point annotations, addressing point annotation's semantic variability through algorithms. Additionally, (He et al., 2023) predicts horizontal bounding boxes in remote sensing scenes using point annotations. The Segment Anything Model (SAM) (Kirillov et al., 2023) allows obtaining object masks with a simple click, but ensuring mask quality remains challenging. Combining P2BNet (Chen et al., 2022) and H2RBox-v2 (Yu et al., 2023) provides the final object orientation, but H2RBox-v2 requires precise circumscribed horizontal bounding boxes. Background noise may affect both P2BNet and H2RBox-v2, leading to a poor performance (line of P2BNet-H2RBox in Tab. 1). Therefore, P2RBox focuses on generating high-quality rotated bounding boxes, a domain that has yet to be extensively studied to date." }, { "figure_ref": [ "fig_1" ], "heading": "POINT-TO-ROTATED-BOX NETWORK", "publication_ref": [], "table_ref": [], "text": "As shown in Fig. 2, we design the P2RBox to establish a seamless connection between point annotations and rotated boxes through the generation, constraint, and inspection of mask proposals. Specifically, the annotated point located on an object serves as the prompt for producing initial mask proposals. Subsequently, a dedicated Constrainer Module is devised to discern and refine the most plausible masks. Building upon these filtered candidates, we introduce a novel Inspector Module designed to discern superior masks by intuitively capturing the semantic nuances embedded within the masks. Lastly, our improved mask-to-oriented-box module, named SAE, plays a pivotal component in facilitating the annotation transformation." }, { "figure_ref": [], "heading": "CONSTRAINER MODULE", "publication_ref": [ "b2" ], "table_ref": [], "text": "In many cases, the annotated point of an object is typically positioned in close proximity to the center of the mask (Chen et al., 2022). Leveraging this observation, we introduce a penalty term that quantifies the distance between the mask's center and the annotated point, thus facilitating the Centroid Offset Penalty. Let Radius represent the farthest Euclidean distance from any pixel on the mask to the annotation point, and dis denote the offset of the pixel's centroid on the mask from the annotation point. A penalty formula regarding the relative offset is designed as follows:\nS of f set = (1 -exp(-w • Radius + b)) • dis Radius . (1\n)\nWhile dis is scale-sensitive, we use dis/Radius to establish a scale-independent criterion. This approach works well for larger objects. However, consider small objects, we incorporate the term 1 -exp(-w • Radius + b) to increase tolerance. This adjustment is necessary because even a singlepixel offset during annotation can result in significant changes. The term 1 -exp(-w • Radius + b) (w set as 0.04, b set as 0.01, spectively) indicates that the coefficient is positively associated with the Radius. This suggests that the Constrainer operates under the assumption that as the dis/Radius ratio remains constant, a reduction in the value of Radius results in a corresponding decrease in S of f set . In our network, we only retain masks for which S of f set is less than the threshold thr1, set to 0.15 in our paper. 
During training, all masks meeting the criteria mentioned will be merged into a unified mask for subsequent steps." }, { "figure_ref": [ "fig_1" ], "heading": "INSPECTOR MODULE", "publication_ref": [ "b15" ], "table_ref": [], "text": "Utilizing the qualified masks derived from the Constrainer Module, the Inspector Module samples positive and negative points to guide the model in acquiring deeper perception of the semantic category associated with specific objects, thereby enhancing the assessment of the proposal masks.\nPoint Sampling. The four points bag or set (positive bags, negative bags, negative points and annotation points.) are constructed to train inspector module, which are described here. We denote the coordinates of an annotated point as a ∈ R 2 and its corresponding category as c ∈ {0, 1} K , where K represents the total number of categories. p = (p x , p y ) denotes a point on a feature map.\n1) Positive Bag Construction. In Fig. 2, with a relatively trustworthy mask in the neighborhood of a, we define Radius as the maximum Euclidean distance between the annotated point and pixels forming the mask. We define N ring-shaped sampling regions. Then we randomly sample u 0 points within each region, and obtain Sample(a, r). All sampled points of N ring-shaped regions are defined as points' bag of a, denoted as B in Eq. 2.\nRing(a, r) = p|p ∈ mask, r -1 N < ||p -a|| Radius <= r N , 1 <= r <= N.(2)\nSample(a, r) = {p i |p i ∈ Ring(a, r), p i is randomly selected}.\n(3)\nB = 1≤r≤N Sample(a, r),(4)\nwhere B is used for calculating the MIL loss for P2RBox training, and |Sample(a, r)| = u 0 (u 0 are number of points sampled for each ring).\n2) Negative Bag Construction. Background points located far from the object are easier for the network to learn. Conversely, the negative points along the object's boundary are more valuable samples, as they often reside on the critical boundary between the foreground and background. Hence, we design a negative point bag, training the model to discern boundary features. By selecting u 1 points in the margin pixels of the mask, the negative point bag of a j can be defined as:\nB neg = p i |p i ∈ mask margin , p i is randomly selected ,(5)\nwhere |B neg | = u 1 and mask margin can be obtained by calculating the non-zero points on the gradient map (implemented using first-order differences) of the mask.\n3) Negative Points. With annotated point a as a naturally positive reference, we introduce negative points to instruct the model in discerning background features. To obtain the negative points for a given mask, we follow a three-step process. Firstly, determine the circumscribed bounding box of the mask, denoted as (x, y, w, h, α). Secondly, increase both the height h and width w by a factor as ĥ = (1 + δ) • h and ŵ = (1 + δ) • w, where δ is the set to 0.05 in this paper. Lastly, the set of negative points, denoted as N , comprises the four vertices along with the midpoints of the four edges, i.e., \nn ij = (x + ŵ 2 • cos α • i - ĥ 2 • sin α • j, y + ŵ 2 • sin α • i + ĥ 2 • cos α • j), N = n ij | i, j ∈ {-1, 0, 1}, (i, j) ̸ = (0, 0) .(6\nS B = p∈B [S ins B ] p • [S cls B ] p , ∈ R K . (7\n)\nThis score will be used for the subsequent loss calculation.\n2) Total Loss. Object-level MIL loss is introduced to endow P2RBox the ability of finding semantic points around each annotated point. 
By combining information from similar objects throughout the entire dataset, it imparts discriminative capabilities to the features of that category, distinguishing between foreground and background. The objective function of P2RBox is a weighted summation of the three losses:\nL P 2RBox = L ann + L pos M IL + L neg M IL + L neg .(8)\nAnd L M IL , L ann and L neg are based on the focal loss (Lin et al., 2017):\nFL(S p , c) = K k=1 c k (1 -S p,k ) γ log(S p,k ) + (1 -c k )S γ p,k log(1 -S p,k ),(9)\nwhere γ is set as 2 following the standard focal loss. S p ∈ R K and c ∈ {0, 1} K are the predicted scores on all categories and the category label, respectively.\n3) Object-level MIL Loss. As mentioned above, S B is computed for a set of points in bag B. Each point's classification score and instance score are generated independently. The final score is obtained by summing the element-wise products of these scores. Based on bag score S B , the MIL loss is given by the focal loss with the category label c of a:\nL pos M IL = 1 M M j=1 FL(S Bj , c j ).(10)\nIn a similar manner, L neg M IL can be computed by applying the same procedure to B neg . 4) Annotaion Loss. Due to the absence of positive samples with point annotations, the annotated points serve as natural positive samples that guide the model in learning about the foreground of each category. Hence, we introduce the annotation loss L ann to provide the network with accurate positive samples for supervision. L ann ensures a high score for annotated points. A classification branch with shared weights, as described above, is utilized to compute L ann in the following manner.\nS a = σ(f c cls (F a )) ∈ R K , L ann = 1 M M j=1 FL(S aj , c j ),(11)\nwhere M is the number of objects in an image, σ served as an activation function.\n5) Negative Loss. Conventional MIL employs binary logarithmic loss and considers proposals from other categories as negative samples. Due to the absence of explicit supervision from background samples, it struggles to effectively suppress the negative samples during MIL training. To address this, we calculate the negative loss, denoted as L neg , which constitutes the negative component of the focal loss. The calculation is as follows, with γ set to 2, based on the set N j .\nS p = σ 1 (f c cls (F p )) ∈ R K ; L neg = 1 8 * M M j=1 p∈Nj S γ p • log(1 -S p ).(12)\nMask Quality Assessment. By assimilating location and category information from annotated points, the Inspector Module gains classification capability. This allows for the prediction of semantic information, enhancing its quality assessment of the mask. Additionally, the classification scores of the marginal points (i.e., negative bag) are taken into account and integrated to derive the semantic score of the mask:\nS smt = α 1 • mask cls -α 2 • mask cls margin ,(13)\nwhere, mask cls represents the mean of classification scores across all pixels within the mask that pertain to the identical class as the annotated point. Similarly, the computation of mask margin follows a akin approach.\nWe enhance the mask selection process by incorporating mask-associated scores with the center of mass deviation penalty introduced by the Constrainer Module. 
This results in a comprehensive weighted average score, derived from three quantified scores, which surpasses the performance achieved by using mask-associated scores alone.\nScore = S mask -β 1 • S of f set + β 2 • S smt ,(14)\nwhere S mask is accompanied by its inherent properties at the moment of its generation given by SAM, S of f set is defined in Constrainer Module.\nDuring the testing phase, we straightforwardly select the mask with the highest score as the object's mask, which is subsequently converted into a rotated bounding box." }, { "figure_ref": [ "fig_0" ], "heading": "SYMMETRY AXIS ESTIMATION.", "publication_ref": [], "table_ref": [], "text": "Symmetry Axis Estimation (SAE) Module primarily addresses a specific yet prevalent issue. In the case of a symmetrical object, its symmetry axes are typically considered as its orientation. As we are aware, rotated bounding boxes offer a more precise means of annotation and can also convey its orientation. Generally, an object's orientation aligns with at least one edge of the minimum circumscribed rectangle. However, this isn't always the case. For instance, consider an object like a plane; even though it possesses an axis of symmetry, its smallest enclosing rectangle does not have any edges parallel to its orientation, as shown in the last column of Fig. 1. Symmetry Axis Direction. Assuming the presence of a symmetric object, let's denote all its pixel coordinates as P , which forms an n × 2 matrix. By translating its center of mass to the origin, we ensure that the origin always coincides with its symmetric axis. We assert that the eigenvectors of the matrix P T • P correspond to the object's symmetry and vertical directions.\nProof of The Assertion. In accordance with this condition, if the target exhibits symmetry along an axis passing through the origin, then there exists a rotation matrix, also referred to as an orthogonal matrix R, such that:\nP • E = Q • R, E = 1 0 0 1 . (15\n)\nHere, we have a matrix R with dimensions 2 × 2, representing a rotated matrix that aligns the axis of symmetry with the x-axis. Consequently, we can express Q as follows:\nQ = x 1 x 1 . . . x n x n y 1 -y 1 . . . y n -y n T .(16)\nTo find the matrix R, we multiply both sides of the above equation by its transpose, yielding:\nE T • P T • P • E = R T • Q T • Q • R.(17)\nThis further simplifies to:\nP T • P = R T • 2 × n i=1 x 2 i 0 0 2 × n i=1 y 2 i • R. (18\n)\nBy spectral theorem for symmetric matrices, Eq. 18 demonstrates that Q T • Q and P T • P are similar matrices, with R serving as the similarity transformation matrix and also the eigenvector matrix of P T • P because Q T • Q being a diagonal matrix. This confirms our assertion: The eigenvectors of the matrix P T • P correspond to the object's symmetry direction and its vertical direction.\nIn the SAE Module, we generate oriented bounding rectangles for categories PL and HC, following their symmetry axes. For simplicity, other categories continue to use minimum circumscribed bounding boxes." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [ "b28" ], "table_ref": [], "text": "To evaluate our proposed method, we conduct extensive experiments on the most widely-used oriented object detection datasets, namely DOTA (Xia et al., 2018)." }, { "figure_ref": [], "heading": "DATASETS AND IMPLEMENT DETAILS", "publication_ref": [ "b1" ], "table_ref": [], "text": "DOTA. 
There are 2,806 aerial images-1,411 for training, 937 for validation, and 458 for testing, as annotated using 15 categories with 188,282 instances in total. We follow the preprocessing in MMRotate-The high-resolution images are split into 1,024 × 1,024 patches with an overlap of 200 pixels for training, and the detection results of all patches are merged to evaluate the performance. We use training and validation sets for training and the test set for testing. The detection performance is obtained by submitting testing results to DOTA's evaluation server. We report the AP 50 which uses the IoU between the predicted rotated boxes and rotated ground-truth bounding boxes.\nTraining Details. P2RBox predicts the rotated bounding boxes from single point annotations and uses the predicted boxes to train three classic oriented detectors (RetinaNet, FCOS, Oriented R-CNN) with standard settings. All the fully-supervised models are trained based on a single GeForce RTX 2080Ti GPU. Our model is trained with SGD (Bottou, 2012) on a single GeForce RTX 2080Ti GPU. The initial learning rate is 2.5 × 10 -3 with momentum 0.9 and weight decay being 0.0001. And the learning rate will warm up for 500 iterations. As shown in Tab.1, our model's performance across many categories is astonishing. In pointsupervised detection, to demonstrate the effectiveness of the proposed method in our model, we designed a parameter-free rotation box annotation generator based on SAM, which directly retains the highest-score mask and computes the minimum bounding rectangle to obtain the rotated bounding box. By comparing the results of pseudo-label training on three strong supervised detectors, P2RBox model outperforms our baseline in every single category combined with any detector (55.50% vs. 47.91% on RetinaNet,58.40% vs. 50.84% on FCOS,62.26% vs. 52.75% on Oriented R-CNN)." }, { "figure_ref": [ "fig_2" ], "heading": "MAIN RESULT", "publication_ref": [], "table_ref": [], "text": "Our mAP 50 is 62.26% combined with Oriented R-CNN, which exceeds the previous methods with the H2Rbox-based detector, e.g., BoxInst-RBox (Tian et al., 2021). Compared with the H2Rbox, P2RBox (Oriented R-CNN) achieves comparable performance in some categories, such as GTF and HC. Examples of detection results on the DOTA dataset using P2RBox (Oriented R-CNN) are shown in Fig. 3 4.3 ABLATION STUDY The ablation study's mAP results are based on P2RBox (RetinaNet), mIoU results are calculated between ground truth and pseudo rotated box. The following experiments assume that α 1 = α 2 = β 1 = β 2 = 1.0 and the SAE method is applied on PL and HC if not specified.\nP2RBox Modules. As depicted in Tab. 2, we evaluated various strategies of the P2RBox model, including Inspector Module, Constrainer Module, and the Symmetry Axis Estimation Module. Our experiments reveal the following: The first row of Tab. 2 indicates that we have confidence in the score provided by SAM for selecting the highest-scoring mask proposal. Then, we transform this mask proposal into a rotated box by calculating its minimum bounding rectangle, resulting in an mAP of 47.91%. Subsequent experiments demonstrate the performance improvements achieved by each module.\nAssessment Score. Our mask assessment score consists of three weighted quantified scores in Eq. 14. The weight parameter β 1 , β 2 is used to get the final score for selecting. As shown in Tab. 3, our model demonstrates insensitivity to parameter adjustments, showing robustness. Semantic Score. 
In the Inspector Module, to harness the full potential of the network's semantic capabilities, we conducted experiments concerning the parameters α 1 and α 2 . As demonstrated in Tab. 4, when we set α 2 = 0, the mIoU decreases from 60.68% to 57.90%, when set α 1 = 0, the mIoU decreases from 60.68% to 52.74%. This indicates that the model possesses the ability to distinguish between the margin and inner pixels. The subsequent experiments demonstrate that the mIoU is not significantly affected, and it is acceptable to set α 1 = α 2 = 1.0. SAE Method. To show the full performance of SAE method, in major categories. As shown in Tab. 5, our SAE method exhibits significant improvements compared with minimum bounding rectangle in both plane (PL) and helicopter (HC), while keeping other categories basically the same. " }, { "figure_ref": [ "fig_3" ], "heading": "APPENDIX", "publication_ref": [], "table_ref": [], "text": "Upper Limit in our method In fact, we don't create new mask proposals; we simply choose a mask from the SAM generator using our criteria. As a result, there's a performance limit. When selecting based on IoU with the ground truth, the IoU results are displayed in Tab. 6. This result demonstrates that we have outperformed the SAM model in every category compared to simply selecting the highest score. It also highlights that for some categories, the performance remains poor due to very low upper limits, despite significant improvements from the baseline.\nDetails when using Symmetry Axis Estimation Module. Tab. 7 provides detailed information.\nThe SAE method shows a slight decrease in IoU for some categories, which is negligible. However, it experiences a significant drop in the BD category. The issue arises because the annotation or ground truth for BD does not align with its symmetry axis, even when a symmetry axis is present, as illustrated in Fig. 4. The limitations on the upper performance bound for the Bridges. category are quite restrictive. This is primarily attributed to the distinctive nature of its definition, which deviates from the conventional object definitions. In the case of bridges, they are defined as road segments that span across bodies of water, leading to a situation where there are insufficient discernible pixel variations between the left and right ends of the bridge. Consequently, this characteristic significantly hampers the performance of the SAM model. As a result, it imposes a notable constraint on the potential performance within this category. This challenge is further exemplified in Fig. 5.\nDetails in Inspector Module. We designed the coefficients of the Inspector Module to address challenges posed by small-scale objects. In cases where a small-scale object assimilates excessive background context, the increment in the denominator term Radius within the coefficient formulation leads to a reduction in the S of f set . Consequently, as shown in Fig. 6this deviation in S of f set guides the bias observed in our selection of outcomes.\nWhat Result in Bad Bases using minimum bounding rectangle. To illustrate this without loss of generality, let's consider an object that exhibits symmetry about the y-axis (see Fig. 7). We'll denote three points on the oriented circumscribed bounding box as a, b, and c, respectively, and their corresponding mirror points as â, b, and ĉ." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "The second inequality is derived based on the area requirement. 
For the more general case, as shown in Fig. 7 (b). By finding two tangent lines with fixed slopes (1 and -1), where α • h is the distance between the intersections of these lines with the right green edge, we obtain an equation regarding the length of the diagonal:\nSpecifically, if the width is equal to the height, w = h, the inequality simplifies to:\nIn conclusion, taking an airplane as an example, as shown in the last column of Fig. 1, due to the intersection ratio α < √ 2 -1, ambiguity arises between the minimum bounding rectangle and the oriented bounding rectangle, which is well addressed by Symmetry Axis Estimation Module. " } ]
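The Symmetry Axis Estimation step described in the sections above reduces, in practice, to an eigendecomposition of the 2x2 scatter matrix P^T P of the centered mask pixel coordinates (Eqs. 15-18). The NumPy sketch below is an illustrative reading of that step, not the authors' released code; the function name, the choice of the dominant eigenvector as the symmetry direction, and the use of the pixel centroid as the box centre are simplifying assumptions made here.

import numpy as np

def sae_rotated_box(mask):
    # Pixel coordinates of the binary mask, centred on their mean so that
    # the origin lies on the symmetry axis, as assumed in the SAE module.
    ys, xs = np.nonzero(mask)
    P = np.stack([xs, ys], axis=1).astype(float)
    c = P.mean(axis=0)
    P0 = P - c
    # By the spectral theorem (Eq. 18), the eigenvectors of P^T P give the
    # symmetry direction and its perpendicular; the dominant eigenvector is
    # taken here as the orientation (an illustrative choice).
    eigvals, eigvecs = np.linalg.eigh(P0.T @ P0)
    axis = eigvecs[:, np.argmax(eigvals)]
    angle = float(np.arctan2(axis[1], axis[0]))
    # Coordinates along and across the symmetry axis; their extents give the
    # oriented circumscribed box aligned with that axis.
    along = P0 @ np.array([np.cos(angle), np.sin(angle)])
    across = P0 @ np.array([-np.sin(angle), np.cos(angle)])
    w = along.max() - along.min()
    h = across.max() - across.min()
    return c[0], c[1], w, h, angle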
Oriented object detection, a specialized subfield in computer vision, finds applications across diverse scenarios, excelling particularly when dealing with objects of arbitrary orientations. Conversely, point annotation, which treats objects as single points, offers a cost-effective alternative to rotated and horizontal bounding boxes but sacrifices performance due to the loss of size and orientation information. In this study, we introduce the P2RBox network, which leverages point annotations and a mask generator to create mask proposals, followed by filtration through our Inspector Module and Constrainer Module. This process selects high-quality masks, which are subsequently converted into rotated box annotations for training a fully supervised detector. Specifically, we've thoughtfully crafted an Inspector Module rooted in multi-instance learning principles to evaluate the semantic score of masks. We've also proposed a more robust mask quality assessment in conjunction with the Constrainer Module. Furthermore, we've introduced a Symmetry Axis Estimation (SAE) Module inspired by the spectral theorem for symmetric matrices to transform the top-performing mask proposal into rotated bounding boxes. P2RBox performs well with three fully supervised rotated object detectors: RetinaNet, Rotated FCOS, and Oriented R-CNN. By combining with Oriented R-CNN, P2RBox achieves 62.26% on DOTA-v1.0 test dataset. As far as we know, this is the first attempt at training an oriented object detector with point supervision.
P2RBOX: A SINGLE POINT IS ALL YOU NEED FOR ORIENTED OBJECT DETECTION
[ { "figure_caption": "Figure 1 :1Figure 1: Visual comparison of the highest confidence mask and its corresponding rotated box annotation generated by mask proposal generator (SAM) and P2RBox. The second row displays the results of the baseline method, while the Rotated boxes in the last row are generated by the SAE module. (Best viewed in color).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The overview of training process of P2RBox, consisting of mask generator, Constrainer Module and Inspector Module. Initially, mask proposals are generated by a generator (SAM). The Constrainer Module selects high-quality masks to create the union mask. Four point sets are constructed according the union mask to train Inspector Moudle, which pursuing dataset-wide category consistency. The trained network will used to assesses mask quality, selecting the best proposals for detector training supervision. (Best viewed in color)", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Examples of detection results on the DOTA dataset using P2RBox (Oriented R-CNN).", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: GT, minimum and SAE on category BD.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "P2RBox Loss. In following, we gives the details of the objective function of training P2RBox network based on the positive bag B and its score S B , negative bag B neg and its score S Bneg , and negative points N . Based on the designed loss as described, after training, the network has acquired pixel-level classification capability.1) Bag Score. To facilitate P2RBox in determining whether the points in B belong to the same category as a, we treat the points in bag B as positive instances. We extract the feature vectors {F p |p ∈ B}. For each p ∈ B, two separate fully connected layers with independent weights generate the classification score and instance score, denoted as [S ins B ] p and [S cls B ] p . 
To obtain the final classification score for B, we calculate it by summing the element-wise products of [S ins B ] p and [S cls B ] p as follows:", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Results of each category on the DOTA-v1.0 test set.", "figure_data": "MethodPLBDBR GTF SVLVSHTCBCST SBF RA HASPHC mAP50Rbox-supervised:RetinaNet (2017)89.1 74.5 44.7 72.2 71.8 63.6 74.9 90.8 78.7 80.6 50.5 59.2 62.9 64.4 39.7 67.83FOCS (2019)88.4 75.6 48.0 60.1 79.8 77.8 86.7 90.1 78.2 85.0 52.8 66.3 64.5 68.3 40.3 70.78Oriented R-CNN (2021) 89.5 82.1 54.8 70.9 78.9 83.0 88.2 90.9 87.5 84.7 64.0 67.7 74.9 68.8 52.3 75.87RepPoints (2019b)84.8 73.4 40.7 56.5 71.6 52.2 73.4 90.6 76.3 85.2 58.8 61.4 54.9 64.4 18.6 64.18Faster R-CNN (2015)88.4 73.1 44.9 59.1 73.3 71.5 77.1 90.8 78.9 83.9 48.6 63.0 62.2 64.9 56.2 69.05RoI Transformer (2019) 88.6 78.5 43.4 75.9 68.8 73.7 83.6 90.7 77.3 81.5 58.4 53.5 62.8 58.9 47.7 69.56DAL (2021)88.7 76.6 45.1 66.8 67.0 76.8 79.7 90.8 79.5 78.5 57.7 62.3 69.1 73.1 60.1 71.44RSDet (2021)89.8 82.9 48.6 65.2 69.5 70.1 70.2 90.5 85.6 83.4 62.5 63.9 65.6 67.2 68.0 72.20R 3 Det (2021b)88.8 83.1 50.9 67.3 76.2 80.4 86.7 90.8 84.7 83.2 62.0 61.4 66.9 70.6 53.9 73.79Hbox-supervised:BoxInst-RBox (2021)68.4 40.8 33.1 32.3 46.9 55.4 56.6 79.5 66.8 82.1 41.2 52.8 52.8 65.0 30.0 53.59H2RBox (2022a)88.5 73.5 40.8 56.9 77.5 65.4 77.9 90.9 83.2 85.3 55.3 62.9 52.4 63.6 43.3 67.82H2RBox-v2 (2023)89.0 74.4 50.0 60.5 79.8 75.3 86.9 90.9 85.1 85.0 59.2 63.2 65.2 70.5 49.7 72.31Point-supervised:P2BNet-H2RBox2.3 33.8 1.23.6 36.7 10.2 22.3 0.21.6 24.5 9.1 44.4 10.5 34.8 20.9 17.08SAM (RetinaNet)79.7 64.6 11.1 45.6 67.9 47.7 74.6 81.1 6.6 75.7 20.0 30.6 36.9 50.5 26.1 47.91SAM (FCOS)78.2 61.7 11.7 45.1 68.7 64.8 78.6 80.9 5.0 77.0 16.1 31.8 45.7 53.4 44.2 50.84SAM (Oriented R-CNN)79.0 62.6 8.6 55.8 68.4 67.3 77.2 79.5 4.4 77.1 26.9 28.8 49.2 55.2 51.3 52.75P2RBox (RetinaNet)86.9 70.0 12.5 47.9 70.4 53.9 75.4 88.8 44.1 77.4 41.9 33.4 41.2 53.9 34.8 55.50P2RBox (FCOS)86.7 66.0 14.5 47.4 72.4 71.3 78.6 89.7 45.8 79.6 44.6 34.8 48.4 55.4 40.8 58.40P2RBox (Oriented R-CNN) 87.7 72.6 13.9 63.1 70.1 74.7 82.8 90.1 46.4 81.8 53.0 33.5 57.2 56.4 50.1 62.26", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation with main modules. Ins for Inspector, Cons for Constrainer.", "figure_data": "Ins Cons SAE mIoU mAP---54.86 47.91✓ --55.98 49.33-✓-58.77 53.49✓ ✓-59.68 54.38✓ ✓ ✓ 60.68 55.50", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Varying β 1 , β 2 for assessment score.", "figure_data": "β 1 β 2 mIoU1.2 1.2 60.691 1 60.680.8 0.8 60.641.2 0.8 60.620.8 1.2 60.13", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Varying α 1 , α 2 for semantic score.", "figure_data": "α 1 α 2 mIoU1.2 1.2 60.441 1 60.680.8 0.8 60.871.2 0.8 60.490.8 1.2 59.29", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "IoU results in major categories on DOTA using different methods.", "figure_data": "MethodPLBRSVLVSHBCSBFHAHCminimum-only57.8522.0165.4269.2267.9744.8066.9557.3057.77SAE-only71.2221.8065.4669.1268.1543.8064.9157.4259.545 CONCLUSIONThis paper introduces P2RBox, the first point-supervised oriented object detector to our best knowl-edge. 
P2RBox distinguishes features through multi-instance learning, introduces a novel method", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "IoU result of SAM (highest score), P2RBox, ceiling (always choose the highest IoU using SAE on PL and HC while others minimum).", "figure_data": "MethodPLBDBRGTFSVLVSHTCBCSTSBFRAHASPHCmIoUSAM55.7060.7217.8562.6563.7965.9067.0678.3825.5457.8746.1248.4752.2660.2056.0454.57P2RBox71.2266.1022.0164.8365.4269.2267.9780.7044.8058.4966.9552.2257.3063.5059.5460.68IoU-highest74.0870.3926.2378.5369.6173.4874.9183.4347.1464.6170.0858.3766.5166.8164.1165.89", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Different mask2rbox method IoU results.", "figure_data": "MethodPLBDBRGTFSVLVSHTCBCSTSBFRAHASPHCmIoUminimum-only57.8566.1022.0164.8365.4269.2267.9780.7044.8058.4966.9552.2257.3063.5057.7759.68SAE-only71.2258.1421.8064.9465.4669.1268.1580.3743.8056.0064.9152.8757.4262.9559.5459.78", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" } ]
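The table caption above (tab_0) describes how the bag score of Eq. 7 is formed: two fully connected heads with independent weights produce per-point classification and instance scores, and the bag score is the sum over the bag of their element-wise product. A minimal PyTorch sketch of such a head pair follows; the sigmoid and softmax activations are assumptions in the spirit of standard MIL heads, since the quoted text only fixes the product-and-sum aggregation.

import torch
import torch.nn as nn

class BagScoreHead(nn.Module):
    # Illustrative MIL head in the spirit of Eq. 7: S_B = sum_p S_ins[p] * S_cls[p].
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.cls_fc = nn.Linear(feat_dim, num_classes)  # classification branch
        self.ins_fc = nn.Linear(feat_dim, num_classes)  # instance (selection) branch

    def forward(self, bag_feats):
        # bag_feats: (num_points_in_bag, feat_dim) features of one point bag B.
        s_cls = torch.sigmoid(self.cls_fc(bag_feats))          # per-point class scores
        s_ins = torch.softmax(self.ins_fc(bag_feats), dim=0)   # weights across the bag
        return (s_cls * s_ins).sum(dim=0)                      # (num_classes,) bag score S_B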
Guangming Cao; Xuehui Yu; Wenwen Yu; Xumeng Han; Xue Yang; Guorong Li; Jianbin Jiao; Zhenjun Han
[ { "authors": "Hakan Bilen; Andrea Vedaldi", "journal": "", "ref_id": "b0", "title": "Weakly supervised deep detection networks", "year": "2016" }, { "authors": "Léon Bottou", "journal": "Springer", "ref_id": "b1", "title": "Stochastic gradient descent tricks", "year": "2012" }, { "authors": "Pengfei Chen; Xuehui Yu; Xumeng Han; Najmul Hassan; Kai Wang; Jiachen Li; Jian Zhao; Humphrey Shi; Zhenjun Han; Qixiang Ye", "journal": "Springer", "ref_id": "b2", "title": "Point-to-box network for accurate object detection via single point supervision", "year": "2022" }, { "authors": "Ze Chen; Zhihang Fu; Rongxin Jiang; Yaowu Chen; Xian-Sheng Hua", "journal": "", "ref_id": "b3", "title": "Slv: Spatial likelihood voting for weakly supervised object detection", "year": "2020" }, { "authors": "Bowen Cheng; Omkar Parkhi; Alexander Kirillov", "journal": "", "ref_id": "b4", "title": "Pointly-supervised instance segmentation", "year": "2022" }, { "authors": "Ali Diba; Vivek Sharma; Ali Pazandeh; Hamed Pirsiavash; Luc Van Gool", "journal": "", "ref_id": "b5", "title": "Weakly supervised cascaded convolutional networks", "year": "2017" }, { "authors": "Jian Ding; Nan Xue; Yang Long; Gui-Song Xia; Qikai Lu", "journal": "", "ref_id": "b6", "title": "Learning roi transformer for oriented object detection in aerial images", "year": "2019" }, { "authors": "Mark Everingham; Luc Van Gool; K I Christopher; John Williams; Andrew Winn; Zisserman", "journal": "International journal of computer vision", "ref_id": "b7", "title": "The pascal visual object classes (voc) challenge", "year": "2010" }, { "authors": "Jiaming Han; Jian Ding; Jie Li; Gui-Song Xia", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b8", "title": "Align deep features for oriented object detection", "year": "2021" }, { "authors": "Jiaming Han; Jian Ding; Nan Xue; Gui-Song Xia", "journal": "", "ref_id": "b9", "title": "Redet: A rotation-equivariant detector for aerial object detection", "year": "2021" }, { "authors": "Shitian He; Huanxin Zou; Yingqian Wang; Boyang Li; Xu Cao; Ning Jing", "journal": "", "ref_id": "b10", "title": "Learning remote sensing object detection with single point supervision", "year": "2023" }, { "authors": "Liping Hou; Ke Lu; Xue Yang; Yuqiu Li; Jian Xue", "journal": "Remote Sensing", "ref_id": "b11", "title": "G-rep: Gaussian representation for arbitraryoriented object detection", "year": "2023" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo", "journal": "", "ref_id": "b12", "title": "Segment anything", "year": "2023" }, { "authors": "Pilhyeon Lee; Hyeran Byun", "journal": "", "ref_id": "b13", "title": "Learning action completeness from points for weakly-supervised temporal action localization", "year": "2021" }, { "authors": "Wentong Li; Yijie Chen; Kaixuan Hu; Jianke Zhu", "journal": "", "ref_id": "b14", "title": "Oriented reppoints for aerial object detection", "year": "2022" }, { "authors": "Tsung-Yi Lin; Priya Goyal; Ross Girshick; Kaiming He; Piotr Dollár", "journal": "", "ref_id": "b15", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "Qi Ming; Zhiqiang Zhou; Lingjuan Miao; Hongwei Zhang; Linhao Li", "journal": "", "ref_id": "b16", "title": "Dynamic anchor learning for arbitrary-oriented object detection", "year": "2021" }, { "authors": " Dim P Papadopoulos; Frank Jasper Rr Uijlings; Vittorio Keller; Ferrari", "journal": "", "ref_id": "b17", 
"title": "Training object class detectors with click supervision", "year": "2017" }, { "authors": "Wen Qian; Xue Yang; Silong Peng; Junchi Yan; Yue Guo", "journal": "", "ref_id": "b18", "title": "Learning modulated loss for rotated object detection", "year": "2021" }, { "authors": "Kaiming Shaoqing Ren; Ross He; Jian Girshick; Sun", "journal": "", "ref_id": "b19", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "Zhongzheng Ren; Zhiding Yu; Xiaodong Yang; Ming-Yu Liu; Alexander G Schwing; Jan Kautz", "journal": "Springer", "ref_id": "b20", "title": "Ufo 2: A unified framework towards omni-supervised object detection", "year": "2020" }, { "authors": "Javier Ribera; David Guera; Yuhao Chen; Edward J Delp", "journal": "", "ref_id": "b21", "title": "Locating objects without bounding boxes", "year": "2019" }, { "authors": "Qingyu Song; Changan Wang; Zhengkai Jiang; Yabiao Wang; Ying Tai; Chengjie Wang; Jilin Li; Feiyue Huang; Yang Wu", "journal": "", "ref_id": "b22", "title": "Rethinking counting and localization in crowds: A purely pointbased framework", "year": "2021" }, { "authors": "Peng Tang; Xinggang Wang; Xiang Bai; Wenyu Liu", "journal": "", "ref_id": "b23", "title": "Multiple instance detection network with online instance classifier refinement", "year": "2017" }, { "authors": "Peng Tang; Xinggang Wang; Song Bai; Wei Shen; Xiang Bai; Wenyu Liu; Alan Yuille", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b24", "title": "Pcl: Proposal cluster learning for weakly supervised object detection", "year": "2018" }, { "authors": "Chunhua Zhi Tian; Hao Shen; Tong Chen; He", "journal": "", "ref_id": "b25", "title": "Fcos: Fully convolutional one-stage object detection", "year": "2019" }, { "authors": "Chunhua Zhi Tian; Xinlong Shen; Hao Wang; Chen", "journal": "", "ref_id": "b26", "title": "Boxinst: High-performance instance segmentation with box annotations", "year": "2021" }, { "authors": "Fang Wan; Pengxu Wei; Jianbin Jiao; Zhenjun Han; Qixiang Ye", "journal": "", "ref_id": "b27", "title": "Min-entropy latent model for weakly supervised object detection", "year": "2018" }, { "authors": "Gui-Song Xia; Xiang Bai; Jian Ding; Zhen Zhu; Serge Belongie; Jiebo Luo; Mihai Datcu; Marcello Pelillo; Liangpei Zhang", "journal": "", "ref_id": "b28", "title": "Dota: A large-scale dataset for object detection in aerial images", "year": "2018" }, { "authors": "Xingxing Xie; Gong Cheng; Jiabao Wang; Xiwen Yao; Junwei Han", "journal": "", "ref_id": "b29", "title": "Oriented r-cnn for object detection", "year": "2021" }, { "authors": "Xue Yang; Junchi Yan", "journal": "", "ref_id": "b30", "title": "Arbitrary-oriented object detection with circular smooth label", "year": "2020" }, { "authors": "Xue Yang; Jirui Yang; Junchi Yan; Yue Zhang; Tengfei Zhang; Zhi Guo; Xian Sun; Kun Fu", "journal": "", "ref_id": "b31", "title": "Scrdet: Towards more robust detection for small, cluttered and rotated objects", "year": "2019" }, { "authors": "Xue Yang; Liping Hou; Yue Zhou; Wentao Wang; Junchi Yan", "journal": "", "ref_id": "b32", "title": "Dense label encoding for boundary discontinuity free rotation detection", "year": "2021" }, { "authors": "Xue Yang; Junchi Yan; Ziming Feng; Tao He", "journal": "", "ref_id": "b33", "title": "R3det: Refined single-stage detector with feature refinement for rotating object", "year": "2021" }, { "authors": "Xue Yang; Junchi Yan; Qi Ming; Wentao Wang; Xiaopeng Zhang; Qi Tian", 
"journal": "", "ref_id": "b34", "title": "Rethinking rotated object detection with gaussian wasserstein distance loss", "year": "" }, { "authors": " Pmlr", "journal": "", "ref_id": "b35", "title": "", "year": "2021" }, { "authors": "Xue Yang; Xiaojiang Yang; Jirui Yang; Qi Ming; Wentao Wang; Qi Tian; Junchi Yan", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b36", "title": "Learning high-precision bounding box for rotated object detection via kullback-leibler divergence", "year": "2021" }, { "authors": "Xue Yang; Gefan Zhang; Wentong Li; Xuehui Wang; Yue Zhou; Junchi Yan", "journal": "", "ref_id": "b37", "title": "H2rbox: Horizonal box annotation is all you need for oriented object detection", "year": "2022" }, { "authors": "Xue Yang; Gefan Zhang; Xiaojiang Yang; Yue Zhou; Wentao Wang; Jin Tang; Tao He; Junchi Yan", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b38", "title": "Detecting rotated objects as gaussian distributions and its 3-d generalization", "year": "2022" }, { "authors": "Xue Yang; Yue Zhou; Gefan Zhang; Jirui Yang; Wentao Wang; Junchi Yan; Xiaopeng Zhang; Qi Tian", "journal": "", "ref_id": "b39", "title": "The kfiou loss for rotated object detection", "year": "2022" }, { "authors": "Ze Yang; Shaohui Liu; Han Hu; Liwei Wang; Stephen Lin", "journal": "", "ref_id": "b40", "title": "Reppoints: Point set representation for object detection", "year": "2019" }, { "authors": "Xuehui Yu; Pengfei Chen; Di Wu; Najmul Hassan; Guorong Li; Junchi Yan; Humphrey Shi; Qixiang Ye; Zhenjun Han", "journal": "", "ref_id": "b41", "title": "Object localization under single coarse point supervision", "year": "2022" }, { "authors": "Yi Yu; Feipeng Da", "journal": "", "ref_id": "b42", "title": "Phase-shifting coder: Predicting accurate orientation in oriented object detection", "year": "2023" }, { "authors": "Yi Yu; Xue Yang; Qingyun Li; Yue Zhou; Gefan Zhang; Junchi Yan; Feipeng Da", "journal": "", "ref_id": "b43", "title": "H2rboxv2: Boosting hbox-supervised oriented object detection via symmetric learning", "year": "2023" }, { "authors": "Xiaolin Zhang; Yunchao Wei; Jiashi Feng; Yi Yang; Thomas S Huang", "journal": "", "ref_id": "b44", "title": "Adversarial complementary learning for weakly supervised object localization", "year": "2018" }, { "authors": "Bolei Zhou; Aditya Khosla; Agata Lapedriza; Aude Oliva; Antonio Torralba", "journal": "", "ref_id": "b45", "title": "Learning deep features for discriminative localization", "year": "2016" } ]
[ { "formula_coordinates": [ 4, 202.74, 470.85, 297.39, 22.31 ], "formula_id": "formula_0", "formula_text": "S of f set = (1 -exp(-w • Radius + b)) • dis Radius . (1" }, { "formula_coordinates": [ 4, 500.13, 476.98, 3.87, 8.64 ], "formula_id": "formula_1", "formula_text": ")" }, { "formula_coordinates": [ 5, 157.78, 145.73, 346.22, 22.31 ], "formula_id": "formula_2", "formula_text": "Ring(a, r) = p|p ∈ mask, r -1 N < ||p -a|| Radius <= r N , 1 <= r <= N.(2)" }, { "formula_coordinates": [ 5, 252.16, 191.52, 251.84, 20.08 ], "formula_id": "formula_3", "formula_text": "B = 1≤r≤N Sample(a, r),(4)" }, { "formula_coordinates": [ 5, 186.21, 308.05, 317.79, 9.65 ], "formula_id": "formula_4", "formula_text": "B neg = p i |p i ∈ mask margin , p i is randomly selected ,(5)" }, { "formula_coordinates": [ 5, 158.22, 438.3, 341.91, 38.31 ], "formula_id": "formula_5", "formula_text": "n ij = (x + ŵ 2 • cos α • i - ĥ 2 • sin α • j, y + ŵ 2 • sin α • i + ĥ 2 • cos α • j), N = n ij | i, j ∈ {-1, 0, 1}, (i, j) ̸ = (0, 0) .(6" }, { "formula_coordinates": [ 5, 238.54, 620.95, 261.59, 22.13 ], "formula_id": "formula_6", "formula_text": "S B = p∈B [S ins B ] p • [S cls B ] p , ∈ R K . (7" }, { "formula_coordinates": [ 5, 500.13, 623.35, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 5, 214.41, 720.31, 289.59, 13.83 ], "formula_id": "formula_8", "formula_text": "L P 2RBox = L ann + L pos M IL + L neg M IL + L neg .(8)" }, { "formula_coordinates": [ 6, 161.35, 100, 342.66, 30.55 ], "formula_id": "formula_9", "formula_text": "FL(S p , c) = K k=1 c k (1 -S p,k ) γ log(S p,k ) + (1 -c k )S γ p,k log(1 -S p,k ),(9)" }, { "formula_coordinates": [ 6, 245.42, 211.16, 258.58, 30.32 ], "formula_id": "formula_10", "formula_text": "L pos M IL = 1 M M j=1 FL(S Bj , c j ).(10)" }, { "formula_coordinates": [ 6, 247.74, 321.93, 256.26, 47.47 ], "formula_id": "formula_11", "formula_text": "S a = σ(f c cls (F a )) ∈ R K , L ann = 1 M M j=1 FL(S aj , c j ),(11)" }, { "formula_coordinates": [ 6, 220.92, 449.57, 283.08, 47.62 ], "formula_id": "formula_12", "formula_text": "S p = σ 1 (f c cls (F p )) ∈ R K ; L neg = 1 8 * M M j=1 p∈Nj S γ p • log(1 -S p ).(12)" }, { "formula_coordinates": [ 6, 220.57, 565.22, 283.43, 12.69 ], "formula_id": "formula_13", "formula_text": "S smt = α 1 • mask cls -α 2 • mask cls margin ,(13)" }, { "formula_coordinates": [ 6, 217.37, 669.91, 286.63, 9.65 ], "formula_id": "formula_14", "formula_text": "Score = S mask -β 1 • S of f set + β 2 • S smt ,(14)" }, { "formula_coordinates": [ 7, 244.22, 271.03, 255.63, 19.7 ], "formula_id": "formula_15", "formula_text": "P • E = Q • R, E = 1 0 0 1 . (15" }, { "formula_coordinates": [ 7, 499.85, 277.13, 4.15, 8.64 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 7, 228.78, 322.42, 275.22, 24.77 ], "formula_id": "formula_17", "formula_text": "Q = x 1 x 1 . . . x n x n y 1 -y 1 . . . y n -y n T .(16)" }, { "formula_coordinates": [ 7, 232.61, 369.12, 271.39, 11.03 ], "formula_id": "formula_18", "formula_text": "E T • P T • P • E = R T • Q T • Q • R.(17)" }, { "formula_coordinates": [ 7, 199.94, 403.53, 299.91, 25.51 ], "formula_id": "formula_19", "formula_text": "P T • P = R T • 2 × n i=1 x 2 i 0 0 2 × n i=1 y 2 i • R. (18" }, { "formula_coordinates": [ 7, 499.85, 412.59, 4.15, 8.64 ], "formula_id": "formula_20", "formula_text": ")" } ]
[ { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b2", "b3", "b4", "b0", "b1" ], "table_ref": [], "text": "Recent studies like MVDiffusion [3], StitchDiffusion [4], and PanoDiff [5] have proved the feasibility of diffusionbased 360-degree panoramic images generation, but still have some drawbacks.\nMVDiffusion needs 8 perspective views (user-provided or generated from Stable Diffusion [1]) as inputs. The resulting closed-loop panoramic image is more like a longrange image with a wide angle. So it has artifacts on the 'sky' and 'floor' when viewing in a 360 image viewer.\nStitchDiffusion proposes a global cropping on the left and right side of the image to maintain the continuity. However, it still cracks on the junctions when zoom-in in the 360 image viewer.\nPanoDiff, similar to the StitchDiffusion, proposes a circular padding scheme, which is the most related research to our work. The idea of our circular blending strategy is derived from the circular padding scheme. The differences are (1) we use an adaptive weighting policy for geometric continuity, (2) we do not need the Rotating Schedule at both training and inference time, which means that we can directly finetune a dreambooth [2] model using standard diffusion pipeline for this task, and just apply the circular blending at inference time, and (3) we can directly apply our technique into the ControlNet-Tile [8] model to produce high-resolution results. " }, { "figure_ref": [], "heading": "GAN", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Base Model", "publication_ref": [], "table_ref": [], "text": "Finetuned on SDv2.1" }, { "figure_ref": [], "heading": "Low Resolution Result", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "512*1024", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Text Description", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "ControlNet-Tile", "publication_ref": [], "table_ref": [], "text": "Finetuned on SDv1.5 " }, { "figure_ref": [], "heading": "ControlNet-Tile", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Circular Blending", "publication_ref": [], "table_ref": [], "text": "We propose a circular blending strategy at the inference time to generate seamless 360-degree panoramic images. Specifically, at each denoising step, the right part (of a such portion) of the latent feature and the left part (of the same portion as the right part) is blended with adaptive weights. This is illustrated in Fig. 1. Similarly, this strategy can be added to the tiled decode function of the VAE decoder (see Fig. 2). We find that using the circular blending in the VAE decoder is more important than in the latent denoising stage for maintaining the geometric continuity." }, { "figure_ref": [ "fig_1" ], "heading": "Text-to-360-Panoramas", "publication_ref": [ "b6", "b1", "b5" ], "table_ref": [], "text": "For the Text-to-360-Panoramas task, we propose a multistage framework to generate high resolution 360-degree panoramic images. As illustrated in Fig. 
3, we first generate a low resolution image using a base model (finetuned on the SUN360 [7] dataset using the DreamBooth [2] training method), and then employ some super-resolution strategies (including diffusion-based and the GAN-based methods, like the ControlNet-Tile model and the RealESRGAN [6]) to up-scale the result to a high resolution one. For better results, we also finetune the ControlNet-Tile model on the SUN360 dataset by generate low-resolution and highresolution image pairs." }, { "figure_ref": [ "fig_6" ], "heading": "Single-Image-to-360-Panoramas", "publication_ref": [], "table_ref": [], "text": "For the Single-Image-to-360-Panoramas task, the framework is similar to the Text-to-360-Panoramas by replacing the base model to a controlnet-outpainting model. We design a ControlNet-Outpainting model to generate a low resolution 360-degree panoramic image from a given single ordinary 2D image at perspective view. To generate the training pairs of perspective and panoramic images, we first convert the panoramic image to cube-maps and select the center-cube as its perspective image. The inputs of the ControlNet-Outpainting model consist of the converted center-cube map C with the other cubes filled by zeros and the mask M . At inference time, the perspective image can be generated from a certain generative model or captured by a camera (the image should be squared). The perspective image is converted to the center-cube map C as the input of the ControlNet-Outpaining model. For some reason, the trained models of this task can not be released. However, it should be easy to reproduce. See some results in Fig. 8." }, { "figure_ref": [ "fig_2", "fig_3", "fig_4", "fig_5" ], "heading": "Resuls", "publication_ref": [], "table_ref": [], "text": "We show some testing results at different stages of the Textto-360-Panoramas task in Fig. 4, Fig. 5, Fig. 6, and Fig. 7.\nThe input prompts it fetch at the MVDiffusion project page (https://mvdiffusion.github.io/)" }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b1" ], "table_ref": [], "text": "The base model is trained using the DreamBooth [2] technique, so it can not be changed with the models from CIVI-TAI (https://civitai.com/) for stylizing purposes.\nAdding some style descriptions (such as 'cartoon style' and 'oil painting style') in the prompt does not work. One can generate an initial 360 image using our method, and then use ControlNets (like canny and depth) with different base models to change the style.\nThis kitchen is a charming blend of rustic and modern, featuring a large reclaimed wood island with marble countertop, a sink surrounded by cabinets. To the left of the island, a stainless-steel refrigerator stands tall. To the right of the sink, built-in wooden cabinets painted in a muted.\nMajestically rising towards the heavens, the snow-capped mountain stood, its jagged peaks cloaked in a shroud of ethereal clouds, its rugged slopes a stark contrast against the serene azure sky, and its silent grandeur exuding an air of ancient wisdom and timeless solitude, commanding awe and reverence from all who beheld it.\nThis kitchen is a charming blend of rustic and modern, featuring a large reclaimed wood island with marble countertop, a sink surrounded by cabinets. To the left of the island, a stainless-steel refrigerator stands tall. 
To the right of the sink, built-in wooden cabinets painted in a muted.\nBathed in the soft, dappled light of the setting sun, the silent street lay undisturbed, revealing the grandeur of its cobblestone texture, the rusted lampposts bearing witness to forgotten stories, and the ancient, ivy-clad houses standing stoically, their shuttered windows and weather-beaten doors speaking volumes about their passage through time. This kitchen is a charming blend of rustic and modern, featuring a large reclaimed wood island with marble countertop, a sink surrounded by cabinets. To the left of the island, a stainless-steel refrigerator stands tall. To the right of the sink, built-in wooden cabinets painted in a muted." }, { "figure_ref": [], "heading": "Results from base model", "publication_ref": [], "table_ref": [], "text": "Majestically rising towards the heavens, the snow-capped mountain stood, its jagged peaks cloaked in a shroud of ethereal clouds, its rugged slopes a stark contrast against the serene azure sky, and its silent grandeur exuding an air of ancient wisdom and timeless solitude, commanding awe and reverence from all who beheld it.\nThis kitchen is a charming blend of rustic and modern, featuring a large reclaimed wood island with marble countertop, a sink surrounded by cabinets. To the left of the island, a stainless-steel refrigerator stands tall. To the right of the sink, built-in wooden cabinets painted in a muted.\nBathed in the soft, dappled light of the setting sun, the silent street lay undisturbed, revealing the grandeur of its cobblestone texture, the rusted lampposts bearing witness to forgotten stories, and the ancient, ivy-clad houses standing stoically, their shuttered windows and weather-beaten doors speaking volumes about their passage through time. This kitchen is a charming blend of rustic and modern, featuring a large reclaimed wood island with marble countertop, a sink surrounded by cabinets. To the left of the island, a stainless-steel refrigerator stands tall. To the right of the sink, built-in wooden cabinets painted in a muted." }, { "figure_ref": [], "heading": "Results from base+initsr", "publication_ref": [], "table_ref": [], "text": "Majestically rising towards the heavens, the snow-capped mountain stood, its jagged peaks cloaked in a shroud of ethereal clouds, its rugged slopes a stark contrast against the serene azure sky, and its silent grandeur exuding an air of ancient wisdom and timeless solitude, commanding awe and reverence from all who beheld it.\nThis kitchen is a charming blend of rustic and modern, featuring a large reclaimed wood island with marble countertop, a sink surrounded by cabinets. To the left of the island, a stainless-steel refrigerator stands tall. To the right of the sink, built-in wooden cabinets painted in a muted.\nBathed in the soft, dappled light of the setting sun, the silent street lay undisturbed, revealing the grandeur of its cobblestone texture, the rusted lampposts bearing witness to forgotten stories, and the ancient, ivy-clad houses standing stoically, their shuttered windows and weather-beaten doors speaking volumes about their passage through time. This kitchen is a charming blend of rustic and modern, featuring a large reclaimed wood island with marble countertop, a sink surrounded by cabinets. To the left of the island, a stainless-steel refrigerator stands tall. To the right of the sink, built-in wooden cabinets painted in a muted." 
}, { "figure_ref": [], "heading": "Results from base+initsr+realesrgan", "publication_ref": [], "table_ref": [], "text": "Majestically rising towards the heavens, the snow-capped mountain stood, its jagged peaks cloaked in a shroud of ethereal clouds, its rugged slopes a stark contrast against the serene azure sky, and its silent grandeur exuding an air of ancient wisdom and timeless solitude, commanding awe and reverence from all who beheld it.\nThis kitchen is a charming blend of rustic and modern, featuring a large reclaimed wood island with marble countertop, a sink surrounded by cabinets. To the left of the island, a stainless-steel refrigerator stands tall. To the right of the sink, built-in wooden cabinets painted in a muted.\nBathed in the soft, dappled light of the setting sun, the silent street lay undisturbed, revealing the grandeur of its cobblestone texture, the rusted lampposts bearing witness to forgotten stories, and the ancient, ivy-clad houses standing stoically, their shuttered windows and weather-beaten doors speaking volumes about their passage through time.\nResults from our full implementation " } ]
Figure 1. The proposed circular blending operation: the leftmost and rightmost regions of the latent (of size h × w with w = 2h) are combined with adaptive weights w ∈ [0, 1] as blended = w * left + (1 - w) * right; the figure shows the latents before and after circular blending.
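The blending rule captured above can be made concrete in a few lines. The following Python/NumPy sketch applies circular blending between the leftmost and rightmost latent columns; the tensor layout, the blend width, the linear weight ramp, and the helper name circular_blend are assumptions made for illustration and are not details taken from the paper.

import numpy as np

def circular_blend(latent: np.ndarray, blend_width: int = 8) -> np.ndarray:
    # latent: array of shape (channels, h, w); the last axis is the horizontal one.
    out = latent.copy()
    # Adaptive weights ramp linearly from 0 to 1 across the blend region.
    w = np.linspace(0.0, 1.0, blend_width).reshape(1, 1, blend_width)
    left = latent[..., :blend_width]
    right = latent[..., -blend_width:]
    blended = w * left + (1.0 - w) * right   # blended = w * left + (1 - w) * right
    # Write the blended strip to both horizontal borders so they agree.
    out[..., :blend_width] = blended
    out[..., -blend_width:] = blended
    return out

# Example: a (channels, h, 2h) latent, as for an equirectangular panorama.
latent = np.random.randn(4, 64, 128).astype(np.float32)
wrapped = circular_blend(latent)

The intent of the operation is that, after blending, the two horizontal borders of the latent match, so the decoded panorama can wrap around without a visible seam.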
Diffusion360: Seamless 360 Degree Panoramic Image Generation based on Diffusion Models
[ { "figure_caption": "Figure 2. The circular blending operation in different stages.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3. The pipeline of Text-to-360-Panoramas.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4. Results from the Base Model.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5. Results from Base+InitSR.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6. Results from Base+InitSR+RealESRGAN. It can be observed that the geometric continuity of the rightmost and the leftmost sides of our results is smooth, with nearly no cracks. Some artifacts in the top two rows are caused by RealESRGAN.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7. Results from the full implementation.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8. Results of Single-Image-to-360-Panoramas.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" } ]
Mengyang Feng; Jinlin Liu; Miaomiao Cui; Xuansong Xie; Alibaba Group
[ { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b0", "title": "High-resolution image synthesis with latent diffusion models", "year": "2021" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b1", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2022" }, { "authors": "Shitao Tang; Fuyang Zhang; Jiacheng Chen; Peng Wang; Yasutaka Furukawa", "journal": "", "ref_id": "b2", "title": "Mvdiffusion: Enabling holistic multiview image generation with correspondence-aware diffusion", "year": "2023" }, { "authors": "Hai Wang; Xiaoyu Xiang; Yuchen Fan; Jing-Hao Xue", "journal": "", "ref_id": "b3", "title": "Customizing 360-degree panoramas through text-to-image diffusion models", "year": "2023" }, { "authors": "Jionghao Wang; Ziyu Chen; Jun Ling; Rong Xie; Li Song", "journal": "", "ref_id": "b4", "title": "360-degree panorama generation from few unregistered nfov images", "year": "2023" }, { "authors": "Xintao Wang; Liangbin Xie; Chao Dong; Ying Shan", "journal": "", "ref_id": "b5", "title": "Real-esrgan: Training real-world blind super-resolution with pure synthetic data", "year": "" }, { "authors": "Jianxiong Xiao; Krista A Ehinger; Aude Oliva; Antonio Torralba", "journal": "", "ref_id": "b6", "title": "Recognizing scene viewpoint using panoramic place representation", "year": "2012" }, { "authors": "Lvmin Zhang; Anyi Rao; Maneesh Agrawala", "journal": "", "ref_id": "b7", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" } ]
[]
2023-11-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b23", "b27", "b11", "b29", "b37", "b22", "b12", "b6", "b32", "b19", "b0", "b12", "b15", "b19", "b39", "b2", "b37", "b8", "b17", "b32" ], "table_ref": [], "text": "Given two probability vectors and a cost matrix, the discrete optimal transport (OT) problem seeks an optimal solution to minimize the cost of transporting the probability vector toward another one. Its total transportation cost is an effective tool that compares two probability vectors. Therefore, OT has been studied in various research areas, e.g., text embedding (Kusner et al. 2015), image matching (Liu et al. 2020), domain adaptation (Courty et al. 2017), graph comparison (Nikolentzos, Meladianos, and Vazirgiannis 2017), and interpolation (Solomon et al. 2015).\nThere are many formulations for OT. Kantorovich (1942) was the first to formulate OT as the linear programming problem, and the linear OT (LOT) made great progress toward solving OT. Recently, the strongly convex-regularized OT (SROT) has attracted much attention, especially, the entropy-regularized OT (EROT) (Cuturi 2013;Blondel, Seguy, and Rolet 2018;Peyré and Cuturi 2019;Guo, Ho, and Jordan 2020). SROT is superior to LOT in terms of guaranteeing a unique solution and computational stability.\nMany algorithms have been studied to solve OT. The network simplex algorithm (Ahuja, Magnanti, and Orlin 1993) is a well-known classical algorithm for LOT and has been widely used. The Sinkhorn algorithm (Cuturi 2013) and primal-dual descent algorithms (Dvurechensky, Gasnikov, and Kroshnin 2018;Guo, Ho, and Jordan 2020) have been proposed to solve EROT faster. Recently, algorithms utilizing special structures of input data have been in the spotlight for solving OT faster, e.g., algorithms that utilize the lowrankness of the input data (Tenetov, Wolansky, and Kimmel 2018;Altschuler et al. 2019). Besides, several algorithms utilize the Gibbs kernel structure of the input cost matrix in the Sinkhorn algorithm, such as separability (Solomon et al. 2015;Bonneel, Peyré, and Cuturi 2016) and translation invariance (Getreuer 2013;Peyré and Cuturi 2019).\nIn this paper, we propose novel fast algorithms for OT utilizing a new special structure, cyclic symmetry, of input data. Specifically, we assume n-order cyclic symmetry for the input data; the input d-dimensional probability vector is a concatenation of n copies of an m(:= d/n)-dimensional vector, and the input d × d cost matrix is a block-circulant matrix consisting of n matrices with size m × m (see Assumption 1). Such OT with cyclic symmetry appears universally in various real-world examples: image processing, urban planning, and graph processing (see examples in Section 4). Intuitively, we can obtain an optimal solution to such a problem faster by solving OT for only one of the symmetric components of the input data and concatenating n copies of the obtained solution. However, this approach cannot work due to ignoring interactions between the symmetric components (see Appendix A). Unlike such an intuitive way, we propose novel fast algorithms utilizing cyclic symmetry for two crucial OT formulations: LOT and SROT.\nFirst, we propose a fast algorithm for LOT with cyclic symmetry (C-LOT). Figure 1 shows an overview of this algorithm. Our main idea is to reduce C-LOT, which has d 2 variables, to a small LOT, which has only m 2 variables, by utilizing cyclic symmetry. 
To achieve this reduction, we introduce auxiliary variables considering cyclic symmetry and rewrite C-LOT as a min-min optimization problem. Surprisingly, the inner min problem can be solved analytically, and the min-min problem becomes a small LOT. Therefore, this algorithm solves C-LOT faster by solving the small LOT instead. Using the network simplex algorithm to solve the small LOT, its time complexity bound becomes O(m 3 log m log(m∥C∥ ∞ ) + d 2 ) where C is the cost matrix and ∥C∥ ∞ := max i,j |C ij |. This is greatly improved from O(d 3 log d log(d∥C∥ ∞ )) when solving C-LOT directly.\nFigure 1: Overview of our algorithm for LOT with cyclic symmetry (C-LOT). This algorithm reduces C-LOT to a small LOT that has significantly fewer variables and solves the small LOT instead, resulting in fast computation. Note that the small cost matrix is not just a part of the original one; it aggregates the original cost matrix on the basis of cyclic symmetry, see (11).\nNext, we propose a fast algorithm for SROT with cyclic symmetry (C-SROT). Unlike C-LOT, we cannot reduce C-SROT to a small SROT due to the regularizer. To overcome this issue, we consider the Fenchel dual of C-SROT. By utilizing cyclic symmetry, we show that the Fenchel dual problem has only 2m variables, which is significantly fewer than the 2d variables in the naive dual of C-SROT. Therefore, this algorithm solves the small Fenchel dual problem by the alternating minimization algorithm (Beck 2017, Chapter 14). Since the number of variables is very small, its time complexity for one iteration will be reduced, resulting in fast computation as a whole. In particular, this algorithm for EROT with cyclic symmetry (C-EROT), which is a subclass of C-SROT, becomes a Sinkhorn-like algorithm. We call it cyclic Sinkhorn algorithm. The interesting point is that the Gibbs kernel in the cyclic Sinkhorn algorithm differs from that in the original Sinkhorn algorithm and is designed by considering cyclic symmetry. Its time complexity bound of each iteration is O(m 2 ), which is significantly improved from O(d 2 ) when solving C-EROT by the original Sinkhorn algorithm.\nFinally, we propose a two-stage Sinkhorn algorithm for C-EROT with approximate cyclic symmetry. In the real world, there are many cases where the input data exhibit only approximate cyclic symmetry due to slight noise and displacement. The cyclic Sinkhorn algorithm cannot be applied to such cases because strict cyclic symmetry of the input data is assumed. To overcome this issue, the two-stage Sinkhorn algorithm first runs the cyclic Sinkhorn algorithm to quickly obtain a strictly symmetric solution. It then runs the original Sinkhorn algorithm to modify the solution. As a result, this algorithm obtains the optimal solution to C-EROT with approximate cyclic symmetry faster by utilizing cyclic symmetry at the first stage. In Section 7.2, we experimentally confirmed the fast computation of this algorithm when input data have approximate cyclic symmetry.\nIn summary, this paper introduces the concept of symmetry into the OT research field for the first time and proposes fast cyclic symmetry-aware algorithms that solve small optimization problems instead of the original OT. We validated the effectiveness of our algorithms in synthetic/real-world data with a strict/approximate cyclic symmetry structure."
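To make the assumed input structure concrete, here is an illustrative Python/NumPy sketch that assembles a C-ROT instance with n-order cyclic symmetry as described above: the marginals a and b are n tiled copies of m-dimensional vectors, and the cost matrix is block-circulant over n blocks of size m × m. The helper name make_cyclic_instance and the random example values are assumptions for illustration only.

import numpy as np

def make_cyclic_instance(alpha, beta, blocks):
    # Assemble (a, b, C) with n-order cyclic symmetry from m-dimensional
    # alpha, beta and a list of n cost blocks C_0, ..., C_{n-1}.
    n, m = len(blocks), alpha.shape[0]
    a = np.tile(alpha, n)                      # a = [alpha; alpha; ...; alpha]
    b = np.tile(beta, n)
    C = np.zeros((n * m, n * m))
    for r in range(n):                         # row-block index
        for c in range(n):                     # column-block index
            # Block-circulant layout: block (r, c) holds C_{(c - r) mod n}.
            C[r * m:(r + 1) * m, c * m:(c + 1) * m] = blocks[(c - r) % n]
    return a, b, C

# Tiny example: n = 4 blocks of size m = 3.
rng = np.random.default_rng(0)
m, n = 3, 4
alpha = rng.random(m)
alpha /= n * alpha.sum()                       # so that a = tile(alpha, n) sums to 1
beta = rng.random(m)
beta /= n * beta.sum()
blocks = [rng.random((m, m)) for _ in range(n)]
a, b, C = make_cyclic_instance(alpha, beta, blocks)

With this layout, shifting both the row and column indices by m leaves a, b, and C unchanged, which is exactly the symmetry that the algorithms exploit.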
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b28", "b0", "b12", "b3", "b1", "b36", "b1", "b15", "b19", "b33", "b37", "b8", "b32", "b17", "b39", "b2", "b35", "b39", "b2", "b37", "b8", "b32", "b17" ], "table_ref": [], "text": "OT was initially formulated by (Monge 1781). Later (Kantorovich 1942) relaxed it as the linear programming problem, which permits splitting a mass from a single source to multiple targets. The linear OT (LOT) is easier to solve than Monge's form and has made great progress toward solving OT. To solve OT, many algorithms have been proposed. For example, the network simplex algorithm (Ahuja, Magnanti, and Orlin 1993) is one of the classical algorithms for LOT and has been widely used. Recently, algorithms have been proposed to solve OT faster by adding the entropy regularizer (Cuturi 2013;Altschuler, Niles-Weed, and Rigollet 2017;Lin, Ho, and Jordan 2019b;Alaya et al. 2019).\nThe dual form of the entropy-regularized OT can be solved faster by the Sinkhorn algorithm that updates dual variables via matrix-vector products (Sinkhorn 1967). For further acceleration, many improvements to the Sinkhorn algorithm have been proposed. For example, (Altschuler, Niles-Weed, and Rigollet 2017), (Lin, Ho, and Jordan 2019b), and (Alaya et al. 2019) proposed using greedy, randomized, and safe-screening strategies, respectively, to efficiently update the dual variables. Primal-dual algorithms have received much attention (Dvurechensky, Gasnikov, and Kroshnin 2018;Lin, Ho, and Jordan 2019a;Guo, Ho, and Jordan 2020) because they report faster computation than the Sinkhorn algorithm and its variants but are rarely used in practice due to the difficulty of implementation (Pham et al. 2020). This paper focuses on the network simplex algorithm and Sinkhorn algorithm because they are widely used.\nAs another line of work to solve OT faster, utilizing special structures of input data has been well studied (Solomon et al. 2015;Bonneel, Peyré, and Cuturi 2016;Peyré and Cuturi 2019;Getreuer 2013;Tenetov, Wolansky, and Kimmel 2018;Altschuler et al. 2019). Inspired by the fact that geodesic distance matrices can be low-rank approximated (Shamai et al. 2015), a low-rank approximation for the cost matrix in OT was introduced to reduce the time complexity of the Sinkhorn algorithm (Tenetov, Wolansky, and Kimmel 2018;Altschuler et al. 2019). Several approaches have utilized the Gibbs kernel structures of the cost matrix appearing in the Sinkhorn algorithms. The key to these approaches is to approximate the calculation involving the Gibbs kernel; by utilizing separability (Solomon et al. 2015;Bonneel, Peyré, and Cuturi 2016) or translation invariant (Peyré and Cuturi 2019;Getreuer 2013) of the Gibbs kernel on a fixed uniform grid, the matrix-vector product in the Sinkhorn algorithm can be replaced with convolutions. Thus, it can be computed faster by, e.g., a fast Fourier transform. This paper introduces the utilization of a new special but ubiquitous structure, cyclic symmetry, in OT." }, { "figure_ref": [], "heading": "Preliminary", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Notations", "publication_ref": [], "table_ref": [], "text": "R ≥0 denotes the set of non-negative real numbers. ⟨•, •⟩ de- notes the inner product; that is, for vectors x, y ∈ R d , ⟨x, y⟩ = d-1 i=0 x i y i , and for matrices X, Y ∈ R d×d , ⟨X, Y⟩ = d-1 i,j=0 X ij Y ij . The probability simplex is de- noted as ∆ d := {x i ∈ R d | d-1 i=0 x i = 1, x i ≥ 0}. 
1 d denotes the all-ones vector in R d ." }, { "figure_ref": [], "heading": "Regularized Optimal Transport (ROT)", "publication_ref": [ "b22" ], "table_ref": [], "text": "We define the regularized OT (ROT) that adds a convex regularizer to the linear OT (LOT) introduced by (Kantorovich 1942). Given two probability vectors a, b ∈ ∆ d and a cost matrix C ∈ R d×d ≥0 , ROT can be defined as\nmin T∈R d×d ⟨C, T⟩ + d-1 i,j=0 ϕ(T ij ), s.t. T1 d = a, T ⊤ 1 d = b,(1)\nwhere T is called a transportation matrix and ϕ : R → R ∪ {+∞} is a convex function, called a regularizer. We assume ϕ(x) = +∞ if x < 0; this assumption imposes the nonnegative constraint on T.\nROT ( 1) is a generalization of various OT formulations. For example, (1) leads to LOT when ϕ is given by\nϕ(x) = 0 if x ≥ 0, +∞ otherwise.\n(2) Also, (1) leads to the strongly convex-regularized OT (SROT) when ϕ is a strongly convex function; a function ϕ is called strongly convex if ϕ-µ 2 ∥•∥ is convex for some µ > 0. As an important subclass of SROT, (1) leads to the entropyregularized OT (EROT) introduced by (Cuturi 2013) when ϕ is given by\nϕ(x) = λx(log x -1) if x ≥ 0, +∞ otherwise,(3)\nwhere λ > 0." }, { "figure_ref": [], "heading": "C-ROT: ROT with Cyclic Symmetry", "publication_ref": [ "b42", "b40", "b30", "b14", "b22", "b18", "b20", "b7", "b41", "b21", "b24", "b29", "b31", "b29" ], "table_ref": [], "text": "This section explains our assumption of cyclic symmetry for ROT (1) and real-world examples of this problem. We assume that a, b, C in (1) have the following n-order cyclic symmetry. Assumption 1. There exists a divisor n of d, and the probability vectors a, b in (1) have a periodic structure:\na =     α α . . . α     , b =     β β . . . β     ,(4)\nwhere α, β ∈ R m ≥0 and m := d n is an integer. Also, the cost matrix C in (1) has a block-circulant structure:\nC =      C 0 C 1 • • • C n-1 C n-1 C 0 . . . . . . . . . . . . . . . C 1 C 1 • • • C n-1 C 0      ,(5)\nwhere\nC 0 , . . . , C n-1 ∈ R m×m ≥0 .\nIn this paper, we call ROT (1) with Assumption 1 Cyclic ROT (C-ROT). This problem appears universally in various real-world examples given below. Example 1 (Image with Cyclic Symmetry). Cyclic symmetry in images has been a central image research topic. Especially, because image data are represented in a rectangle form, mirror or 90 • rotational symmetry has been utilized for various tasks; mirror symmetry has been utilized for the face recognition (Zhao et al. 2003) and rendering (Wu, Rupprecht, and Vedaldi 2023), and 90 • rotational symmetry in medical and galaxy images has been utilized for the image segmentation (Pang et al. 2022) and morphology prediction (Dieleman, Willett, and Dambre 2015). Thus, we here consider ROT between images with cyclic symmetry, A and B ∈ R h×w ≥0 . For images with mirror symmetry, we assume mirror symmetry along the vertical axis;\nA ij = A i,w-j-1 , B ij = B i,w-j-1 , for 0 ≤ i < h and 0 ≤ j < w. We vectorize these images by appropriately ordering pixels as follows:\na = A π(0) , A π(1) , . . . , A π(hw-1) ⊤ , (6) b = B π(0) , A π(1) , . . . , B π(hw-1) ⊤ , π(k) = (k mod h, ⌊k/h⌋) 0 ≤ k < hw 2 k mod h, 3w 2 -⌊k/h⌋ -1 hw 2 ≤ k < hw .\nBy defining C as the Manhattan, Euclidean, or Chebyshev distance matrix between pixel positions, a, b, C satisfy Assumption 1; thus, C-ROT for n = 2 will appear. Similarly, by appropriately ordering pixels for a, b in the case of 90 • rotational symmetry, C-ROT for n = 4 will appear.\nExample 2 (Urban Planning with Cyclic Symmetry). 
ROT has straightforward applications in logistics and economy (Kantorovich 1942;Guillaume 2012). Imagine a situation where planners design the structure of a city, this structure is simply given by two probability distributions: the distributions of residents a and services b. In this context, the objective function value of ROT enables us to measure how close residents and services are and evaluate the city's efficiency. Several city structures, such as Howard's garden city (Howard 1965), assume that residents and services are equally located along cyclic symmetry to improve quality of life. In such structures, a, b and C, where C ij is given by the Euclidean distance between each resident a i and service b j , satisfy Assumption 1; thus, C-ROT will appear.\nExample 3 (Graph with Cyclic Symmetry). Graphs are commonly used to model real-world data. For example, chemical molecules and crystal structures can be modeled using graphs (Bonchev 1991;Xie and Grossman 2018), and their graphs often exhibit cyclic symmetry (Jaffé and Orchin 2002;Ladd 2014). To compare two graphs, computing their distance has been well-studied and OT-based approaches have been proposed (Nikolentzos, Meladianos, and Vazirgiannis 2017;Petric Maretic et al. 2019). We here consider ROT for computing a distance between two graphs with cyclic symmetry. Following (Nikolentzos, Meladianos, and Vazirgiannis 2017), we represent features for the vertices of a graph as the eigenvectors of its adjacency matrix. Like chemical molecules and crystal structures, we assume the vertex features are equally distributed along cyclic symmetry. By defining a i = b j := 1 d to ensure the same amount of outgoing/incoming flow from/to a vertex and C ij as the Manhattan, Euclidean, or Chebyshev distance in the eigenvectors' feature space, a, b, and C satisfy Assumption 1. Thus, C-ROT for two graphs will appear." }, { "figure_ref": [], "heading": "Fast Algorithms for C-ROT", "publication_ref": [], "table_ref": [], "text": "In this section, we propose fast algorithms for C-ROT. Note that several proofs are in the supplementary material." }, { "figure_ref": [], "heading": "Block-Cyclic Structure of Optimal Solution", "publication_ref": [], "table_ref": [], "text": "We first show the following lemma. Lemma 1. Under Assumption 1, there exists an optimal solution to (1) that has the following structure:\nT =      T 0 T 1 • • • T n-1 T n-1 T 0 . . . . . . . . . . . . . . . T 1 T 1 • • • T n-1 T 0      ,(7)\nwhere T 0 , . . . ,\nT n-1 ∈ R m×m ≥0 .\nThe proof is shown in Appendix B.\nFrom Assumption 1 and Lemma 1, C and T have the same block-circulant structure. Plugging ( 5) and ( 7) into C-ROT (1) yields the following optimization problem:\nmin T0,...,Tn-1∈R m×m n-1 k=0 ⟨C k , T k ⟩ + n-1 k=0 m-1 i,j=0 ϕ(T ijk ) s.t. n-1 k=0 T k 1 m = α, n-1 k=0 T ⊤ k 1 m = β,(8)\nwhere T ijk is the (i, j)-th entry of T k . Note that the objective function value of ( 8) is exactly 1 n of that of (1)." }, { "figure_ref": [], "heading": "Algorithm for C-LOT", "publication_ref": [ "b38" ], "table_ref": [], "text": "We here propose a fast algorithm for cyclic LOT (C-LOT), which is the special case of C-ROT (1) where ϕ is given by (2). From (8), C-LOT (1) can be rewritten as\nmin T0,...,Tn-1∈R m×m ≥0 n-1 k=0 ⟨C k , T k ⟩ s.t. n-1 k=0 T k 1 m = α, n-1 k=0 T ⊤ k 1 m = β.(9)\nBy introducing auxiliary variables S := n-1 k=0 T k and rewriting (9) for S, we can show the following theorem. Theorem 1. We consider a small LOT min\nS∈R m×m ≥0 ⟨G, S⟩ s.t. 
S1 m = α, S ⊤ 1 m = β, (10)\nwhere\nG ij := min 0≤k≤n-1 C ijk .(11)\nLet S * be an optimal solution of (10). Then, (T * k ) k=0,...,n-1 defined by\nT * ijk =    S * ij if k = min argmin 0≤k≤n-1 C ijk , 0 otherwise(12)\nis an optimal solution to (9). Also, the optimal objective function value of ( 9) is the same as that of (10). Note that argmin 0≤k≤n-1 C ijk will return a set of indices if the same minimum value exists in several indices, and we can choose any one but the smallest index by min.\nProof. We fix S := n-1 k=0 T k in (9). The matrix S satisfies S1 m = α, S ⊤ 1 m = β and we can rewrite (9) as\nmin S∈R m×m ≥0 , S1m=α, S ⊤ 1m=β     min T0,...,Tn-1∈R m×m ≥0 , n-1 k=0 T k =S n-1 k=0 ⟨C k , T k ⟩     .\nThe inner problem can be solved analytically and independently for each (i, j)-th entry of T 0 , . . . , T n-1 ; the optimal solution is given by ( 12), and the optimal objective function value is ⟨G, S⟩. Next, we solve the outer optimization problem for S. Because S ∈ R m×m ≥0 , S1 m = α, S ⊤ 1 m = β and the objective function is ⟨G, S⟩, this optimization problem is identical with (10).\nTheorem 1 indicates that C-LOT (1) can be reduced to the small LOT (10), which has significantly fewer m 2 variables than d 2 = m 2 n 2 variables of the original C-LOT (1). Therefore, we will obtain the optimal solution to C-LOT (1) by solving the small LOT (10) instead. The proposed algorithm is summarized in Algorithm 1. Also, Figure 1 shows the overview of this algorithm.\nWe evaluate the time complexity of Algorithm 1. The time complexity depends on the algorithm to solve the Algorithm 1: Fast Algorithm for C-LOT Require: a, b ∈ ∆ d and C ∈ R d×d ≥0 under Assumption 1. 1: Compute G whose entry is given by (11). 2: Compute the optimal solution S * to (10). 3: for i, j, k do 4:\nCompute T ijk by the relationship (12) 5: end for 6: Compute T by Lemma 1 with (T ijk ) 7: return T small LOT (10). We here use the network simplex algorithm, the most popular algorithm to solve LOT, to evaluate the time complexity. Tarjan (1997) showed that the time complexity of the network simplex algorithm to solve LOT (1) with the regularizer (2) is O(d 3 log d log(d∥C∥ ∞ )), where ∥C∥ ∞ := max i,j |C ij |. This enables the time complexity of line 2 in Algorithm 1 to be bounded by O(m 3 log m log(m∥C∥ ∞ )). Because line 1 and lines 3-7 can be conducted in O(d 2 ) time, the total time complexity of Algorithm 1 is O(m 3 log m log(m∥C∥ ∞ ) + d 2 ). This is significantly improved from O(d 3 log d log(d∥C∥ ∞ )) when solving C-LOT (1) directly." }, { "figure_ref": [], "heading": "Algorithm for C-SROT", "publication_ref": [ "b5" ], "table_ref": [], "text": "We propose a fast algorithm for cyclic SROT (C-SROT) which is the special case of C-ROT (1) where ϕ is a strongly convex regularizer. Note that because ϕ defined by ( 2) is not strongly convex, we cannot apply this algorithm to C-LOT.\nThe following theorem follows from Fenchel's duality theorem and optimality conditions in convex analysis (see, e.g., (Rockafellar 1970, Section 31)). Theorem 2. The Fenchel dual of the problem (8) is\nmax w,z∈R m ⟨w, α⟩ + ⟨z, β⟩ - n-1 k=0 m-1 i,j=0 ϕ ⋆ (w i + z j -C ijk ),(13)\nwhere ϕ ⋆ : R → R ∪ {+∞} is the Fenchel conjugate of ϕ defined by ϕ ⋆ (y) := sup{yx -ϕ(x) | x ∈ R}. Also, the optimal solutions to the problem (8), T * k , and to the problem (13), w * and z * , have the following relationship:\nT * ijk = (ϕ ⋆ ) ′ (w * i + z * j -C ijk ). (14\n)\nThe proof is shown in Appendix C. 
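Returning to the C-LOT case for a moment, the reduction of Theorem 1 and Algorithm 1 above can be sketched in a few lines of illustrative Python. The use of the POT library's ot.emd as the small-LOT solver and the helper name solve_c_lot are assumptions for illustration; any exact LOT solver (for example, a network simplex implementation) could be used in its place.

import numpy as np
import ot  # POT (Python Optimal Transport), assumed available as the small-LOT solver

def solve_c_lot(alpha, beta, blocks):
    # blocks: list of n cost blocks C_0, ..., C_{n-1}, each m x m (Assumption 1).
    # alpha, beta: the m-dimensional marginals of Assumption 1 (equal total mass, 1/n each).
    C_stack = np.stack(blocks, axis=-1)          # shape (m, m, n)
    G = C_stack.min(axis=-1)                     # G_ij = min_k C_ijk, eq. (11)
    k_star = C_stack.argmin(axis=-1)             # smallest minimizing block index per (i, j)
    S = ot.emd(alpha, beta, G)                   # optimal plan of the small LOT (10)
    m, n = G.shape[0], len(blocks)
    T = np.zeros((m, m, n))
    for i in range(m):
        for j in range(m):
            T[i, j, k_star[i, j]] = S[i, j]      # relationship (12)
    value = (G * S).sum()                        # equals the optimal value of (9)
    # The full d x d plan is the block-circulant tiling of (T_k), as in Lemma 1.
    return T, value

Because the small LOT has only m 2 variables, solving it is where the O(m 3 log m log(m∥C∥ ∞ )) term of the complexity bound comes from.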
Note that ϕ ⋆ is a smooth and differentiable convex function because ϕ is strongly convex. Theorem 2 indicates that we will obtain the optimal solution to C-SROT (1) by solving the dual problem (13) instead because we can reconstruct it by the relationship ( 14) and Lemma 1.\nWe here propose to apply the alternating minimization algorithm (Beck 2017, Chapter 14) to (13); we iteratively optimize the objective function of ( 13) with respect to w while fixing z, and vice versa. When we fix z, the partial derivative of the objective function with respect to w i is\nα i - n-1 k=0 m-1 j=0 (ϕ ⋆ ) ′ (w i + z j -C ijk ),(15)\nand w i is optimal if (15) equals to 0. Because (15) monotonically decreases with respect to w i , we can find such w i easily by, e.g., the well-known Newton's method. This also applies to the optimization with respect to z while fixing w.\nThe alternating minimization algorithm for a smooth convex function is guaranteed to attain fast convergence (see (Beck and Tetruashvili 2013) for more details).\nThe distinguishing feature of this algorithm is treating a few dual variables. If the alternating minimization algorithm is used for the dual problem of (1) without considering cyclic symmetry, the number of dual variables is 2d = 2mn. In contrast, our algorithm treats only 2m dual variables, which is significantly reduced to 1 n . Therefore, the computational time required for one iteration in the alternating minimization will be considerably reduced." }, { "figure_ref": [], "heading": "Algorithm for C-EROT", "publication_ref": [], "table_ref": [], "text": "We here propose a fast algorithm for cyclic EROT (C-EROT), which is the crucial special case of C-ROT (1) where ϕ is given by (3). Because ( 3) is strongly convex, we can apply the cyclic-aware alternating minimization algorithm introduced in Section 5.3 to C-EROT.\nBecause ϕ ⋆ (y) = λ exp( y λ ), (15) can be written as\nα i -exp w i λ m-1 j=0 K ij exp z j λ ,(16)\nwhere\nK ij := n-1 k=0 exp - C ijk λ . (17\n)\nFrom ( 16), we can get optimal w i in closed form:\nw i = λ   log α i -log   m-1 j=0 K ij exp z j λ     .(18)\nWe can rewrite (18) and describe the optimal q j as follows:\np i = α i m-1 j=0 K ij q j , q j = β j m-1 i=0 K ij p i ,\nwhere p i := exp wi λ , q j := exp zj λ . This algorithm resembles the Sinkhorn algorithm (Cuturi 2013); we call it cyclic Sinkhorn algorithm. Note that the optimal solution T to C-EROT (1) can be easily reconstructed from the optimal w and z by ( 14) and Lemma 1. The proposed algorithm is summarized in Algorithm 2.\nWe evaluate the time complexity of Algorithm 2. The time complexity depends on the matrix-vector product iterations in lines 4 and 5 in Algorithm 2 to solve the Fenchel dual problem (13). In the original Sinkhorn algorithm, the time complexity of each iteration is O(d 2 ) = O(m 2 n 2 ) (Cuturi 2013). In contrast, in our cyclic Sinkhorn algorithm, the time complexity of each iteration is O(m 2 ); thus, our algorithm solves C-EROT significantly faster than the original Sinkhorn algorithm.\nAlgorithm 2: Fast Cyclic Sinkhorn Algorithm for C-EROT\nRequire: a, b ∈ ∆ d , C ∈ R d×d ≥0\nunder Assumption 1 and λ > 0. 1: Compute K whose entry is given by (17). 2: Initialize q ← 1 m . 
3: repeat 4: p ← α ⊘ (Kq) ▷ ⊘ denotes elementwise division 5: q ← β ⊘ (K ⊤ p) 6: until convergence 7: for i, j, k do 8: T ijk ← p i q j exp(-C ijk /λ) 9: end for 10: Compute T by Lemma 1 with (T ijk ) 11: return T" }, { "figure_ref": [], "heading": "Two-Stage Algorithm for C-EROT with Approximate Cyclic Symmetry", "publication_ref": [ "b32", "b19", "b12" ], "table_ref": [], "text": "There are many real-world cases in which input data show only approximate cyclic symmetry. In Example 1, C satisfies Assumption 1 strictly when using the pixel-wise Euclidean distance, but input distributions a, b (namely, images) often satisfy Assumption 1 only approximately due to slight noise and displacement. Thus, the above-proposed algorithms cannot be applied to such cases because they assume that Assumption 1 is satisfied strictly. To overcome this issue, we here propose a fast two-stage Sinkhorn algorithm for C-EROT with approximate cyclic symmetry. Because EROT is commonly used thanks to its differentiability and computational efficiency (Peyré and Cuturi 2019;Guo, Ho, and Jordan 2020), we focused on C-EROT here. The two-stage Sinkhorn algorithm first runs the cyclic Sinkhorn algorithm (Algorithm 2) to quickly obtain a strictly symmetric solution.\nIt then runs the original Sinkhorn algorithm (Cuturi 2013) to modify the solution. Therefore, this algorithm obtains the optimal solution to C-EROT with approximate cyclic symmetry faster by utilizing cyclic symmetry at the first stage. The proposed algorithm is described in Algorithm 3; continuing its listing, the remaining steps are: 8: p ← α ⊘ (K q) ▷ ⊘ denotes elementwise division 9: q ← β ⊘ (K ⊤ p) 10: until convergence // Stage 2: Sinkhorn algorithm (Cuturi 2013) 11: Initialize p, q as the n concatenated copies of the Stage-1 p, q, respectively. 12: Compute K ij = exp(-C ij /λ) 13: repeat 14: p ← a ⊘ (Kq) 15: q ← b ⊘ (K ⊤ p) 16: until convergence 17: return T ← diag(p)Kdiag(q). If Assumption 1 is satisfied strictly, the time complexity of this algorithm is the same as that of the cyclic Sinkhorn algorithm. If not, it will be complex due to mixing the two Sinkhorn algorithms at Stages 1 and 2. This analysis is for future research, but we experimentally confirmed that this algorithm shows fast computation when input data have approximate cyclic symmetry in Section 7.2." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "To validate the effectiveness of our algorithms, we conducted experiments on synthetic/real-world data that satisfy Assumption 1 strictly/approximately. In all experiments, we evaluated whether our algorithms, which solve small optimization problems instead of the original OT, show the same results as the original OT but with faster computation. These experiments were performed on a Windows laptop with an Intel Core i7-10750H CPU and 32 GB memory. All the code was implemented in Python." }, { "figure_ref": [], "heading": "Synthetic Data w/ Strict Cyclic Symmetry", "publication_ref": [ "b0", "b12", "b13" ], "table_ref": [ "tab_0" ], "text": "We created 20 synthetic random data for each of the two dimensions, d ∈ {5000, 10000}, that satisfy Assumption 1 strictly in n = 50 (for details, see Appendix D).
For validation, we evaluated the average and standard deviation over the 20 data of the objective function values, marginal constraint errors defined by ||T ⊤ 1 d -b|| 2 , and the computation time when using different algorithms: the network simplex algorithm (Ahuja, Magnanti, and Orlin 1993), Algorithm 1 using the network simplex algorithm in line 2 (we call it cyclic network simplex algorithm), the Sinkhorn algorithm (Cuturi 2013), and the cyclic Sinkhorn algorithm (Algorithm 2). We set λ = 0.5 for the regularizer (3). The computation time was recorded between inputting the data and outputting the optimal solution. Because these synthetic data also satisfy Assumption 1 for all n that are divisors of 50, namely n ∈ {2, 5, 10, 25, 50}, we conducted experiments for each n; larger n leads to smaller problems that output the same result. The network simplex algorithm was implemented using LEMON (Dezső, Jüttner, and Kovács 2011).\nTable 1 lists the results. The network simplex algorithm and the cyclic one had the same optimal objective function value, but the latter showed faster computation times as n becomes larger. This was also the case when using the Sinkhorn algorithm and the cyclic one. These results support the effectiveness of our proposed algorithms; higher cyclic symmetry (i.e., larger n) results in faster computation time. The remaining rows of Table 1 (for each algorithm and n: obj. value, marginal error, and time in seconds at d = 5000, then at d = 10000) are: Cyclic Network Simplex, n = 2: 6.034 ± 0.824, 0.000 ± 0.000, 1.477 ± 0.235; 6.526 ± 0.917, 0.000 ± 0.000, 7.084 ± 0.728. n = 5: 6.034 ± 0.824, 0.000 ± 0.000, 0.300 ± 0.030; 6.526 ± 0.917, 0.000 ± 0.000, 1.391 ± 0.155. n = 10: 6.034 ± 0.824, 0.000 ± 0.000, 0.136 ± 0.026; 6.526 ± 0.917, 0.000 ± 0.000, 0.618 ± 0.073. n = 25: 6.034 ± 0.824, 0.000 ± 0.000, 0.080 ± 0.019; 6.526 ± 0.917, 0.000 ± 0.000, 0.381 ± 0.044. n = 50: 6.034 ± 0.824, 0.000 ± 0.000, 0.056 ± 0.015; 6.526 ± 0.917, 0.000 ± 0.000, 0.329 ± 0.034. Sinkhorn: 6.233 ± 0.821, 0.000 ± 0.000, 3.271 ± 1.445; 6.745 ± 0.916, 0.000 ± 0.000, 14.589 ± 4.213. Cyclic Sinkhorn, n = 2: 6.233 ± 0.821, 0.000 ± 0.000, 0.918 ± 0.463; 6.745 ± 0.916, 0.000 ± 0.000, 3.973 ± 0.922. n = 5: 6.233 ± 0.821, 0.000 ± 0.000, 0.207 ± 0.170; 6.745 ± 0.916, 0.000 ± 0.000, 1.262 ± 0.324. n = 10: 6.233 ± 0.821, 0.000 ± 0.000, 0.116 ± 0.036; 6.745 ± 0.916, 0.000 ± 0.000, 0.636 ± 0.259. n = 25: 6.233 ± 0.821, 0.000 ± 0.000, 0.093 ± 0.036; 6.745 ± 0.916, 0.000 ± 0.000, 0.381 ± 0.127. n = 50: 6.233 ± 0.821, 0.000 ± 0.000, 0.067 ± 0.034; 6.745 ± 0.916, 0.000 ± 0.000, 0.320 ± 0.053." }, { "figure_ref": [], "heading": "Real Data w/ Approximate Cyclic Symmetry", "publication_ref": [ "b10", "b32" ], "table_ref": [ "tab_1" ], "text": "For real-world data, we tested the case of mirror symmetry (n = 2) in Example 1 with the NYU Symmetry Database (Cicconet et al. 2017). In this dataset, we selected 20 images with mirror symmetry along the vertical axis (the images are shown in Appendix E). These images were converted to gray-scale, resized to be 64 × 64 or 96 × 96 pixels, and normalized so that the sum of the intensity is 1.\nWe then obtained a, b by ( 6) and C by the pixel-wise Euclidean distance. For validation, we evaluated the same metrics as in Section 7.1 over 190 (= 20 C 2 ) image pairs. Because EROT is commonly used in real applications (Peyré and Cuturi 2019), we focused on C-EROT here and compared the Sinkhorn algorithm, the cyclic one (Algorithm 2), and the two-stage one (Algorithm 3). Note that, in the two-stage Sinkhorn algorithm, we stopped Stage 1 before the end of convergence to prevent the solution from being far from the optimal one for real images (for details, see Appendix F). We set λ to the same value as in Section 7.1 for the regularizer (3). Table 2 lists the results.
The cyclic Sinkhorn algorithm showed the fastest computation time. However, because this algorithm assumes to satisfy Assumption 1 strictly, its objective function value differed from that of the original Sinkhorn algorithm, and marginal error occurred. In contrast, the two-stage Sinkhorn algorithm showed the same objective function value as that of the original one and no marginal error but with faster computation time than using the original one. These results indicate that the cyclic Sinkhorn algorithm can be a good choice for real-world data because of its fastest computation time if users tolerate the objective function value difference and the marginal error. If not, the two-stage Sinkhorn algorithm is promising for realworld data, which solves C-EROT with approximate cyclic symmetry faster than the original Sinkhorn algorithm." }, { "figure_ref": [], "heading": "Discussions and Limitations", "publication_ref": [ "b16" ], "table_ref": [], "text": "Through this paper, we confirmed that our algorithms can solve C-ROT faster. For further progress, we discuss the following future issues. (I) In Assumption 1, we assume knowing the cyclic order n in advance. Because cyclic symmetry arises naturally from the physical structure of input data, this assumption is reasonable in some real-world cases. However, we must improve our algorithms for unknown-order cyclic symmetry. (II) It is unknown whether our algorithms can be generalized for other symmetries, e.g., dihedral symmetry (Gatermann and Parrilo 2004). Further development of our algorithms for general symmetries remains as future work. (III) The main contribution of this paper is showing the utilization of cyclic symmetry in OT with theoretical proofs, but we must test our algorithms in various real-world data for further development." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We proposed novel fast algorithms for OT with cyclic symmetry. We showed that such OT can be reduced to a smaller optimization problem that has significantly fewer variables as higher cyclic symmetry exists in the input data. Our algorithms solve the small problem instead of the original OT and achieve fast computation. Through experiments, we confirmed the effectiveness of our algorithms in synthetic/real-world data with strict/approximate cyclic symmetry. This paper cultivates a new research direction, OT with symmetry, and paves the way for future research." }, { "figure_ref": [], "heading": "Appendices: Optimal Transport with Cyclic Symmetry A Simple Counter-Example to the Intuitive Utilization of Cyclic Symmetry in OT", "publication_ref": [], "table_ref": [], "text": "As explained in Section 1, the intuitive way, which solves OT for only one of the symmetric components of input data and concatenates n copies of the obtained solution, cannot work well. To explain this reason clearly, we here present a simple counter-example to this intuitive utilization of cyclic symmetry in OT.\nWe consider the 90 • rotational symmetry case (n = 4) of Example 1 in Section 4 with the following gray images.\nFigure A1: Images with 90 • rotational symmetry (n = 4).\nThese images obviously have 90 • rotational symmetry. Intuitively, to utilize the 90 • rotational symmetry in the above images, it looks good if we consider OT for only one of the symmetric parts of the images, i.e., only the red rectangular areas in Figure A1. 
However, when the cost matrix C is given by the pixel-wise Euclidean distance matrix, the optimal transportation plan for the (0, 1)-th entry (the value is 0.25) in the left image is to be transported toward (0, 2)-th entry (the value is 0.25), beyond the red rectangular area, in the right image. Thus, the OT for only one of the symmetric parts (the red rectangular areas) of the above images will give an incorrect transportation plan.\nTherefore, the intuitive utilization of cyclic symmetry, i.e., solving OT for only one of the symmetric components of input data and concatenating n copies of the obtained solution, cannot work well. Consequently, we must consider the interaction between the symmetric components and develop novel fast algorithms that appropriately utilize cyclic symmetry with theoretical proofs, leading to our algorithms in the main paper." }, { "figure_ref": [], "heading": "B Proof of Lemma 1", "publication_ref": [], "table_ref": [], "text": "Proof. Let T ′ be an optimal solution to (1). We define T * as follows:\nwhere\nis the block-circulant permutation matrix, I m denotes the m × m identity matrix, and O m denotes the m × m zero matrix. First, we will show that T * is a feasible solution to (1). T * satisfies the constraints of row summation because\nSimilarly, we can show that T * satisfies the constraints of column summation (i.e., (T * ) ⊤ 1 d = b). Thus, T * is a feasible solution to (1).\nNext, we will show that T * is an optimal solution to (1). For this purpose, we check the optimal function value of (1) when T = T * . Let f (T) be the objective function of (1), we get\nNote that, we use the convexity of f and Jensen's inequality in the inequality relationship of the above equation. Thus, T * is an optimal solution to (1). Finally, for l = 0, . . . , n -1, we get\nTherefore, T * has a block-circulant structure of (7)." }, { "figure_ref": [], "heading": "C Proof of Theorem 2", "publication_ref": [ "b10" ], "table_ref": [], "text": "Proof. We rewrite (8) with Lagrange multipliers w and z for the two equality constraints as follows:\nNote that Problem ( 8) is convex and the constraints are linear and that Slater's constraint qualification holds. Hence, the strong duality holds (see, e.g., (Boyd and Vandenberghe 2004, Section 5.2.3)), and we can swap the minand max-operations in (A1):\nOne of the optimality conditions is\nD Further Details of Synthetic Data in Section 7.1\nWe created synthetic data using the \"Random Generator\" class in Python. We set the random seed to 0. We here considered that the input data have n(= 50)-order cyclic symmetry.\nFor creating the synthetic d-dimensional input probability vectors a and b, we first sampled α and β by m(= d n )-dimensional uniform distribution with the half-open interval [0.0, 1.0). We then created a and b by concatenating n copies of α and β, respectively, like (4) and normalized them so that the sum of each is 1.\nFor creating the synthetic input cost matrix C, we first sampled C 0 , . . . , C n-1 by m × m-dimensional Gaussian distribution with the mean 3.0 and the standard deviation 5.0; note that we selected these parameters to keep the same order of magnitude of metrics, namely the objective function value and the marginal error, in all experiments in Section 7. We then add the absolute minimum value, namely |min k=0,...,n-1 (min i,j=0,...,m-1 C ijk )|, to all entries of C 0 , . . . , C n-1 to ensure their non-negativity. After that, we created C by concatenating n copies of C 0 , . . . 
, C n-1 like (5).\nE Selected Images for Real-World Data in Section 7.2\nIn Section 7.2, we selected the 20 images with mirror symmetry (n = 2) in the NYU Symmetry Database (Cicconet et al. 2017). We here show their images in Figure A2. As you see, there are various kinds of images.\nF Further Details of the Two-Stage Sinkhorn Algorithm used in Section 7.2\nIn Section 7.2, we stopped Stage 1 in the two-stage Sinkhorn algorithm before convergence to prevent the solution far from optimal for real images. For details, we first run the cyclic Sinkhorn algorithm until the marginal error || (diag( p)Kdiag( q))\n⊤ 1 m -β|| 2 is below 1.0 × 10 -3 . We then run the original Sinkhorn algorithm until the difference between its objective function value and the value obtained by directly solving C-SROT using the original Sinkhorn algorithm is below 1.0 × 10 -4 . " }, { "figure_ref": [], "heading": "Appendix Reference", "publication_ref": [], "table_ref": [], "text": "" } ]
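As a companion to Algorithm 2 and the two-stage procedure described above, the following illustrative Python/NumPy sketch implements the cyclic Sinkhorn updates on the aggregated Gibbs kernel of (17); the helper name cyclic_sinkhorn, the convergence tolerance, and the iteration cap are illustrative choices rather than values from the paper.

import numpy as np

def cyclic_sinkhorn(alpha, beta, blocks, lam, n_iter=1000, tol=1e-9):
    # blocks: list of n cost blocks C_0, ..., C_{n-1}, each m x m (Assumption 1).
    C_stack = np.stack(blocks, axis=-1)              # shape (m, m, n)
    K = np.exp(-C_stack / lam).sum(axis=-1)          # K_ij = sum_k exp(-C_ijk / lam), eq. (17)
    m = K.shape[0]
    q = np.ones(m)
    for _ in range(n_iter):
        p = alpha / (K @ q)                          # p <- alpha / (K q)
        q_new = beta / (K.T @ p)                     # q <- beta / (K^T p)
        if np.max(np.abs(q_new - q)) < tol:
            q = q_new
            break
        q = q_new
    # Reconstruct the small blocks via T_ijk = p_i q_j exp(-C_ijk / lam);
    # the full d x d plan is their block-circulant tiling (Lemma 1).
    T_blocks = p[:, None, None] * q[None, :, None] * np.exp(-C_stack / lam)
    return p, q, T_blocks

For inputs that are only approximately symmetric, the returned p and q, each concatenated n times, can serve as the initialization of a standard Sinkhorn run on the full d × d cost, which is the warm start used by the two-stage algorithm.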
We propose novel fast algorithms for optimal transport (OT) utilizing a cyclic symmetry structure of input data. Such OT with cyclic symmetry appears universally in various real-world examples: image processing, urban planning, and graph processing. Our main idea is to reduce OT to a small optimization problem that has significantly fewer variables by utilizing cyclic symmetry and various optimization techniques. On the basis of this reduction, our algorithms solve the small optimization problem instead of the original OT. As a result, our algorithms obtain the optimal solution and the objective function value of the original OT faster than solving the original OT directly. In this paper, our focus is on two crucial OT formulations: the linear programming OT (LOT) and the strongly convex-regularized OT, which includes the wellknown entropy-regularized OT (EROT). Experiments show the effectiveness of our algorithms for LOT and EROT in synthetic/real-world data that has a strict/approximate cyclic symmetry structure. Through theoretical and experimental results, this paper successfully introduces the concept of symmetry into the OT research field for the first time.
Optimal Transport with Cyclic Symmetry
[ { "figure_caption": "Algorithm 3 :3Fast Two-Stage Algorithm for C-EROT with Approximate Cyclic Symmetry Require: a, b ∈ ∆ d , C ∈ R d×d≥0 and λ > 0. // Stage1: Cyclic Sinkhorn algorithm 1: for i = 0, . . . , mb i+mk ▷ the average of n-divided b 4: end for 5: Compute K whose entry is given by (17). 6: Initialize q ← 1 m . 7: repeat 8:", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Experimental results in synthetic data. \"Obj. value\" indicates objective function value.", "figure_data": "Algorithmnd = 5000d = 10000Obj. valueMarginal errorTime (sec.)Obj. valueMarginal errorTime (sec.)Network Simplex-6.034 ± 0.824 0.000 ± 0.000 6.523 ± 1.013 6.526 ± 0.917 0.000 ± 0.000 33.660 ± 3.2382CyclicNetwork Simplex", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Experimental results in real-world data.", "figure_data": "Algorithm(h, w) = (64, 64), d = 4096(h, w) = (96, 96), d = 9216Obj. valueMarginal errorTime (sec.)Obj. valueMarginal errorTime (sec.)Sinkhorn4.320 ± 2.056 0.000 ± 0.000 16.610 ± 6.502 6.296 ± 3.100 0.000 ± 0.000 117.152 ± 53.442Cyclic Sinkhorn4.289 ± 2.048 0.001 ± 0.0013.837 ± 1.2866.250 ± 3.089 0.001 ± 0.00126.087 ± 11.985Two-Stage Sinkhorn 4.320 ± 2.056 0.000 ± 0.000 13.877 ± 6.244 6.296 ± 3.100 0.000 ± 0.00091.790 ± 43.000", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Shoichiro Takeda; Yasunori Akagi; Naoki Marumo; Kenta Niwa
[ { "authors": "R K Ahuja; T L Magnanti; J B Orlin", "journal": "Prentice-Hall, Inc", "ref_id": "b0", "title": "Network Flows: Theory, Algorithms, and Applications", "year": "1993" }, { "authors": "M Z Alaya; M Berar; G Gasso; A Rakotomamonjy", "journal": "", "ref_id": "b1", "title": "Screening Sinkhorn Algorithm for Regularized Optimal Transport", "year": "2019" }, { "authors": "J Altschuler; F Bach; A Rudi; J Niles-Weed", "journal": "", "ref_id": "b2", "title": "Massively scalable Sinkhorn distances via the Nyström method", "year": "2019" }, { "authors": "J Altschuler; J Niles-Weed; P Rigollet", "journal": "", "ref_id": "b3", "title": "Nearlinear time approximation algorithms for optimal transport via Sinkhorn iteration", "year": "2017" }, { "authors": "A Beck", "journal": "", "ref_id": "b4", "title": "First-Order Methods in Optimization", "year": "2017" }, { "authors": "A Beck; L Tetruashvili", "journal": "SIAM journal on Optimization", "ref_id": "b5", "title": "On the convergence of block coordinate descent type methods", "year": "2013" }, { "authors": "M Blondel; V Seguy; A Rolet", "journal": "", "ref_id": "b6", "title": "Smooth and Sparse Optimal Transport", "year": "2018" }, { "authors": "D Bonchev", "journal": "CRC Press", "ref_id": "b7", "title": "Chemical graph theory: introduction and fundamentals", "year": "1991" }, { "authors": "N Bonneel; G Peyré; M Cuturi", "journal": "ACM Transactions on Graphics", "ref_id": "b8", "title": "Wasserstein Barycentric Coordinates: Histogram Regression Using Optimal Transport", "year": "2016" }, { "authors": "S Boyd; L Vandenberghe", "journal": "Cambridge University Press", "ref_id": "b9", "title": "Convex Optimization", "year": "2004" }, { "authors": "M Cicconet; V Birodkar; M Lund; M Werman; D Geiger", "journal": "Pattern Recognition Letters", "ref_id": "b10", "title": "A convolutional approach to reflection symmetry", "year": "2017" }, { "authors": "N Courty; R Flamary; D Tuia; A Rakotomamonjy", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b11", "title": "Optimal Transport for Domain Adaptation", "year": "2017" }, { "authors": "M Cuturi", "journal": "", "ref_id": "b12", "title": "Sinkhorn Distances: Lightspeed Computation of Optimal Transport", "year": "2013" }, { "authors": "B Dezső; A Jüttner; P Kovács", "journal": "Electronic Notes in Theoretical Computer Science", "ref_id": "b13", "title": "LEMON -an Open Source C++ Graph Template Library", "year": "2011" }, { "authors": "S Dieleman; K W Willett; J Dambre", "journal": "Monthly Notices of the Royal Astronomical Society", "ref_id": "b14", "title": "Rotationinvariant convolutional neural networks for galaxy morphology prediction", "year": "2015" }, { "authors": "P Dvurechensky; A Gasnikov; A Kroshnin", "journal": "", "ref_id": "b15", "title": "Computational Optimal Transport: Complexity by Accelerated Gradient Descent Is Better Than by Sinkhorn's Algorithm", "year": "2018" }, { "authors": "K Gatermann; P A Parrilo", "journal": "Journal of Pure and Applied Algebra", "ref_id": "b16", "title": "Symmetry groups, semidefinite programs, and sums of squares", "year": "2004" }, { "authors": "P Getreuer", "journal": "Image Processing On Line", "ref_id": "b17", "title": "A Survey of Gaussian Convolution Algorithms", "year": "2013" }, { "authors": "C Guillaume", "journal": "", "ref_id": "b18", "title": "Optimal transportation and economic applications", "year": "2012" }, { "authors": "W Guo; N Ho; M Jordan", "journal": "", "ref_id": "b19", "title": "Fast 
Algorithms for Computational Optimal Transport and Wasserstein Barycenter", "year": "2020" }, { "authors": "E Howard", "journal": "Mit Press", "ref_id": "b20", "title": "Garden Cities of To-Morrow", "year": "1965" }, { "authors": "H H Jaffé; M Orchin", "journal": "Courier Corporation", "ref_id": "b21", "title": "Symmetry in chemistry", "year": "2002" }, { "authors": "L Kantorovich", "journal": "Doklady Akademii Nauk", "ref_id": "b22", "title": "On the transfer of masses (in Russian)", "year": "1942" }, { "authors": "M Kusner; Y Sun; N Kolkin; K Weinberger", "journal": "", "ref_id": "b23", "title": "From Word Embeddings To Document Distances", "year": "2015" }, { "authors": "M F C Ladd", "journal": "Oxford University Press", "ref_id": "b24", "title": "Symmetry of crystals and molecules", "year": "2014" }, { "authors": "T Lin; N Ho; M Jordan", "journal": "", "ref_id": "b25", "title": "On Efficient Optimal Transport: An Analysis of Greedy and Accelerated Mirror Descent Algorithms", "year": "2019" }, { "authors": "T Lin; N Ho; M I Jordan", "journal": "", "ref_id": "b26", "title": "On the Acceleration of the Sinkhorn and Greenkhorn Algorithms for Optimal Transport", "year": "2019" }, { "authors": "Y Liu; L Zhu; M Yamada; Y Yang", "journal": "Computer Vision and Pattern Recognition", "ref_id": "b27", "title": "Semantic Correspondence as an Optimal Transport Problem", "year": "2020" }, { "authors": "G Monge", "journal": "De l'Imprimerie Royale", "ref_id": "b28", "title": "Mémoire sur la théorie des déblais et des remblais", "year": "1781" }, { "authors": "G Nikolentzos; P Meladianos; M Vazirgiannis", "journal": "", "ref_id": "b29", "title": "Matching Node Embeddings for Graph Similarity", "year": "2017" }, { "authors": "S Pang; A Du; M A Orgun; Y Wang; Q Z Sheng; S Wang; X Huang; Z Yu", "journal": "IEEE Transactions on Cybernetics", "ref_id": "b30", "title": "Beyond CNNs: Exploiting Further Inherent Symmetries in Medical Image Segmentation", "year": "2022" }, { "authors": "H Petric Maretic; M El Gheche; G Chierchia; P Frossard", "journal": "", "ref_id": "b31", "title": "GOT: An Optimal Transport framework for Graph comparison", "year": "2019" }, { "authors": "G Peyré; M Cuturi", "journal": "Now Publishers", "ref_id": "b32", "title": "Computational Optimal Transport: With Applications to Data Science", "year": "2019" }, { "authors": "K Pham; K Le; N Ho; T Pham; H Bui", "journal": "", "ref_id": "b33", "title": "On Unbalanced Optimal Transport: An Analysis of Sinkhorn Algorithm", "year": "2020" }, { "authors": "R T Rockafellar", "journal": "Princeton University Press", "ref_id": "b34", "title": "Convex Analysis", "year": "1970" }, { "authors": "G Shamai; Y Aflalo; M Zibulevsky; R Kimmel", "journal": "", "ref_id": "b35", "title": "Classical Scaling Revisited", "year": "2015" }, { "authors": "R Sinkhorn", "journal": "The American Mathematical Monthly", "ref_id": "b36", "title": "Diagonal equivalence to matrices with prescribed row and column sums", "year": "1967" }, { "authors": "J Solomon; F De Goes; G Peyré; M Cuturi; A Butscher; A Nguyen; T Du; L Guibas", "journal": "ACM Transactions on Graphics", "ref_id": "b37", "title": "Convolutional Wasserstein Distances: Efficient Optimal Transportation on Geometric Domains", "year": "2015" }, { "authors": "R E Tarjan", "journal": "Mathematical Programming", "ref_id": "b38", "title": "Dynamic trees as search trees via euler tours, applied to the network simplex algorithm", "year": "1997" }, { "authors": "E Tenetov; G Wolansky; R Kimmel", "journal": "SIAM 
Journal on Scientific Computing", "ref_id": "b39", "title": "Fast Entropic Regularized Optimal Transport Using Semidiscrete Cost Approximation", "year": "2018" }, { "authors": "S Wu; C Rupprecht; A Vedaldi", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b40", "title": "Unsupervised Learning of Probably Symmetric Deformable 3D Objects From Images in the Wild (Invited Paper)", "year": "2023" }, { "authors": "T Xie; J C Grossman", "journal": "Phys. Rev. Lett", "ref_id": "b41", "title": "Crystal Graph Convolutional Neural Networks for an Accurate and Interpretable Prediction of Material Properties", "year": "2018" }, { "authors": "W Zhao; R Chellappa; P J Phillips; A Rosenfeld", "journal": "ACM Comput. Surv", "ref_id": "b42", "title": "Face Recognition: A Literature Survey", "year": "2003" } ]
[ { "formula_coordinates": [ 2, 54, 295.8, 222.89, 12.72 ], "formula_id": "formula_0", "formula_text": "O(d 3 log d log(d∥C∥ ∞ )) when solving C-LOT directly." }, { "formula_coordinates": [ 3, 54, 303.34, 238.5, 76.02 ], "formula_id": "formula_1", "formula_text": "R ≥0 denotes the set of non-negative real numbers. ⟨•, •⟩ de- notes the inner product; that is, for vectors x, y ∈ R d , ⟨x, y⟩ = d-1 i=0 x i y i , and for matrices X, Y ∈ R d×d , ⟨X, Y⟩ = d-1 i,j=0 X ij Y ij . The probability simplex is de- noted as ∆ d := {x i ∈ R d | d-1 i=0 x i = 1, x i ≥ 0}. 1 d denotes the all-ones vector in R d ." }, { "formula_coordinates": [ 3, 112.06, 452.33, 180.44, 47.11 ], "formula_id": "formula_2", "formula_text": "min T∈R d×d ⟨C, T⟩ + d-1 i,j=0 ϕ(T ij ), s.t. T1 d = a, T ⊤ 1 d = b,(1)" }, { "formula_coordinates": [ 3, 116.84, 572.14, 111.62, 21.89 ], "formula_id": "formula_3", "formula_text": "ϕ(x) = 0 if x ≥ 0, +∞ otherwise." }, { "formula_coordinates": [ 3, 97.38, 668.18, 195.12, 21.89 ], "formula_id": "formula_4", "formula_text": "ϕ(x) = λx(log x -1) if x ≥ 0, +∞ otherwise,(3)" }, { "formula_coordinates": [ 3, 379.6, 140.88, 178.4, 48.93 ], "formula_id": "formula_5", "formula_text": "a =     α α . . . α     , b =     β β . . . β     ,(4)" }, { "formula_coordinates": [ 3, 359.61, 219.8, 198.39, 57.12 ], "formula_id": "formula_6", "formula_text": "C =      C 0 C 1 • • • C n-1 C n-1 C 0 . . . . . . . . . . . . . . . C 1 C 1 • • • C n-1 C 0      ,(5)" }, { "formula_coordinates": [ 3, 346.33, 280.07, 101.5, 13.32 ], "formula_id": "formula_7", "formula_text": "C 0 , . . . , C n-1 ∈ R m×m ≥0 ." }, { "formula_coordinates": [ 3, 323.75, 516.46, 234.25, 63.82 ], "formula_id": "formula_8", "formula_text": "a = A π(0) , A π(1) , . . . , A π(hw-1) ⊤ , (6) b = B π(0) , A π(1) , . . . , B π(hw-1) ⊤ , π(k) = (k mod h, ⌊k/h⌋) 0 ≤ k < hw 2 k mod h, 3w 2 -⌊k/h⌋ -1 hw 2 ≤ k < hw ." }, { "formula_coordinates": [ 4, 94.72, 479.11, 197.78, 57.12 ], "formula_id": "formula_9", "formula_text": "T =      T 0 T 1 • • • T n-1 T n-1 T 0 . . . . . . . . . . . . . . . T 1 T 1 • • • T n-1 T 0      ,(7)" }, { "formula_coordinates": [ 4, 115.41, 542.9, 66.32, 13.32 ], "formula_id": "formula_10", "formula_text": "T n-1 ∈ R m×m ≥0 ." }, { "formula_coordinates": [ 4, 62.38, 609.37, 230.12, 66.59 ], "formula_id": "formula_11", "formula_text": "min T0,...,Tn-1∈R m×m n-1 k=0 ⟨C k , T k ⟩ + n-1 k=0 m-1 i,j=0 ϕ(T ijk ) s.t. n-1 k=0 T k 1 m = α, n-1 k=0 T ⊤ k 1 m = β,(8)" }, { "formula_coordinates": [ 4, 354.16, 110.72, 203.85, 65.46 ], "formula_id": "formula_12", "formula_text": "min T0,...,Tn-1∈R m×m ≥0 n-1 k=0 ⟨C k , T k ⟩ s.t. n-1 k=0 T k 1 m = α, n-1 k=0 T ⊤ k 1 m = β.(9)" }, { "formula_coordinates": [ 4, 330.34, 227.7, 227.66, 20.59 ], "formula_id": "formula_13", "formula_text": "S∈R m×m ≥0 ⟨G, S⟩ s.t. S1 m = α, S ⊤ 1 m = β, (10)" }, { "formula_coordinates": [ 4, 394.17, 268.42, 163.83, 15.01 ], "formula_id": "formula_14", "formula_text": "G ij := min 0≤k≤n-1 C ijk .(11)" }, { "formula_coordinates": [ 4, 337.93, 320.67, 220.07, 35.48 ], "formula_id": "formula_15", "formula_text": "T * ijk =    S * ij if k = min argmin 0≤k≤n-1 C ijk , 0 otherwise(12)" }, { "formula_coordinates": [ 4, 327.68, 462.9, 222.15, 49.19 ], "formula_id": "formula_16", "formula_text": "min S∈R m×m ≥0 , S1m=α, S ⊤ 1m=β     min T0,...,Tn-1∈R m×m ≥0 , n-1 k=0 T k =S n-1 k=0 ⟨C k , T k ⟩     ." 
}, { "formula_coordinates": [ 5, 90.59, 456, 201.91, 50.92 ], "formula_id": "formula_17", "formula_text": "max w,z∈R m ⟨w, α⟩ + ⟨z, β⟩ - n-1 k=0 m-1 i,j=0 ϕ ⋆ (w i + z j -C ijk ),(13)" }, { "formula_coordinates": [ 5, 109.94, 569.87, 178.41, 13.59 ], "formula_id": "formula_18", "formula_text": "T * ijk = (ϕ ⋆ ) ′ (w * i + z * j -C ijk ). (14" }, { "formula_coordinates": [ 5, 288.35, 573.16, 4.15, 8.64 ], "formula_id": "formula_19", "formula_text": ")" }, { "formula_coordinates": [ 5, 363.92, 70.78, 194.08, 30.55 ], "formula_id": "formula_20", "formula_text": "α i - n-1 k=0 m-1 j=0 (ϕ ⋆ ) ′ (w i + z j -C ijk ),(15)" }, { "formula_coordinates": [ 5, 368.42, 362.49, 189.58, 30.32 ], "formula_id": "formula_21", "formula_text": "α i -exp w i λ m-1 j=0 K ij exp z j λ ,(16)" }, { "formula_coordinates": [ 5, 384.26, 409.35, 169.59, 30.55 ], "formula_id": "formula_22", "formula_text": "K ij := n-1 k=0 exp - C ijk λ . (17" }, { "formula_coordinates": [ 5, 553.85, 420.08, 4.15, 8.64 ], "formula_id": "formula_23", "formula_text": ")" }, { "formula_coordinates": [ 5, 335.02, 456.37, 222.98, 33.53 ], "formula_id": "formula_24", "formula_text": "w i = λ   log α i -log   m-1 j=0 K ij exp z j λ     .(18)" }, { "formula_coordinates": [ 5, 352.1, 509.87, 173.3, 26.29 ], "formula_id": "formula_25", "formula_text": "p i = α i m-1 j=0 K ij q j , q j = β j m-1 i=0 K ij p i ," }, { "formula_coordinates": [ 6, 54, 72.2, 135.26, 13.32 ], "formula_id": "formula_26", "formula_text": "Require: a, b ∈ ∆ d , C ∈ R d×d ≥0" }, { "formula_coordinates": [ 6, 85.88, 174.6, 96.86, 14.67 ], "formula_id": "formula_27", "formula_text": "T ijk ← p i q j exp - C ijk λ" } ]
2023-11-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b44", "b47", "b68", "b44", "b47", "b35", "b68", "b47", "b68", "b70", "b62", "b3" ], "table_ref": [], "text": "Point-based representation is of great importance to computer graphics and computer vision. In the modern era of deep learning, neural networks can be designed to learn features from point clouds, facilitating 3D perception tasks such as object classification, object detection, and semantic segmentation in many downstream applications. Nevertheless, such evolutions still leave 3D perception a challenging and unsolved problem. A typical disadvantage of point-based representation is that surface information is implied by point density and orientation, if any. Due to such ambiguity, techniques for data augmentation on point clouds are relatively scarce and challenging to design.\nRecent advances in using neural networks to represent 3D data have opened new opportunities to revise and explore this problem from a new perspective [45,48,69]. One type of method is the so-called neural implicit representation based on the idea of training a neural network that can return queries of the 3D space from input coordinates [45,47,48,59]. Particularly, one can train a neural network to encode a 3D point to various attributes such as occupancy, color, or a general feature vector. The power of a neural implicit representation is that the queries can be performed at arbitrary points, and no special mechanism is required for value interpolation. Another type of methods [36,69,77] employs upsampling to achieve both distribution uniformity and proximity-to-surface. The advantages of the upsampling-based method lie in self-supervision and more uniformly distributed dense representation without the need of surface ground truth.\nIn this work, we investigate both types of strategies and leverage them as a systematic way for data augmentation at test time. Particularly, for implicit representation, we leverage the convolutional occupancy network [48] to encode the 3D point clouds to a regular grid representation that allows the interpolation of features at an arbitrary location. For the upsampling-based method, we employ the self-upsampling method [69] to obtain a dense and uniformly distributed proximity-to-surface point cloud. We propose an effective technique to aggregate features of the original and augmented point clouds to generate the final prediction. We select the task of object classification and semantic segmentation as the downstream task to validate our augmentation technique, as they play a key role in many practical applications, including perception in robotics and autonomous driving. We experiment with point cloud data from Mod-elNet40 [71], ShapeNet [7], ScanObjectNN [63] and Se-manticKITTI [4] dataset, which demonstrates significant performance improvement.\nIn summary, our key contributions are: • We analyze and compare existing reconstruction approaches, including surface-based sampling and point cloud upsampling for test-time augmentation. • We propose a test-time augmentation method for 3D point cloud deep learning, which is suitable for both approaches; • We identified a self-supervised point cloud upsampling method as a robust method for our test-time augmentation.\nIt uses the proximity-to-surface cues to sample augmented point clouds. 
• Extensive experiments and analysis prove the effectiveness of our augmentation method on two downstream tasks, including object classification and semantic segmentation on synthetic and real-world datasets. [79] with the supreme performance achieved on classification, retrieval, and segmentation tasks. There are also approaches [16,21,25,65,73] specially designed for semantic segmentation. Huang et al. [25] learn the local structure, particularly for semantic segmentation, by applying learning algorithms from recurrent neural networks. SPG [32] constructs graphs between coarsely segmented super-points for large-scale point cloud semantic segmentation." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b77", "b77", "b69", "b70", "b70", "b59", "b48", "b52", "b4", "b17", "b44", "b47", "b0", "b39", "b2" ], "table_ref": [], "text": "To balance the efficiency and accuracy, hybrid works [33,42,78] utilize the characteristics of multiple representations. PointGrid [33] assigns fix number of points in each grid cell, making the conventional CNN feasible. While it runs fast, the accuracy is still not high. PVCNN [42] represents the input in points and performs the convolutions in voxels with a superior performance achieved than sole point or voxel representations. To handle large-scale lidar point cloud data, FusionNet [78] divides the input into voxels and extracts features from both voxels and inner points. Neural 3D Reconstruction. 3D reconstruction works can be classified into four categories in terms of the output representation: voxel-based, point-based, mesh-based, and implicit function-based methods.\nSimilar to semantic segmentation, voxel is also a popular representation for 3D reconstruction [11,70,71]. In the category, voxel grids are used to store either occupancy that encodes whether the voxel is occupied or not [11,71] or SDF information that holds signed projective distances from voxels to the closest surfaces [14,38,60]. However, as mentioned in the segmentation works, such methods inherit the limitations of high memory costs.\nAnother line of works output point clouds directly for 3D reconstruction [15,39,49,75]. These methods design generative models to produce dense points for scene representation. Despite the efficiency, the generated points cannot sufficiently represent complicated surfaces as there is no topology between the points.\nMesh is another popular output representation for 3D reconstruction. In this category, some works deform shapes with simple topology to more complicated shapes, which usually constrain to certain fixed templates [27,53] or topologies [5,58]. To reconstruct a shape of arbitrary topology, AtlasNet [18] warps multiple 2D planes into 3D shapes. Despite the superior results, this method can result in selfintersecting mesh faces.\nTo overcome the limitations of the above explicit representations (voxel, point, mesh), more recent works focus on implicit representations that employ occupancy [45,48] and distance field [6, 47] with a neural network to infer an occupancy probability or distance value for the input 3D points. As implicit representation models shape continuously, more detail is preserved, and more complicated shape topologies can be obtained. In this work, we also employ implicit representation to aim to augment a point cloud for various downstream tasks. Point Cloud Upsampling. 
Point cloud upsampling can produce a dense, uniform, and complete point cloud from a sparse and noisy complete with or without missing parts.\nTraditional Point Cloud Upsampling: A seminal point cloud upsampling algorithm is to interpolate points as vertices of a Voronoi diagram [1]. [40] later proposed an algorithm by introducing the locally optimal linear projector for surface reconstruction and using it to project a set of points onto the input point cloud. This work was followed by [22], who proposed a weighted locally linear operator in order to make the point cloud distribution more even. [23] introduces an edge-aware resampling method by sampling points on edge and calculating the normals at those points. All of the above-mentioned methods are not data-driven and thus heavily rely on priors like normal estimation.\nDeep-Learning Based Point Cloud Upsampling: PU-Net [77] was the first deep learning-based point cloud upsampling method that used a multi-branch feature expansion module to extract multi-scale features and expand a point cloud in the feature space. This was followed by EC-Net [76], which achieves edge-aware point cloud upsampling by learning distance features obtained by the perturbation of the generated point cloud relative to the input point cloud. In this work, we propose to use point cloud upsampling as a test-time augmentation technique. " }, { "figure_ref": [], "heading": "Our Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Overview", "publication_ref": [ "b47", "b16", "b68" ], "table_ref": [], "text": "Given a point set {p i } n i=1 with p i ∈ R 3 represented by a matrix x 0 ∈ R n×3 . Without loss of generality, we assume at inference, x 0 is passed to pre-trained network f for feature extraction, and the features are passed to a network g for final label prediction f (x 0 ). Our goal is to achieve performance improvement in the downstream task via test-time augmentation, where the final prediction can be defined as:\ng(ϕ(f (x 0 ), f (x 1 ), f (x 2 ), ...))(1)\nwhere ϕ is an aggregation function to combine multiple features resulting from the original point set x 0 and the augmented point sets x 1 , x 2 , etc. Note that the network f and g are pre-trained and left untouched in test-time augmentation; only the input is augmented. Traditionally, a simple method for test-time augmentation is jittering, which adds Gaussian noise to perturb the point cloud x 0 to generate an augmented point cloud x k :\nx k = x 0 + λz k (2)\nwhere z k ∼ N (0, I) is a random noise vector from a normal distribution, and λ is a scalar value to control the noise magnitude. This simple augmentation has been widely adopted As can be seen, the shape quality of supervised reconstruction [48] using neural implicit representation is finer and smoother compared to unsupervised method [17] and Screened Poisson method [28]. In addition, we can obtain a dense and uniformly distributed proximity-to-surface point cloud using Self-supervised Upsampling [69], which contributes to the success of our method. Best viewed with zoom.\nsince the seminal PointNet [50]. An issue of such augmentation is that it does not consider the underlying surface or point distribution because the noise z is independent of x 0 , resulting in marginal performance improvement in many cases. 
In this work, we viewpoint set x 0 as a noisy estimate of a latent surface representation S, and therefore, we define point cloud augmentation as the process of sampling additional point clouds x k that explain the same surface. We propose to sample augmented point clouds x k (k ≥ 1) in two ways: surface sampling and point cloud up-sampling. The sampled point clouds can then be leveraged for downstream tasks such as classification and segmentation. Our method is visualized in Figure 1.\nIn the following sections, we explain the technique for sampling augmented point clouds using an implicit representation network (Section 3.2) and a self-supervised point upsampling network (Section 3.3). We then present downstream tasks that leverage the proposed test-time augmentation and discuss feature aggregation and final label prediction for point cloud classification and segmentation." }, { "figure_ref": [], "heading": "Augmentation by Implicit Field Reconstruction", "publication_ref": [ "b44", "b47", "b44" ], "table_ref": [], "text": "We are motivated by the recent advances in geometry reconstruction using neural implicit representation. The basic idea is to learn a mapping f θ : R 3 -→ {0, 1} using a neural network parameterized by θ. This function implicitly encodes the geometry in the 3D space to allow the query of the occupancy at any point in the 3D space. To obtain the geometry explicitly, the Marching Cubes algorithm [43] can generate a triangle mesh containing surfaces at zero crossings in the implicit field. Our neural implicit field is built upon the convolutional occupancy network [45,48]. The convolutional occupancy network uses a combination of convolutional and linear layers, thus endowing its features with equivariance and scalability. This enables the network to produce implicit representations for both single objects and large-scale scenes. Our implementation uses the network variant that stores features on a 3D regular grid.\nEncoder. The encoder is a shallow PointNet [50] but with local pooling layers. By using these input features generated by the local PointNet encoder, we obtain a 32 3 volumetric feature grid that captures the local information in the neighborhood of the points, which is necessary to capture local geometric information about the shape of the input point cloud. Due to memory constraints, the volumetric feature can represent rich 3D information but is restricted to small resolutions and sizes.\nDecoder. To endow the encoder features with inductive bias, the occupancy network uses a 3D UNet encoder [81] to process the volumetric feature grid. Since U-Net contains convolutional operations, this also introduces translational equivariance in the encoder features, which makes it able to predict the occupancy of different shapes but from the same categories. These aggregated feature maps from the U-Net [54] are then fed into a decoder for predicting occupancy labels. To predict the occupancy value at any arbitrary position, we use tri-linear interpolation to find the features at that point by using the features of all points belonging to the same voxel in the volumetric grid. This point's location and features are passed through a decoder that outputs an occupancy value for each 3D grid location.\nSurface Sampling. We render an output mesh of the given input point cloud from the predicted occupancy of the grid points of the convolutional occupancy network using the MISE algorithm [45]. 
We then produce an augmented version x k of the original point cloud x 0 by randomly sampling a point cloud from the vertices of the rendered mesh, where k indicates the k-th augmentation." }, { "figure_ref": [], "heading": "Augmentation by Point Cloud Upsampling", "publication_ref": [ "b68", "b44", "b66", "b68" ], "table_ref": [], "text": "Inspired by [69], we upsample input sparse point cloud\nx = {p i } n i=1 ∈ R n×3 to dense point cloud y = {p i } N i=1 ∈ R N ×3 including N = ⌊r × n⌋\npoints, where r is desirable scaling factor (set default to 4). The high-resolution point cloud y must be dense, uniform, complete, and noisetolerant. The self-supervised point cloud upsampling strategy includes four steps: seeds sampling, surface projection, outliers removal, and arbitrary-scale point cloud generation.\nSeeds Sampling. To obtain uniformly sampled seed points, given a point cloud, we divide the 3D space into equally spaced voxels and estimate the distance from centers to the surface by computing the distance to the triangles formed by the nearest points. Then we choose the centers in a preset range as the seed points.\nSurface Projection. Given a seed point c, we obtain the coordinate of the projection point of the seed point c as: c p = c + n × d, where n ∈ [-1, 1] 3 and d ∈ R are projection direction and projection distance, respectively. The n and d can be obtained by two multi-layer fully-connected neural networks f n , and f d , which borrows from Occupancy Network [45] and DGCNN [67]. The detail of architectures and training procedures can be found in [69].\nOutliers Removal. For a projection point c p , we determine a point as an outlier if b p > 1.5b, where b p is the average bias between c p and its nearest points and b is the average bias of all projection points.\nIn practice, outlier removal can be regarded as optional, but we empirically found that outlier removal can yield some minor performance improvement of downstream tasks such as classification and part segmentation, and therefore use this step by default in the augmentation.\nPoint Cloud Generation. We upsample the input point cloud x 0 to a dense point cloud y using the upsampling network. Then, we sample a fixed number of points from the upsampled point cloud y by using the farthest-point sampling algorithm to obtain an augmented point cloud " }, { "figure_ref": [], "heading": "Downstream Tasks.", "publication_ref": [ "b51" ], "table_ref": [], "text": "Object Classification.\nTo leverage the augmented point clouds for classification, for both PointNet [50], DGCNN [66], and PointNeXt [52], we extract the global features of each point cloud x k including the original point cloud x 0 , and then take an average of the features before passing them to the classifier. Without changing of notation, assume that f is the global feature extractor, and g is the classifier, we can write the label prediction as:\ng(avgpool(f (x 0 ), f (x 1 ), f (x 2 ), ...))(3)\nSemantic and Part Segmentation. For semantic segmentation and part segmentation, the aggregation function is more evolved. The basic idea is first to perform segmentation on each point cloud, and then aggregate the results to produce the final segmentation for the original point cloud x 0 , but now the aggregation occurs at a per-point level instead of the global features. 
Let f i (x k ) be the features of point i in point cloud x k , the label prediction of point i in the original point cloud x 0 can be written as:\ng(ϕ(f i (x 0 ), {f π1,i (x 1 )}, {f π2,i (x 2 )}, ...))(4)\nwhere π k,i indicates the corresponding points of point i in x 0 to point cloud x k , and g as the classifier or any postprocessing network. Here we propose a simple algorithm to establish such correspondences via nearest neighbors on the logit vectors, which are detailed in Algorithm 1.\nAlgorithm 1: Pseudo-code for our test-time augmentation for the segmentation task. " }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b47", "b68", "b66", "b51", "b62", "b70", "b3", "b47", "b62", "b70", "b51", "b3", "b68" ], "table_ref": [], "text": "We implement our method in Pytorch. We use the convolutional occupancy network [48], and self-supervised point upsampling network [69] for test-time augmentation. For downstream tasks, we experiment with pre-trained models for classification and part segmentation such as Point-Net [50], DGCNN [67], PointNeXt [52] as well as for largescale semantic scene segmentation such as RandLANet [21].\nDataset and metric. Our experiments are conducted on different datasets such as ShapeNet [7], ScanObjectNN [63], ModelNet40 [71], and SemanticKITTI [4] datasets, including indoor and outdoor environments with both synthetic and real data. the ShapeNet dataset [7].\nWe employ several popular metrics for evaluation, such as the overall and mean percentage accuracy are computed for the classification task, the Instance and Category Intersection-Over-Union (mInsIoU, mCatIoU) are utilized for the part segmentation task, and the mean Accuracy (mACC) and mean IoU (mIoU) are used for semantic segmentation task.\nData processing. For ShapeNet [7] dataset, we use the preprocessed data produced by PointNet [50], which is an early version of ShapeNet (version 0) to train the segmentation network. Nonetheless, the ShapeNet data used to train the convolutional occupancy network [48] is a different version (version 1). Since the number of objects differs in these variants of ShapeNet, we only use the objects that appear in both datasets. For ScanObjectNN [63], and ModelNet40 [71] datasets, we follow the instruction in the official implementation of PointNeXt [52]. For SemanticKITTI [4] dataset, we follow the instruction in the official implementation of RandLANet [21]. We also follow Self-UP [69] to prepare the data for point cloud upsampling." }, { "figure_ref": [], "heading": "Classification Results", "publication_ref": [ "b62", "b70", "b51", "b47", "b68" ], "table_ref": [ "tab_1" ], "text": "The object classification results are shown in Table 1 and are conducted on two challenging datasets (ScanObjectNN [63], and ModelNet40 [71]). ScanObjectNN presents considerable problems to the various point cloud analysis algorithms already in use due to occlusions and noise. Based on Point-NeXt [52], we conduct experiments on PB_T50_RS, the most challenging and widely deployed version of ScanOb-jectNN. 
Note that the reported performance of PointNet and DGCNN in our paper is higher than the original PointNet and DGCNN paper because we adopt the re-implementation of PointNet and DGCNN from the PointNeXt paper, which includes optimized training strategies.\nAs can be seen, the optimized baseline model by Point-Net [50] performed very well on both ModelNet40 and ScanObjectNN classification. Despite such, applying augmentation with our method leads to a performance boost of 1 -2%, which is a significant gain given the saturating accuracy of this dataset. We also empirically found that augmenting with more than one sampled point cloud does not significantly improve this task.\nNote that as convolutional occupancy network [48] requires ground truth signed distance functions to train surface reconstruction, we only perform the classification task using self-supervised point upsampling [69]." }, { "figure_ref": [ "fig_4" ], "heading": "Segmentation Results", "publication_ref": [ "b51", "b3" ], "table_ref": [ "tab_2", "tab_3" ], "text": "Part Segmentation. The part segmentation results are shown in Table 2 and Table 3. It can be seen that by applying our method, the mInsIoU, and mCatIoU are improved compared to the baseline approach. The results also demonstrate the robustness of our method as it works well with different network backbones, e.g., PointNet [50] that involves only per-point and global point cloud features, DGCNN [66] which establishes and learns dynamic graphs in point neighborhoods, and the SOTA PointNeXt [52]. It is worth noting that the improvement is mainly gained from the refinement of the segmentation boundaries (Figure 4). Semantic Segmentation. To assess the generalizability of our strategy, we also tested on real-world data from Se-manticKITTI [4] dataset. As SemanticKITTI data is captured by LiDAR sensors, it is favorable to use point upsampling as the augmentation technique. It can be seen in Table 4, by applying our method, the mAcc and mIoU are improved compared to the baseline approach." }, { "figure_ref": [], "heading": "Additional Analysis", "publication_ref": [ "b66", "b47", "b68", "b16", "b29", "b47", "b68" ], "table_ref": [ "tab_4", "tab_5", "tab_6", "tab_7" ], "text": "We perform additional experiments to validate the performance of our test-time augmentation. We select the segmentation task for these experiments as it produces dense prediction, which can be seen as generalized classifications.\nPoint density. In Figure 3, we plot the segmentation accuracies (mIoU) across different numbers of input points. Specifically, we randomly sample 128, 256, 512, 1024, and 2048 points as input to perform the segmentation. Compared to PointNet, it can be seen that our augmentation offers significant performance improvement on sparse point clouds We also found that by varying the number of input points (Figure 3), DGCNN cannot perform well on sparse point clouds with a very large performance gap between the sparse and dense point clouds (more than 20% between 128 and 2048 points). This is because for sparse point clouds, the neighbor graphs by DGCNN degenerate [67]. Despite such, our test-time augmentation can still improve the performance and significantly reduce the performance gap to around 6%. This shows that our test-time augmentation is robust to the number of input points.\nAblation study. We conduct an ablation study on the part segmentation task on ShapeNet and provide the results in Table 5. 
We select the segmentation task as it is a generalized form of classification at per-point level, and also aim to justify the more complex design choices in the aggregation function for this task. We use inputs with 2048 points. Our baseline is an implementation that k-nearest neighbors are performed with just 3D coordinates as features. By adding logits as features, we can have 2% gain in mIoU (model A). We also test different aggregate functions like max pooling and average pooling, and find that average pooling performs better (model B vs. C). Additionally, we repeat the sampling to obtain multiple augmented point clouds. By fusing the segmentation of these augmented point clouds to the original point cloud, further improvement can be achieved (model A&C and B&C). This shows that it is critical to compute accurate correspondences between the augmented point cloud and the original point cloud to achieve higher accuracies. From the above analysis, we can see that multi-sampling and changing aggregate functions can yield further improvement.\nComparison among augmentation techniques. We provide a comparison to study which augmentation technique should be used in practice. We compare the popular Screened Poisson reconstruction [28] to convolutional neural network [48] and point cloud upsampling [69]. The results in Table 6 show that our augmentation techniques are more favorable in performance than Screened Poisson reconstruction. The performance between the convolutional occupancy network and self-supervised upsampling are rather similar, with the convolutional occupancy network is slightly better in the instance IoU metric. We hypothesize that when (ground truth) surface information is available, it could be used to supervise the augmentation, else point cloud upsampling could be an effective and robust augmentation in several scenarios. We also explore a recent unsupervised reconstruction [17] but find that the shape quality is poor compared to Screened Poisson reconstruction, and thus unsuitable for augmentation. Exploring more robust reconstruction could lead to interesting augmentation techniques for future work.\nComparison to traditional augmentation. Adding Gaussian noise is a commonly used traditional data augmentation scheme that perturbs the points by sampling from a Gaussian distribution. Combining the results from this perturbation in test-time augmentation is known as voting [30,41]. In our implementation, we sample offsets from a zero-mean Gaussian with different standard deviation σ and add the [48] 82.66 79.55 Self-UP [69] 82.70 78.99 offsets back to the original point clouds to form augmented point clouds. For our TTA, we sample one more point cloud and then average the global features of the additional point cloud and the original point cloud before passing them to the classifier. As can be seen in Table 7, our method outperforms the traditional augmentation scheme due to the implicit representation that allows more effective point sampling. Comparison with train-time augmentation is left for future work, as it requires model retraining, which is both more expensive and less robust especially when only pretrained models are provided. Augmentation without normals. We experiment with implicit surface representation for data augmentation in a practical setting where normals are not available. 
In this case, existing methods have to rely on the pure 3D coordinates (xyz), causing a performance decrease, while our method can easily solve this problem by sampling the 3D points as well as the normal vectors directly from the implicit surface. In this way, our method can maintain the performance to the same level regardless of the normal existence. This is verified in Table 8 that our augmentation outperforms the baseline when only the 3D coordinates (xyz) are available.\nComputation overhead. While having better performance, modern TTA, including our method, relies on a neural network to predict augmented samples from each input and thus has more overhead compared to traditional methods. For example, if we use M augmented point clouds, the overhead is approximate M times the original time cost. To circumvent this problem, we propose to exploit parallelism and use batched inference instead. More discussions can be found in the supplementary material." }, { "figure_ref": [], "heading": "Discussion and Conclusions", "publication_ref": [], "table_ref": [], "text": "We presented a new method for augmenting point clouds at test time by leveraging neural implicit and point upsampling networks to sample augmented point clouds and showed that such augmentation works effectively for the classification and semantic segmentation task. Our results are encouraging since this is one of the first attempts to design a test-time augmentation technique for 3D point cloud deep learning.\nA main difference between our TTA and traditional methods is that traditional methods only use simple transformations and are thus lightweight, but not input-aware and less robust. While our TTA requires more resources, the extra computation remains affordable and our method shows good results across tasks and datasets. We believe further explorations to reduce such performance trade-offs would be valuable contributions to this less-explored area of test-time augmentation for 3D point clouds." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgment. This research was supported by the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 1 grant (MSS23C010), and Ningbo 2025 Science and Technology Innovation Major Project (No. 2022Z072), and an internal grant from HKUST (R9429). This work is partially done when Srinjay Sarkar was a research resident at VinAI Research, Vietnam." }, { "figure_ref": [], "heading": "Test-Time Augmentation for 3D Point Cloud Classification and Segmentation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Supplementary Material", "publication_ref": [], "table_ref": [], "text": "6. Experiments 6.1. Computation overhead While having better performance, modern TTA, including our method, relies on a neural network to predict augmented samples from each input and thus has more overhead compared to traditional methods. For example, if we use M augmented point clouds, the overhead is approximately M times the original time cost. To circumvent this problem, we propose to exploit parallelism and use batched inference instead. First, our computation overhead is sub-linear, which means that even if we have 10 times as many augmented samples (the same number of augmented samples that we used for all of our experiments), the overhead will only increase by a factor of two (see Table 9). 
This overhead is manageable and can be reduced even further through the utilization of batched prediction, as demonstrated in Table 9 below. Additionally, additional engineering like deploying the network to an inference-only framework (TensorFlow Lite) would further optimize inference. Second, we can reduce the number of augmented samples (M ∈ {2, 4, 8, 10}), which would result in milder improvement in comparison to the baseline but would incur significantly less overhead (see Table 9). Finally, the overhead of TTA can be offset by its ease of use compared to other methods for performance improvement, e.g., when only pre-trained models are given or when retraining the entire model is not possible. " }, { "figure_ref": [], "heading": "Semantic Segmentation on S3DIS dataset", "publication_ref": [], "table_ref": [], "text": "We provide quantitative and qualitative results of semantic segmentation on S3DIS dataset in Table 10 and Figure 5.\nAs can be seen, our TTA is effective and improves upon the baseline PointNeXt." }, { "figure_ref": [], "heading": "Semantic Segmentation on SemanticKITTI", "publication_ref": [], "table_ref": [], "text": "We provide qualitative results of semantic segmentation on SemanticKITTI dataset in Figure 6." }, { "figure_ref": [], "heading": "Part Segmentation on ShapeNet dataset", "publication_ref": [], "table_ref": [], "text": "We provide more qualitative results of part segmentation on ShapeNet dataset in Figure 7 and Figure 8. " } ]
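For classification, the sections above aggregate test-time augmentations by averaging the global features of the original and augmented point clouds before the frozen classifier, as in Eq. (3) of the text. The sketch below is a minimal PyTorch-style illustration of that step; f and g stand in for any pre-trained backbone and classification head (e.g., PointNet, DGCNN, PointNeXt), and the assumed function names and tensor shapes are illustrative rather than the authors' code.

import torch

def classify_with_tta(f, g, x0, augmented_clouds):
    # x0: (N, 3) original cloud; augmented_clouds: list of (N_k, 3) clouds
    # sampled from the reconstructed surface or the upsampled point cloud.
    with torch.no_grad():
        feats = [f(x0.unsqueeze(0))]                            # (1, D) global feature
        feats += [f(xk.unsqueeze(0)) for xk in augmented_clouds]
        pooled = torch.stack(feats, dim=0).mean(dim=0)          # average pooling, Eq. (3)
        return g(pooled).argmax(dim=-1)                         # predicted class label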
Data augmentation is a powerful technique for enhancing the performance of deep learning tasks, but it has received less attention in 3D deep learning. It is well known that when 3D shapes are sparsely represented with low point density, the performance of downstream tasks drops significantly. This work explores test-time augmentation (TTA) for 3D point clouds. We are inspired by recent advances in implicit representation learning and point cloud upsampling, which can produce high-quality 3D surface reconstructions and dense proximity-to-surface point clouds, respectively. Our idea is to leverage implicit field reconstruction or point cloud upsampling as a systematic way to augment point cloud data. Specifically, we test both strategies by sampling points from the reconstructed results and using the sampled point clouds as test-time augmented data. We show that both strategies are effective in improving accuracy. We further observe that point cloud upsampling for test-time augmentation leads to more significant performance improvements on downstream tasks such as object classification and segmentation on the ModelNet40, ShapeNet, ScanObjectNN, and SemanticKITTI datasets, especially for sparse point clouds.
Test-Time Augmentation for 3D Point Cloud Classification and Segmentation
[ { "figure_caption": "Figure 1 .1Figure1. Illustration of our test-time augmentation method for point clouds downstream tasks such as classification and segmentation. We view the input point cloud as a noisy estimate of a latent surface and propose using an implicit field represented by an occupancy network or a point cloud upsampling network to sample augmented point clouds so that the point clouds share the same underlying surfaces. We then perform the downstream task on each point cloud and aggregate the point features to produce the final result.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Visual comparison of different reconstruction and upsampling methods. From left to right: Input, Up-sampling point clouds, Unsupervised reconstruction, Poisson reconstruction, and Supervised reconstruction.As can be seen, the shape quality of supervised reconstruction[48] using neural implicit representation is finer and smoother compared to unsupervised method[17] and Screened Poisson method[28]. In addition, we can obtain a dense and uniformly distributed proximity-to-surface point cloud using Self-supervised Upsampling[69], which contributes to the success of our method. Best viewed with zoom.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "x k with the desired number of points, where k indicates the k-th augmented point cloud. The examples are shown in Figure 2.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(feat, X_i) for each point q in neighbors: logit = agg(logit, get_log(q)) label = argmax(logit)", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Visualization of part segmentation results. From left to right: Input, Reconstruction, Ours, Difference map, and GT. In the difference map, where blue and red points indicate correct and wrong labels, respectively, our test-time augmentation mainly deals with the labels along the boundaries, improving accuracies through aggregating predictions from augmented point clouds.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "3D object classification in ModelNet40[71] and ScanOb-jectNN[63] using self-supervised upsampling point clouds[69].", "figure_data": "MethodModelNet40ScanObjectNN (PB_T50_RS)oAcc mAcc oAccmAccPointNet [50]89.20 86.20 68.2063.40Ours92.07 88.78 76.6972.93DGCNN [67]92.90 90.20 78.1073.60Ours94.23 91.79 87.7185.84PointNeXt [52]93.96 91.14 88.1886.83Ours95.48 92.96 90.3888.99PointMixer [10]91.41 87.89 82.5180.03Ours92.71 90.42 84.1881.25PointTransformer [80] 90.64 87.84 82.3180.77Ours92.55 89.73 83.6681.3790908580807075128 256 512 1024 204860128 256 512 1024 2048PointNet OursDGCNN Ours90908080707060128 256 512 1024 204860128 256 512 1024 2048PointNet OursDGCNN OursFigure 3. TTA using surface sampling (top row) and self-supervisedupsampling (bottom row) on part segmentation on ShapeNet withdifferent numbers of points. We found that applying TTA for sparsepoint clouds of the surface sampling method yields significant im-provement. In contrast, the improvement of TTA on upsamplingpoint clouds is more stable, thanks to dense and uniformly dis-tributed proximity-to-surface point clouds. The horizontal axis isin log scale. 
Best viewed with zoom.", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Part segmentation on ShapeNet [7] using selfsupervised upsampling point clouds as input.", "figure_data": "2048 pointsmInsIoU mCatIoUPointNet [50]80.7483.73Ours82.8886.25DGCNN [67]81.0884.18Ours83.3886.70PointNeXt [52]84.2386.73Ours85.0787.60", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Part segmentation on ShapeNet [7] using surface sampling with different numbers of points.", "figure_data": "Method128 points256 pointsmInsIoU mCatIoU mInsIoU mCatIoUPointNet [50]79.0681.7283.1285.12Ours79.5582.6683.2585.82DGCNN [67]59.7566.3469.8874.57Ours71.6381.9579.9885.65", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation studies of our test-time augmentation on ShapeNet [7] using surface sampling. Performing k-nearest neighbor search on high-dimensional feature space (model A, B) and using the average function (model B) for aggregating predictions result in improved accuracies. The performance can be further boosted by using extra augmented point clouds (model A&C and B&C). The reported metric is mCatIoU.", "figure_data": "2048 pointsPointNet [50] DGCNN [67]xyz (max)86.4584.26A: w/ logit (max)88.3085.90B: w/ logit (avg)88.4386.05C: w/ 10x samples86.3984.13A&C (max)88.2685.96B&C (avg)88.5886.16(128 and 256 points) and performs similarly to PointNetwhen the input points get denser.", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparison of different augmentation methods on part segmentation with PointNet [50] as backbone on ShapeNet [7] dataset. TTA is done using Screened Poisson reconstruction [28], Neural Implicit representation[48], and Point Clouds Upsampling[69].", "figure_data": "128 pointsmCatIoU mInsIoUPoisson [28]81.7977.86Implicit", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Comparison with traditional augmentation with different values of σ on the classification task using surface sampling on ShapeNet [7]. The backbone of our TTA is PointNet [50].", "figure_data": "2048 points mAccσ = 0.0598.21σ = 0.0797.69σ = 0.196.54Ours98.53", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Augmentation with normals using surface sampling on ShapeNet [7]: we classify an original point cloud w/o normal vectors by using an implicit field to sample the normals and then classify the augmented point cloud. The backbone is PointNet [50].", "figure_data": "2048 pointsmAccOrg. xyz97.73Aug. xyz98.53Aug. xyz & normals 98.38", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" } ]
Tuan-Anh Vu; Srinjay Sarkar; Zhiyuan Zhang; Binh-Son Hua; Sai-Kit Yeung
[ { "authors": "Marc Alexa; Johannes Behr; Daniel Cohen-Or; Shachar Fleishman; David Levin; Cláudio T Silva", "journal": "IEEE Trans. Vis. Comput. Graph", "ref_id": "b0", "title": "Computing and rendering point set surfaces", "year": "2003" }, { "authors": "Iro Armeni; Ozan Sener; Helen Amir R Zamir; Ioannis Jiang; Martin Brilakis; Silvio Fischer; Savarese", "journal": "", "ref_id": "b1", "title": "3d semantic parsing of large-scale indoor spaces", "year": "2016" }, { "authors": "Murat Seckin; Ayhan ; Philipp Berens", "journal": "", "ref_id": "b2", "title": "Test-time data augmentation for estimation of heteroscedastic aleatoric uncertainty in deep neural networks", "year": "2018" }, { "authors": "Jens Behley; Martin Garbade; Andres Milioto; Jan Quenzel; Sven Behnke; Cyrill Stachniss; Jurgen Gall", "journal": "", "ref_id": "b3", "title": "Semantickitti: A dataset for semantic scene understanding of lidar sequences", "year": "2019" }, { "authors": "Heli Ben-Hamu; Haggai Maron; Itay Kezurer; Gal Avineri; Yaron Lipman", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b4", "title": "Multi-chart generative surface modeling", "year": "2018" }, { "authors": "Rohan Chabra; Jan E Lenssen; Eddy Ilg; Tanner Schmidt; Julian Straub; Steven Lovegrove; Richard Newcombe", "journal": "", "ref_id": "b5", "title": "Deep local shapes: Learning local sdf priors for detailed 3d reconstruction", "year": "2020" }, { "authors": "X Angel; Thomas A Chang; Leonidas J Funkhouser; Pat Guibas; Qi-Xing Hanrahan; Zimo Huang; Silvio Li; Manolis Savarese; Shuran Savva; Hao Song; Jianxiong Su; Li Xiao; Fisher Yi; Yu", "journal": "", "ref_id": "b6", "title": "Shapenet: An information-rich 3d model repository", "year": "2015" }, { "authors": "Yunlu Chen; Vincent Tao Hu; Efstratios Gavves; Thomas Mensink; Pascal Mettes; Pengwan Yang; G M Cees; Snoek", "journal": "", "ref_id": "b7", "title": "Pointmixup: Augmentation for point clouds", "year": "2020" }, { "authors": "Shuyang Cheng; Zhaoqi Leng; Ekin Dogus Cubuk; Barret Zoph; Chunyan Bai; Jiquan Ngiam; Yang Song; Benjamin Caine; Vijay Vasudevan; Congcong Li; Quoc V Le; Jonathon Shlens; Dragomir Anguelov", "journal": "", "ref_id": "b8", "title": "Improving 3d object detection through progressive population based augmentation", "year": "2020" }, { "authors": "Jaesung Choe; Chunghyun Park; Francois Rameau; Jaesik Park; In So Kweon", "journal": "", "ref_id": "b9", "title": "Pointmixer: Mlp-mixer for point cloud understanding", "year": "2022" }, { "authors": "Danfei Christopher B Choy; Junyoung Xu; Kevin Gwak; Silvio Chen; Savarese", "journal": "", "ref_id": "b10", "title": "3d-r2n2: A unified approach for single and multi-view 3d object reconstruction", "year": "2016" }, { "authors": "Angela Dai; Matthias Nießner", "journal": "", "ref_id": "b11", "title": "3dmv: Joint 3d-multi-view prediction for 3d semantic scene segmentation", "year": "2018" }, { "authors": "Angela Dai; X Angel; Manolis Chang; Maciej Savva; Thomas Halber; Matthias Funkhouser; Niessner", "journal": "", "ref_id": "b12", "title": "Scannet: Richlyannotated 3d reconstructions of indoor scenes", "year": "2017" }, { "authors": "Angela Dai; Charles Ruizhongtai Qi; Matthias Nießner", "journal": "", "ref_id": "b13", "title": "Shape completion using 3d-encoder-predictor cnns and shape synthesis", "year": "2017" }, { "authors": "Haoqiang Fan; Hao Su; Leonidas J Guibas", "journal": "", "ref_id": "b14", "title": "A point set generation network for 3d object reconstruction from a single image", "year": "2017" }, { 
"authors": "Benjamin Graham; Martin Engelcke; Laurens Van Der Maaten", "journal": "", "ref_id": "b15", "title": "3d semantic segmentation with submanifold sparse convolutional networks", "year": "2018" }, { "authors": "Amos Gropp; Lior Yariv; Niv Haim; Matan Atzmon; Yaron Lipman", "journal": "", "ref_id": "b16", "title": "Implicit geometric regularization for learning shapes", "year": "2020" }, { "authors": "Thibault Groueix; Matthew Fisher; Vladimir G Kim; Bryan Russell; Mathieu Aubry", "journal": "", "ref_id": "b17", "title": "AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation", "year": "2018" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b18", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Andrew G Howard", "journal": "", "ref_id": "b19", "title": "Some improvements on deep convolutional neural network based image classification", "year": "2013" }, { "authors": "Qingyong Hu; Bo Yang; Linhai Xie; Stefano Rosa; Yulan Guo; Zhihua Wang; Niki Trigoni; Andrew Markham", "journal": "", "ref_id": "b20", "title": "Randla-net: Efficient semantic segmentation of large-scale point clouds", "year": "2020" }, { "authors": "Hui Huang; Dan Li; Hao Zhang; U Ascher; Daniel Cohen-Or", "journal": "ACM Trans. Graph", "ref_id": "b21", "title": "Consolidation of unorganized point clouds for surface reconstruction", "year": "2009" }, { "authors": "Hui Huang; Shihao Wu; Minglun Gong; Daniel Cohen-Or; U Ascher", "journal": "ACM Transactions on Graphics", "ref_id": "b22", "title": "Edge-aware point set resampling", "year": "2013" }, { "authors": "Jing Huang; Suya You", "journal": "", "ref_id": "b23", "title": "Point cloud labeling using 3d convolutional neural network", "year": "2016" }, { "authors": "Qiangui Huang; Weiyue Wang; Ulrich Neumann", "journal": "", "ref_id": "b24", "title": "Recurrent slice networks for 3d segmentation of point clouds", "year": "2018" }, { "authors": "Evangelos Kalogerakis; Melinos Averkiou; Subhransu Maji; Siddhartha Chaudhuri", "journal": "", "ref_id": "b25", "title": "3d shape segmentation with projective convolutional networks", "year": "2017" }, { "authors": "Angjoo Kanazawa; J Michael; David W Black; Jitendra Jacobs; Malik", "journal": "", "ref_id": "b26", "title": "End-to-end recovery of human shape and pose", "year": "2018" }, { "authors": "Matthew Michael Kazhdan; Hugues Bolitho; Hoppe", "journal": "Eurographics Association", "ref_id": "b27", "title": "Poisson surface reconstruction", "year": "2006" }, { "authors": "Ildoo Kim; Younghoon Kim; Sungwoong Kim", "journal": "", "ref_id": "b28", "title": "Learning loss for test-time augmentation", "year": "2020" }, { "authors": "Roman Klokov; Victor Lempitsky", "journal": "", "ref_id": "b29", "title": "Escape from cells: Deep kd-networks for the recognition of 3d point cloud models", "year": "2017" }, { "authors": "Alex Krizhevsky; Ilya Sutskever; Geoffrey E Hinton", "journal": "", "ref_id": "b30", "title": "Imagenet classification with deep convolutional neural networks", "year": "2012" }, { "authors": "Loic Landrieu; Martin Simonovsky", "journal": "", "ref_id": "b31", "title": "Large-scale point cloud semantic segmentation with superpoint graphs", "year": "2018" }, { "authors": "Truc Le; Ye Duan", "journal": "", "ref_id": "b32", "title": "Pointgrid: A deep network for 3d shape understanding", "year": "2018" }, { "authors": "Hui Li; Peng Wang; Chunhua Shen; Guyu Zhang", "journal": "", "ref_id": "b33", "title": "Show, attend 
and read: A simple and strong baseline for irregular text recognition", "year": "2019" }, { "authors": "Ruihui Li; Xianzhi Li; Pheng-Ann Heng; Chi-Wing Fu", "journal": "", "ref_id": "b34", "title": "PointAugment: An auto-augmentation framework for point cloud classification", "year": "2020" }, { "authors": "Ruihui Li; Xianzhi Li; Pheng-Ann Heng; Chi-Wing Fu", "journal": "", "ref_id": "b35", "title": "Point cloud upsampling via disentangled refinement", "year": "2021" }, { "authors": "Yangyan Li; Rui Bu; Mingchao Sun; Wei Wu; Xinhan Di; Baoquan Chen", "journal": "", "ref_id": "b36", "title": "Pointcnn: Convolution on x-transformed points", "year": "2018" }, { "authors": "Yiyi Liao; Simon Donne; Andreas Geiger", "journal": "", "ref_id": "b37", "title": "Deep marching cubes: Learning explicit surface representations", "year": "2018" }, { "authors": "Chen-Hsuan Lin; Chen Kong; Simon Lucey", "journal": "", "ref_id": "b38", "title": "Learning efficient point cloud generation for dense 3d object reconstruction", "year": "2018" }, { "authors": "Yaron Lipman; Daniel Cohen-Or; David Levin; Hillel Tal-Ezer", "journal": "ACM Trans. Graph", "ref_id": "b39", "title": "Parameterization-free projection for geometry reconstruction", "year": "2007" }, { "authors": "Yongcheng Liu; Bin Fan; Shiming Xiang; Chunhong Pan", "journal": "", "ref_id": "b40", "title": "Relation-shape convolutional neural network for point cloud analysis", "year": "2019" }, { "authors": "Zhijian Liu; Haotian Tang; Yujun Lin; Song Han", "journal": "", "ref_id": "b41", "title": "Pointvoxel cnn for efficient 3d deep learning", "year": "2019" }, { "authors": "William E Lorensen; Harvey E Cline", "journal": "SIG-GRAPH Comput. Graph", "ref_id": "b42", "title": "Marching cubes: A high resolution 3d surface construction algorithm", "year": "1987" }, { "authors": "Alexander Lyzhov; Yuliya Molchanova; Arsenii Ashukha; Dmitry Molchanov; Dmitry Vetrov", "journal": "", "ref_id": "b43", "title": "Greedy policy search: A simple baseline for learnable test-time augmentation", "year": "2020" }, { "authors": "Lars Mescheder; Michael Oechsle; Michael Niemeyer; Sebastian Nowozin; Andreas Geiger", "journal": "", "ref_id": "b44", "title": "Occupancy networks: Learning 3d reconstruction in function space", "year": "2019" }, { "authors": "Nikita Moshkov; Botond Mathe; Attila Kertesz-Farkas; Reka Hollandi; Peter Horvath", "journal": "Scientific reports", "ref_id": "b45", "title": "Test-time augmentation for deep learning-based cell segmentation on microscopy images", "year": "2020" }, { "authors": "Jeong Joon Park; Peter Florence; Julian Straub; Richard Newcombe; Steven Lovegrove", "journal": "", "ref_id": "b46", "title": "Deepsdf: Learning continuous signed distance functions for shape representation", "year": "2019" }, { "authors": "Songyou Peng; Michael Niemeyer; Lars Mescheder; Marc Pollefeys; Andreas Geiger", "journal": "", "ref_id": "b47", "title": "Convolutional occupancy networks", "year": "2008" }, { "authors": "Sergey Prokudin; Christoph Lassner; Javier Romero", "journal": "", "ref_id": "b48", "title": "Efficient learning on point clouds with basis point sets", "year": "2019" }, { "authors": "Hao Charles R Qi; Kaichun Su; Leonidas J Mo; Guibas", "journal": "", "ref_id": "b49", "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "year": "2008" }, { "authors": "Charles Ruizhongtai; Qi ; Li Yi; Hao Su; Leonidas J Guibas", "journal": "", "ref_id": "b50", "title": "Pointnet++: Deep hierarchical feature learning on 
2023-11-22
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b7", "b8", "b9", "b10", "b8", "b11", "b12", "b13", "b12", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b20", "b22", "b23", "b24", "b25", "b26", "b27" ], "table_ref": [], "text": "With the rapid development of big data [1], [2], artificial intelligence, and Web 3.0 [3], [4], large language models (LLMs) [5]- [8] have become a research hotspot. LLMs are deep learning models that learn the underlying patterns and rules of language by training on large-scale corpora. They possess powerful capabilities in generating and understanding natural language and have been widely applied in natural language processing (NLP) [9], machine translation [10], dialogue systems [11], AI-generated content (AIGC) [9], social cognitive computing, among other fields. Education is a significant domain that plays a crucial role in the development and progress of human society. Traditional educational models face challenges such as individual differences among students, insufficient allocation of teaching resources, and the assessment of teaching effectiveness [12]. Therefore, incorporating LLMs into the field of education holds the potential to provide support for personalized learning [13], intelligent tutoring, adaptive assessment [14], and other aspects, thereby improving the quality of education and the learning experience.\nIn the digital era, the education field currently faces various challenges [13], including low student engagement [15] and unequal distribution of teaching resources [16]. Traditional classroom teaching struggles to meet the personalized needs of different students. LLMs, as powerful natural language processing tools, have the potential to revolutionize traditional teaching models by enabling personalized learning and intelligent tutoring. Furthermore, with the advent of the big data era, the education field has accumulated a vast amount of learning data [17]. Utilizing this data for in-depth analysis and mining can reveal learners' patterns [18], evaluate learning outcomes [19], and provide personalized recommendations [20], [21]. LLMs have advantages in processing and analyzing large-scale data, making their application in the education field capable of providing deeper learning support and personalized education.\nLarge models refer to models with a massive number of parameters and computational capabilities [22]. LLMs are one type of large models, often involving billions of parameters. The essence of large models lies in their ability to handle complex tasks and large-scale data, enabling them to learn richer language patterns and knowledge representations [21]. This makes large models highly applicable in the field of education. Smart/intelligent education refers to the provision of personalized, adaptive, and intelligent educational services through the utilization of technologies such as artificial intelligence and big data. For smart education, educational large models (EduLLMs) refer to educational application models based on LLMs. By learning from extensive educational data and corpora, EduLLMs can provide personalized learning support [23], intelligent tutoring [24], and educational assessment capabilities to students [25]. The research status of EduLLMs demonstrates significant potential and opportunities. 
Firstly, EduLLMs can identify students' learning patterns and characteristics by learning from massive educational data, enabling the provision of personalized learning support and recommendations for educational resources. Secondly, EduLLMs can be applied to intelligent tutoring, providing real-time problem-solving, learning advice, and academic guidance through dialogue and interaction with students. Moreover, EduLLMs have the potential for educational assessment, automatically evaluating students' knowledge mastery, learning outcomes, and expressive abilities, thereby providing more comprehensive student evaluation and teaching feedback to educators.
However, the research on LLM4Edu still faces challenges and issues. Firstly, social cognitive learning is challenging in LLM4Edu. Data privacy and security are crucial considerations to ensure the protection of students' personal information [26]. The interpretability and fairness of LLM4Edu are also focal points [27], requiring the large models' decision-making processes to be interpretable and avoiding unfair biases caused by data. Moreover, the development and deployment of educational large models need to fully consider educational practices and teachers' professional knowledge to ensure the models are closely integrated with actual teaching [28].
This paper is a systematic summary and analysis of the research background, motivation, and applications of educational large models. By reviewing existing research, we provide an in-depth understanding of the potential and challenges of educational large models for education practitioners, researchers, and policymakers, offering guidance and insights for further advancing the development and application of EduLLMs. The main contributions of this article are as follows:
• This paper first reviews the background of education, LLMs, and smart education, respectively. It then introduces the connection between LLMs and education and also discusses smart education (Section II).
• This paper provides an in-depth understanding of the key technologies of EduLLMs, including natural language processing (NLP), machine learning, data mining, computer vision, etc. (Section III).
• We also discuss how LLMs empower education from the perspective of various applications of education under LLMs (Section IV-A), and further exhibit several distinct characteristics of education under LLMs (Section IV-B).
• We also summarize the key points in EduLLMs, including training data and preprocessing, the training process, and integration with various technologies (Section V).
• Finally, we highlight some key challenges existing in LLM4Edu (Section VI-A), and discuss potential future directions for LLM4Edu in more detail (Section VI-B)." }, { "figure_ref": [], "heading": "II. EDUCATION AND LLMS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Background of Education", "publication_ref": [ "b28", "b29" ], "table_ref": [], "text": "Education is a conscious process of facilitating and guiding individual development [29].
It involves imparting knowledge, fostering skills, and shaping attitudes and values, with the aim of promoting holistic growth and self-realization in learners.\nThe goal of education is to cultivate intellectual, emotional, moral, creative, and social adaptability in individuals, enabling them to make positive contributions to society.\nEducation takes various forms, including but not limited to:\n• School education: Traditional school education is the most common and widely accepted form, where students receive organized instruction from teachers and acquire knowledge and skills. • Online education: With the advancement of digital technologies, the internet and online platforms provide new forms of education [30]. Students can engage in learning through online courses, distance education, and other digital avenues. • Community education: It refers to educational activities conducted within a community, providing specific training and learning opportunities to meet the educational needs of community members. • Self-directed learning: Learning is the key to education.\nSelf-directed learning emphasizes the ability of students to explore and learn autonomously, acquiring knowledge and skills through self-motivation and self-management. In general, education involves various roles, including but not limited to:\n• Teachers: Teachers have a core role in education. They are responsible for organizing, imparting knowledge, and guiding student learning and development. " }, { "figure_ref": [], "heading": "B. Background of LLMs", "publication_ref": [ "b4", "b21", "b30", "b31", "b32", "b33", "b34", "b35", "b36", "b37", "b38", "b39", "b40", "b41", "b42", "b43", "b44", "b45", "b46", "b47", "b48" ], "table_ref": [], "text": "What is a large language model (LLM) [5], [22]? What are its characteristics? What is the relationship between large models and AI, data science, and other interdisciplinary fields? What are the key technologies employed in large models? A LLM possesses powerful language generation and understanding capabilities. Its objective is to train on massive amounts of language data to learn the statistical patterns and semantic relationships within the language, to generate coherent and accurate text, and to understand and respond to human queries [31]. Here are several characteristics of LLMs:\n1. Natural language generation: LLMs can generate highquality, coherent natural language text. They can understand the context and generate appropriate responses, articles, stories, and more based on input prompts or questions [32].\n2. Semantic understanding: LLMs can comprehend the semantic relationships within human language, including vocabulary, syntax, and context [33]. They can parse and understand complex sentence structures, extract key information, and generate relevant responses.\n3. Context awareness: LLMs can perform language understanding and generation based on context [34]. They can understand the history of a conversation and generate responses that are coherent and related to the context.\n4. Wide range of applications: LLMs have extensive applications in natural language processing, virtual assistants [35], intelligent customer service [36], and intelligent writing [37], among others. They can provide language generation and understanding support for various tasks and scenarios.\n5. Continuous learning: LLMs can continuously learn and update themselves by training on new data [38]. 
They can accumulate new language knowledge and patterns by learning from fresh data, improving their performance and capabilities.
Large models employ several key technologies. Here, we describe five of them in detail:
1. Transformer model: It serves as the foundational architecture for large models [39]. It utilizes self-attention mechanisms to handle the dependency relationships within input sequences [40]. It effectively captures long-range dependencies, enabling the model to better understand and generate text.
2. Pre-training and fine-tuning: Large models typically employ a two-stage approach of pre-training [41] and fine-tuning [42]. In the pre-training stage, the model undergoes self-supervised learning using a large-scale unlabeled corpus to learn the statistical patterns and semantic relationships of language. In the fine-tuning stage, the model is further trained and adjusted using labeled task-specific data to adapt to specific task requirements.
3. Large-scale datasets: Large models require massive language datasets for training [43]. These datasets often include text data from the internet, books, news articles, and more. The use of large-scale data provides abundant language inputs and enhances the model's generalization ability.
4. High computational resources [44]: Large models necessitate significant computational resources for training and inference. High-performance graphics processing units (GPUs) or specialized deep learning accelerators, such as TPUs, are commonly used to accelerate computations and achieve efficient model training and inference.
5. Iterative optimization algorithms: Large models are typically trained using iterative optimization algorithms such as stochastic gradient descent (SGD) [45] and adaptive optimization algorithms like Adam [46]. These algorithms update the model's parameters through backpropagation, minimizing the loss function and optimizing the model's performance.
In addition to the aforementioned key technologies, research on large models also involves aspects such as scaling up model size [47], data handling and selection [48], model compression and acceleration [49], and more. With advancing technology, the application of large models in natural language processing, intelligent dialogues, text generation, and other fields will become more extensive and mature." }, { "figure_ref": [], "heading": "C. Smart Education", "publication_ref": [ "b49", "b50", "b51", "b52", "b53" ], "table_ref": [], "text": "Smart education refers to an educational model that utilizes advanced information technology and the theories and methods of educational science to provide personalized, efficient, and innovative learning and teaching experiences. Its core idea is to leverage the advantages of information technology to offer intelligent and personalized learning environments and resources, thereby promoting students' comprehensive development and enhancing learning outcomes.
Smart education is closely related to artificial intelligence (AI) and LLMs [50]. AI is the scientific and engineering field that aims to simulate and mimic human intelligence, while LLMs are a type of deep learning model with the capability to handle large-scale data and complex tasks. Through the applications of AI and LLMs, smart education can achieve more accurate learning analysis and assessment, personalized learning support and guidance, automated learning resource recommendations, and innovative teaching methods.
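To make one of these capabilities, automated learning resource recommendation, more concrete, the following is a minimal sketch of such a recommender. It is only an illustration: it assumes the scikit-learn library, uses plain TF-IDF text similarity as a deliberately simple stand-in for an LLM-based learner model, and the resource titles and student query are invented examples rather than data from any real system.

```python
# Toy recommender: rank learning resources against a student's recent query.
# Assumes scikit-learn; all titles and the query below are made-up examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

resources = [
    "Introduction to linear equations with worked examples",
    "Photosynthesis explained for middle-school biology",
    "Practice problems on quadratic equations and factoring",
    "A beginner's guide to essay structure and thesis statements",
]
student_query = "I keep getting stuck on factoring quadratic equations"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(resources + [student_query])

# Similarity between the student's query (last row) and every resource.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
ranked = sorted(zip(scores, resources), reverse=True)

for score, title in ranked[:2]:
    print(f"{score:.2f}  {title}")
```

Even this baseline illustrates the core pattern of representing the learner and the resources in a shared space and ranking by similarity, which LLM-based recommenders refine with semantic representations of learning histories and materials.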
However, smart education currently faces several issues and challenges:\n• Shift in roles for teachers and students: Smart education involves transforming the roles of teachers and students from traditional transmitters and receivers of knowledge to collaborators and explorers [51]. This requires teachers to possess new teaching philosophies and skills to adapt to and guide students in the learning approaches and needs within a smart education environment. • Data privacy and security [52], [53]: Smart education involves the collection and analysis of large amounts of student data to provide personalized learning support and assessment. However, this raises concerns about student privacy and data security [54]. It is crucial to establish robust data management and protection mechanisms to ensure the safety and lawful use of student data. • Technological infrastructure and resources: Implementing smart education requires adequate technological infrastructure and resource support, including network connectivity, computing devices, educational software, etc. However, some regions and schools may face challenges regarding technological conditions and resource scarcity, limiting the widespread adoption and application of smart education. • Ethical and moral issues: The application of smart education raises ethical and moral questions, such as data privacy, algorithm bias, and fairness in artificial intelligence. It is necessary to establish guidelines and regulations to ensure that the application of smart education not only yields educational benefits but also adheres to ethical principles and social fairness. • Balancing personalization and social equity: Smart education aims for personalized learning support, but excessive reliance on personalization may widen the gaps between learners. It is essential to strike a balance between personalization and social equity, ensuring that the application of smart education does not exacerbate educational inequalities but instead provides equal learning opportunities for all learners. In conclusion, smart education refers to an educational model that utilizes advanced information technology and the theories and methods of educational science to provide personalized, efficient, and innovative learning and teaching experiences. It is closely related to AI and large models. However, unlike mere technological applications, smart education also involves a range of issues and challenges, including the transformation of teacher roles, data privacy and security, technological infrastructure and resources, ethical and moral concerns, balancing personalization and social equity, and innovation in educational content and assessment systems. Addressing these issues and promoting the sustainable development of education requires collaborative efforts from the education sector, technology industry, and society as a whole." }, { "figure_ref": [ "fig_0" ], "heading": "D. LLMs for Education", "publication_ref": [ "b54", "b55", "b56", "b57" ], "table_ref": [], "text": "Large models have close relationships with artificial intelligence, data science, and other interdisciplinary fields. Large models are an important research direction within the field of artificial intelligence. They use deep learning and large-scale data training methods to simulate human language capabilities and achieve natural language processing tasks. 
In the field of data science, large models can be applied to tasks such as text mining, sentiment analysis, machine translation, and extracting valuable information from text data. Furthermore, large models involve computer science, machine learning, cognitive science, and other interdisciplinary fields. Through the study of language and intelligence, they drive the crossfertilization and development between these disciplines.\nIn recent years, the emergence of LLMs, such as GPT-3, has sparked widespread attention and discussion. LLMs are AI technologies based on deep learning that possess powerful language generation and understanding capabilities. At the same time, the field of education faces many challenges and opportunities, such as personalized learning, educational resource inequality, and instructional effectiveness assessment. As a result, the education sector has begun to explore how to integrate LLMs with education to enhance teaching quality and effectiveness. Here are the significance and several ongoing practical areas, which can be depicted in Fig. 1 1. Personalized learning: Large models can provide personalized learning content and recommendations based on students' learning needs and interests. By analyzing students' learning data and behavioral patterns [55], [56], large models can design unique learning paths and resources for each student [57], helping them learn and grow more efficiently.\n2. Instructional support tools: LLMs can serve as assistants to teachers, providing intelligent instructional support tools and platforms [58]. Teachers can utilize the generated content and recommendations from LLMs to design teaching activities, monitor students' learning progress, and provide personalized teaching support.\n3. Educational assessment and feedback: LLMs can analyze students' assignments, exams, and other learning data to provide assessment and feedback on their learning progress. By automatically generating comments and suggestions, LLMs can help teachers gain a more accurate understanding of students' learning achievements and challenges, and provide corresponding guidance and support.\n4. Educational resource and content creation: LLMs can be used for the creation and generation of educational resources and content. They can generate teaching materials, exercises, case studies, and more based on instructional goals and needs, providing teachers with a rich array of resources and inspiration." }, { "figure_ref": [], "heading": "III. KEY TECHNOLOGIES FOR EDULLMS", "publication_ref": [ "b58", "b59", "b60", "b61", "b62", "b63", "b64" ], "table_ref": [], "text": "Educational LLMs involve several key technologies. Here are 10 key technologies related to educational large language models (EduLLMs), along with detailed descriptions for each:\n1. Natural language processing (NLP): NLP is one of the core technologies behind EduLLMs. It encompasses techniques such as text analysis, semantic understanding, and sentiment analysis, enabling the models to comprehend and process human language [59]. NLP enables EduLLMs to understand student queries, generate language responses, and extract important information from text.\n2. Deep learning (DL): DL is a branch of machine learning [60] that involves constructing and training deep neural network models for learning and inference [61]. 
EduLLMs often rely on deep learning architectures such as convolutional neural networks (CNNs) or recurrent neural networks (RNNs) to process and analyze educational data and generate meaningful outputs. Many DL techniques have been developed.\n3. Reinforcement learning (RL) [62]: RL trains an agent to make decisions through trial and error and reward mechanisms. In EduLLMs, reinforcement learning can be employed to optimize model responses and recommendations, allowing the models to adjust based on student feedback and outcomes to provide more accurate and effective learning support [63].\n4. Data mining (DM) [64], [65]: DM is the process of extracting useful information and patterns from large datasets. EduLLMs can utilize data mining techniques to discover student learning patterns, behavior trends, and knowledge gaps, providing the foundation for personalized learning and offering insights for educational research. " }, { "figure_ref": [], "heading": "Personalized learning experience", "publication_ref": [], "table_ref": [], "text": "Recommend related learning materials." }, { "figure_ref": [], "heading": "Content creation and generation", "publication_ref": [], "table_ref": [], "text": "Generate teaching outlines, practice questions, and lesson plans." }, { "figure_ref": [], "heading": "Language learning and teaching", "publication_ref": [], "table_ref": [], "text": "Provide grammar and vocabulary exercises and enhance their language communication abilities." }, { "figure_ref": [], "heading": "Cross-language communication and translation", "publication_ref": [], "table_ref": [], "text": "Provide real-time translation services." }, { "figure_ref": [], "heading": "Educational research and data analysis", "publication_ref": [], "table_ref": [], "text": "Offer employment prospects, career development paths, and advice on relevant skill development." }, { "figure_ref": [], "heading": "Virtual experiments and simulations", "publication_ref": [], "table_ref": [], "text": "Provide virtual experiment and simulation environments." }, { "figure_ref": [], "heading": "Career planning and guidance", "publication_ref": [], "table_ref": [], "text": "Offer employment prospects, career development paths, and advice." }, { "figure_ref": [], "heading": "Exam preparation and test-taking support", "publication_ref": [], "table_ref": [], "text": "Offer practice questions, explanations, and strategies." }, { "figure_ref": [], "heading": "Academic writing assistance", "publication_ref": [], "table_ref": [], "text": "Provide guidance on structuring essays, citing sources, refining arguments, and enhancing overall clarity and coherence." }, { "figure_ref": [], "heading": "Interactive learning experiences", "publication_ref": [], "table_ref": [], "text": "Create interactive and immersive learning experiences." }, { "figure_ref": [], "heading": "Lifelong learning and continuing education", "publication_ref": [], "table_ref": [], "text": "Enable them to acquire new skills, explore new fields, and pursue personal development." }, { "figure_ref": [], "heading": "Computer vision (CV):", "publication_ref": [ "b65", "b66", "b67", "b68", "b69" ], "table_ref": [], "text": "The powerful CV technologies enable computers to understand and interpret images and videos. In education, EduLLMs can employ computer vision techniques to analyze students' facial expressions, postures, and behaviors, providing more accurate emotion analysis and learning feedback [66].\n6. 
Speech recognition and synthesis: Speech recognition technology converts speech into text, while speech synthesis technology converts text into speech. EduLLMs can utilize these technologies to engage in speech interactions with students, offering support for oral practice, speech assessment, and pronunciation correction [67].\n7. Multimodal learning [68]: It involves the fusion of various sensors and data sources, such as text, images, audio, and video. EduLLMs can process and analyze multimodal data to gain a more comprehensive understanding of students' learning situations and needs [69].\n8. Personalized recommendation systems: They utilize ML and DM techniques to provide students with personalized learning resources and suggestions based on their interests, learning history, and learning styles [70]. EduLLMs can play a significant role in personalized recommendation systems, leveraging student data and behavior patterns to recommend suitable learning materials, courses, and activities.\nTherefore, the combination of these key technologies enables EduLLMs to offer personalized, adaptive, and targeted educational support. The applications foster innovation in education, improving learning outcomes and teaching quality. However, these applications of EduLLMs also face challenges, such as privacy protection, data bias, and algorithm transparency. These need to be appropriately addressed in technological development and practical implementation." }, { "figure_ref": [], "heading": "IV. LLM-EMPOWERED EDUCATION", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Applications of Education under LLMs", "publication_ref": [ "b70", "b71" ], "table_ref": [ "tab_1" ], "text": "Possible applications of LLMs for education can be found in various educational scenarios, providing personalized learning, teaching assistance, and educational research support. Here are 12 potential application scenarios of LLM4Edu, along with specific descriptions and examples, as shown in Table I: 1. Learning assistance tools: EduLLMs can serve as learning assistance tools, providing support to students in problem-solving, generating study materials, and organizing knowledge. For example, students can ask the model for solution methods to mathematical problems, and the model can generate detailed explanations and step-by-step processes to help students understand and master the concepts.\n2. Personalized learning experience: EduLLMs can offer personalized learning content and suggestions based on students' learning needs and interests. For instance, the model can recommend related reading materials, practice questions, and learning resources based on students' learning histories and interests, catering to their individualized requirements.\n3. Content creation and generation: EduLLMs can assist educators and content creators in generating educational materials and resources. For example, the models can automatically generate teaching outlines, practice questions, and lesson plans, providing educators with diverse and enriched teaching resources.\n4. Language learning and teaching: LLM-empowered education has potential applications in language learning and teaching. For instance, the models can provide grammar and vocabulary exercises to help students improve their language skills. The models can also generate dialogue scenarios for students to practice real-life conversations, enhancing their language communication abilities.\n5. 
Cross-language communication and translation: LLMs can assist in cross-language communication and translation in smart education. For instance, the large models can provide real-time translation services, helping students and educators overcome language barriers and facilitating cross-cultural communication and collaboration.\n6. Educational research and data analysis: EduLLMs can analyze extensive educational data (aka educational data mining) [71] and provide deep insights and research support. For example, the models can assist researchers in analyzing student's learning behaviors and performances, discovering effective teaching methods and strategies, and providing evidence for educational policy-making.\n7. Virtual experiments and simulations: EduLLMs can provide virtual experiment and simulation environments, allowing students to engage in practical experiences. For example, the models can offer virtual chemistry laboratories, enabling students to conduct chemical experiments in safe and controlled environments, honing their practical skills and scientific thinking.\n8. Career planning and guidance: EduLLMs provide career planning and guidance to students. For instance, the models can offer employment prospects, career development paths, and advice on relevant skill development based on student's interests, skills, and market demands, assisting students in making informed career planning decisions.\n9. Exam preparation and test-taking support: EduLLMs can assist students in preparing for exams and improve their test-taking skills. They can offer practice questions, explanations, and strategies for different types of exams, helping students familiarize themselves with the format, content, and techniques required for successful performance.\n10. Academic writing assistance: LLMs can aid students in improving their academic writing skills. They can provide guidance on structuring essays, citing sources, refining arguments, and enhancing overall clarity and coherence. These models can also assist students in developing critical thinking and analytical skills necessary for academic success.\n11. Interactive learning experiences: EduLLMs will create interactive and immersive learning experiences. For example, they can simulate historical events, scientific experiments, or virtual field trips, allowing students to engage actively and learn through realistic scenarios. These interactive experiences can enhance student engagement and deepen their understanding of complex concepts.\n12. Lifelong learning and continuing education: Educational LLMs can support lifelong learning [72] and continuing education initiatives. They can provide resources, courses, and learning opportunities for individuals outside traditional educational settings, enabling them to acquire new skills, explore new fields, and pursue personal or professional development at any stage of life.\nThe versatility of educational LLMs allows for their application across a wide range of educational contexts, from K-12 classrooms to higher education institutions, vocational training, and beyond. By leveraging the capabilities of these models, educational stakeholders can enhance the quality, accessibility, and effectiveness of teaching and learning experiences. 
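As one hedged illustration of how scenarios such as exam preparation and test-taking support can be wired up, the sketch below asks a general-purpose LLM to produce practice questions on a topic. It is not a prescribed integration: it assumes the openai Python client (the v1.x chat-completions interface), the model name, prompts, and topic are placeholders, and a deployed system would also add the privacy and review safeguards discussed elsewhere in this paper.

```python
# Hypothetical exam-preparation helper built on a chat-completion API.
# Assumes the `openai` package (v1.x) and an API key in the environment;
# the model name and prompts are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_practice_questions(topic: str, n: int = 3) -> str:
    """Ask the model for n exam-style questions with brief worked answers."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model could be used
        messages=[
            {"role": "system",
             "content": "You are a patient tutor who writes exam-style "
                        "practice questions followed by concise explanations."},
            {"role": "user",
             "content": f"Write {n} practice questions on {topic}."},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_practice_questions("photosynthesis", n=2))
```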
In summary, the applications of EduLLMs encompass learning assistance tools, personalized learning experiences, content creation and generation, language learning and teaching, student assignment evaluation, cross-language communication and translation, educational research and data analysis, virtual experiments and simulations, learning content recommendations, and career planning and guidance. These scenarios demonstrate the potential of EduLLMs to provide personalized, efficient, and innovative educational services.\nHowever, it is crucial to balance technological advancements with ethical considerations in the application of EduLLMs, ensuring that their usage aligns with educational goals and values while prioritizing individual privacy and data security." }, { "figure_ref": [ "fig_1" ], "heading": "B. Characteristics of Education under LLMs", "publication_ref": [], "table_ref": [], "text": "Education under large language models (LLMs) exhibits several distinct characteristics, as shown in Fig. 2 1. Personalized learning: LLMs have the ability to process and analyze vast amounts of data, allowing for personalized learning experiences. They can adapt instructional content, pacing, and assessments to match the unique needs and preferences of individual learners. This personalization enhances the effectiveness and engagement of the learning process." }, { "figure_ref": [], "heading": "Characteristics of Education under LLMs", "publication_ref": [ "b72", "b73" ], "table_ref": [], "text": "2. Adaptive feedback: LLMs can provide immediate and adaptive feedback to learners. They can identify areas of weakness or misconceptions and offer tailored explanations and guidance. This real-time feedback helps learners to understand concepts more effectively and make progress at their own pace.\n3. Access to diverse resources: For smart education, LLMs have access to a vast amount of information and knowledge. They can provide learners with a wide range of resources, including texts, images, videos, and interactive materials. This access to diverse resources enhances the depth and breadth of learning, enabling learners to explore various perspectives and engage with rich content.\n4. Natural language interaction: LLMs are proficient in understanding and generating human language. Learners can engage in natural language conversations with LLMs, asking questions, seeking clarifications, and discussing ideas. This natural language interaction promotes a more conversational and interactive learning experience.\n5. Continuous learning support: LLMs can provide continuous learning support beyond traditional classroom hours. Learners can access educational materials, review lessons, and seek assistance from LLMs at any time. Note that this flexibility in learning support accommodates different schedules and learning preferences.\n6. Content generation and creation: LLMs can assist in generating educational content. They can automate the creation of quizzes, exercises, and learning materials based on specific learning objectives. This content generation capability reduces the burden on educators and allows for the creation of diverse and customized learning resources.\n7. Multilingual capabilities: LLMs are capable of processing and generating content in multiple languages [73]. This enables learners from different linguistic backgrounds to access educational materials in their native languages, promoting inclusivity and accessibility.\n8. 
Analyzing learning data: Educational LLMs can analyze learning data and provide insights into learners' progress, strengths, and areas for improvement. Educators can utilize these analytics to gain a deeper understanding of learners' learning patterns, adjust instructional strategies, and provide targeted interventions.\n9. Ethical considerations: Education under LLMs raises ethical considerations. It is essential to ensure transparency, accountability, and privacy in the use of learner data. Clear guidelines and safeguards should be in place to protect learners' privacy and prevent potential biases or misuse of data.\n10. Collaboration between humans and LLMs: LLMs are tools that can enhance and augment human teaching and learning [74]. They are not meant to replace human educators but rather to collaborate with them. Educators can leverage LLMs to provide personalized support, curate content, and facilitate meaningful learning experiences." }, { "figure_ref": [], "heading": "V. KEY POINTS IN LLMSEDU", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Training Data and Preprocessing", "publication_ref": [], "table_ref": [], "text": "Preprocessing steps applied to the data before training may include tokenization, normalization, and other data cleaning techniques. Tokenization involves breaking the text into smaller units, such as words or subwords, to facilitate processing. Normalization may include converting text to lowercase to ensure uniformity and remove case-specific variations. Other cleaning techniques may involve removing irrelevant HTML tags, special characters, or noisy data to enhance the quality of the training data. For educational purposes, when training models to understand and generate text in an educational context, it is crucial to curate datasets that include diverse educational content. This can range from textbooks and scholarly articles to educational websites and forums. These preprocessing steps should be tailored to preserve the educational context, ensuring that the model learns to generate coherent and contextually relevant educational content." }, { "figure_ref": [], "heading": "B. Training Process", "publication_ref": [], "table_ref": [], "text": "Pre-training and fine-tuning play a key role in the construction of educational LLMs. First, in the pre-training stage, the model is initialized through large general text data to achieve the learning of general language features such as syntax, semantics, and logical relationships. This provides the model with broad language understanding capabilities, allowing it to understand and process a variety of language tasks. Next, in the fine-tuning phase, fine-tuning is performed by collecting domain-specific data according to specific task requirements in the education field. This ensures that the model can better adapt to the tasks and show superior performance in the education field. During the fine-tuning process, pre-trained model weights are used for initialization, which provides a strong foundation for the model to learn specific tasks. Adjust model parameters through supervised learning to adapt them to the specific requirements of the task, and ensure that the model reaches a satisfactory level on educational tasks through performance evaluation. Hyperparameter tuning further optimizes model performance, such as by adjusting the learning rate and batch size. 
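A minimal sketch of this fine-tuning stage, adapting a pre-trained language model to domain-specific educational text, is given below. It assumes the Hugging Face transformers and datasets libraries; the base checkpoint, corpus file, and hyperparameter values are illustrative placeholders rather than the configuration of any particular EduLLM.

```python
# Sketch: fine-tune a pre-trained causal language model on an educational corpus.
# Assumes Hugging Face `transformers` and `datasets`; the checkpoint, file path,
# and hyperparameters below are placeholders for illustration only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

checkpoint = "gpt2"  # assumption: any causal LM checkpoint can stand in here
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(checkpoint)  # pre-trained weights

# Domain-specific educational text collected for the target task (placeholder path).
raw = load_dataset("text", data_files={"train": "edu_corpus.txt"})

def tokenize(batch):
    # Tokenization and truncation, as described in the preprocessing step above.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="edu-llm-finetuned",
    num_train_epochs=3,              # hyperparameters tuned per task
    per_device_train_batch_size=4,
    learning_rate=5e-5,
)
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"], data_collator=collator)
trainer.train()
trainer.save_model("edu-llm-finetuned")
```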
Ultimately, by saving the fine-tuned model, it becomes a powerful tool that can be deployed and applied to specific educational tasks. Therefore, the entire training process enables the model to achieve excellent results in a wide range of language understanding and specific educational tasks, providing a powerful language processing tool for smart education." }, { "figure_ref": [], "heading": "C. Integration with Educational Technologies", "publication_ref": [], "table_ref": [], "text": "Finally, they can be seamlessly integrated into various practical applications within educational technology to enhance the overall learning experience. LLMs can power chatbots, providing personalized support by addressing queries related to course content, assignments, or general information, with the added advantage of 24/7 availability. LLMs can be incorporated into intelligent tutoring systems, delivering personalized learning experiences by offering customized guidance and recommendations to students. They can also automate the generation of educational content, including quizzes, tests, and study materials, thereby saving educators valuable time. Moreover, LLMs have applications on language learning platforms, facilitating conversational practice through realistic dialogue simulations and offering real-time feedback on grammar usage. These technologies can extend to virtual labs and simulations, enhancing students' practical learning experiences through natural language interactions. Overall, the application of LLMs in educational technology necessitates considerations for ethical issues, data privacy, and potential biases in the models. Continuous user feedback and improvement are crucial for optimizing learning outcomes." }, { "figure_ref": [], "heading": "VI. CHALLENGES AND FUTURE DIRECTIONS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Challenges and Issues", "publication_ref": [ "b52", "b74", "b75", "b76", "b77", "b78", "b79", "b80" ], "table_ref": [], "text": "The application of LLMs for education brings forth numerous potential challenges and issues. Here are 10 possible challenges related to LLM4Edu, along with detailed descriptions:\n1. Privacy protection [53], [75]: In general, EduLLMs deal with a vast amount of student data, including personal information, learning records, and behavioral data. This raises concerns regarding privacy protection. Ensuring the security and privacy of student data becomes a significant challenge, necessitating rigorous data security measures and privacy policies to safeguard student rights.\n2. Data bias: The data used during the training process of EduLLMs may contain biases, which can result in biased outputs from the models [76]. For instance, if there are biases in the training data concerning gender or race, the models may reflect these biases and have unfair effects on students. Eliminating data bias is an important challenge to ensure the fairness and reliability of the models.\n3. Algorithm transparency: EduLLMs often consist of complex neural network models, and their decision-making processes can be difficult to interpret and understand. Algorithm transparency refers to the extent to which the model's decision-making process can be explained and understood [77]. In education, students and teachers need to understand how the models make recommendations and evaluations to trust and utilize them.\n4. 
Technical feasibility: Educational LLMs typically require substantial computational resources and storage space for training and inference. In certain educational environments, especially in resource-constrained schools or regions, these requirements may not be met. Hence, ensuring the technical feasibility of EduLLMs to operate reliably in various educational settings is a critical challenge.\n5. Human interaction and emotion: Education involves rich human interactions and emotional experiences. EduLLMs still face challenges in simulating human teacher-student interactions. For example, in terms of emotion analysis, models may struggle to accurately understand students' emotional states and provide appropriate support [78]. Addressing these challenges, especially in the Metaverse [79], [80], requires further research and technological innovation.\n6. Accessibility: The application of EduLLMs should have broad accessibility to meet the needs of diverse learners. This includes support for students with disabilities, such as assistive features for visually and hearing-impaired students. Ensuring that accessibility needs are considered in the design and implementation of EduLLMs is a significant challenge.\n7. Credibility and quality assessment: Ensuring the credibility and quality assessment of EduLLMs is crucial. Students and teachers need to have confidence that the recommendations and feedback provided by the models are accurate and reliable [81]. Therefore, conducting credibility and quality assessments of EduLLMs is an important challenge. This involves establishing evaluation criteria and metrics to validate the model's performance and effectiveness while ensuring its reliability in educational practice.\n8. Teacher roles and professional development: The use of EduLLMs may impact teacher roles and professional development. Firstly, EduLLMs can provide instructional assistance and personalized learning support, alleviating teachers' workload. Secondly, teachers need to adapt to and master the technologies and tools related to EduLLMs to collaborate and work effectively with them. This presents new requirements and challenges for teacher professional development." }, { "figure_ref": [], "heading": "B. Future Directions", "publication_ref": [], "table_ref": [], "text": "Here are some possible research directions for EduLLMs in the future, along with detailed descriptions:\n1. Model interpretability: Educational LLMs often consist of complex neural network structures, and their decisionmaking processes can be difficult to interpret and understand. To establish the credibility and acceptability of EduLLMs, further research is challenging on how to explain the model's decision-making process, enabling teachers, students, and other stakeholders to comprehend and trust the model's recommendations and evaluations.\n2. Personalized learning support: One major application of EduLLMs is to provide personalized learning support. Future research can explore how to better utilize models to understand students' learning needs, interests, and learning styles, in order to offer more accurate and personalized learning suggestions and resources.\n3. Emotional intelligence: Education involves emotional factors such as students' emotional states and experiences. Future research can focus on integrating emotional intelligence into EduLLMs, enabling the models to accurately recognize and understand students' emotional states and provide appropriate emotional support and guidance when needed.\n4. 
Evaluation and assessment: Evaluating the effectiveness and impact of EduLLMs is important. Future research can focus on establishing effective evaluation methods and metrics to assess the influence of EduLLMs on students' learning outcomes, learning processes, and learning experiences.\n5. Social equity: The application of EduLLMs in providing personalized learning may raise issues of social equity. Future research can explore how to address these issues through the design and implementation of models, ensuring that their applications do not exacerbate educational inequalities but instead promote a fair and inclusive learning environment.\n6. Educational ethics: The application of EduLLMs raises ethical issues such as privacy protection, data usage, and the model's moral responsibility. Future research can focus on establishing appropriate ethical guidelines and frameworks to guide the development, use, and evaluation of EduLLMs.\n7. Cross-cultural adaptability: The research and application of EduLLMs need to consider the needs and differences of learners from different cultures and backgrounds. Future research can focus on making EduLLMs cross-culturally adaptable to better meet the needs of learners worldwide.\n8. Long-term learning and development: Research on EduLLMs should not only focus on short-term effects during the learning process but also consider students' long-term learning and development. Future research can explore how EduLLMs can support students' long-term learning goals, facilitate continuous growth, and promote lifelong learning." }, { "figure_ref": [], "heading": "VII. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "The application of LLMs in the field of education has broad prospects. This review provides a systematic summary and analysis of the research background, motivation, and application of educational large models. It first introduces the research background and motivation of LLMs and explains the essence of large models. It then discusses the relationship between intelligent education and educational LLMs, and summarizes the current research status of educational LLMs. Finally, by reviewing existing research, this article provides guidance and insights for educators, researchers, and policy-makers to gain a deep understanding of the potential opportunities and challenges of educational LLMs, and provides guidance for further advancing the development and application of educational LLMs. However, the development and applications of educational LLMs still face technical, ethical, and practical challenges, requiring further research and exploration.\nWith the advancement of technology and the evolution of educational needs, educational large models will play an increasingly important role in providing more efficient and personalized support and services for education. We believe that AI-driven education is one of the most innovative and forward-looking directions in the field of education today. It can be foreseen that in the future, with the continuous development and improvement of artificial intelligence, the future of smart education will be more digitalized and humanized, as well as more diverse and personalized." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "ACKNOWLEDGMENT This research was supported in part by the National Natural Science Foundation of China (Nos. 62002136 and 62272196), Natural Science Foundation of Guangdong Province (No. 
2022A1515011861), Fundamental Research Funds for the Central Universities of Jinan University (No. 21622416), the Young Scholar Program of Pazhou Lab (No. PZL2021KF0023), Engineering Research Center of Trustworthy AI, Ministry of Education (Jinan University), and Guangdong Key Laboratory of Data Security and Privacy Preserving. Dr. Wensheng Gan is the corresponding author of this paper." } ]
With the rapid development of artificial intelligence technology, large language models (LLMs) have become a hot research topic. Education plays an important role in human social development and progress. Traditional education faces challenges such as individual differences among students, insufficient allocation of teaching resources, and difficulty in assessing teaching effectiveness. Therefore, the applications of LLMs in the field of digital/smart education have broad prospects. The research on educational large models (EduLLMs) is constantly evolving, providing new methods and approaches to achieve personalized learning, intelligent tutoring, and educational assessment goals, thereby improving the quality of education and the learning experience. This article aims to investigate and summarize the applications of LLMs in smart education. It first introduces the research background and motivation of LLMs and explains the essence of LLMs. It then discusses the relationship between digital education and EduLLMs and summarizes the current research status of educational large models. The main contributions are a systematic summary of, and vision for, the research background, motivation, and applications of large models for education (LLM4Edu). By reviewing existing research, this article provides guidance and insights for educators, researchers, and policy-makers to gain a deep understanding of the potential and challenges of LLM4Edu. It also offers guidance for further advancing the development and application of LLM4Edu, which still faces technical, ethical, and practical challenges that require further research and exploration.
Large Language Models in Education: Vision and Opportunities
[ { "figure_caption": "Fig. 1 :1Fig. 1: Architecture of LLMs for education (LLM4Edu).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: The characteristics of education under LLMs.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Students are the recipients of education. They acquire knowledge and skills through learning and practice, aiming for personal development and growth.", "figure_data": "• Parents: As an important supportive and guardianshiprole in education, they are concerned with their children'slearning and development, providing necessary resourcesand environments.• Educational institutions: Schools, universities, trainingorganizations, and other educational institutions provideeducational resources and environments, organizing andmanaging educational activities.• Government and society: They play roles in educationpolicy-making, resource allocation, and social support,providing necessary support and safeguards for education.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Several applications of LLM4Edu", "figure_data": "FunctionDescriptionLearning assistance toolsProvide support in problem-solving, generating study materials, andorganizing knowledge.", "figure_id": "tab_1", "figure_label": "I", "figure_type": "table" }, { "figure_caption": ":", "figure_data": "Access to diverse resourcesAnalyzing learning dataMultilingual capabilitiesNatural language interactionEthical considerationsPersonalized learningAdaptive feedbackContinuous learning support", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" } ]
Wensheng Gan; Zhenlian Qi; Jiayang Wu; Chun-Wei Lin
[ { "authors": "J Sun; W Gan; Z Chen; J Li; P S Yu", "journal": "", "ref_id": "b0", "title": "Big data meets Metaverse: A survey", "year": "2022" }, { "authors": "J Sun; W Gan; H Chao; P S Yu; W Ding", "journal": "IEEE Internet of Things Journal", "ref_id": "b1", "title": "Internet of behaviors: A survey", "year": "2023" }, { "authors": "S Wan; H Lin; W Gan; J Chen; P S Yu", "journal": "", "ref_id": "b2", "title": "Web3: The next internet revolution", "year": "2023" }, { "authors": "W Gan; Z Ye; S Wan; P S Yu", "journal": "ACM", "ref_id": "b3", "title": "Web 3.0: The future of internet", "year": "2023" }, { "authors": "W X Zhao; K Zhou; J Li; T Tang; X Wang; Y Hou; Y Min; B Zhang; J Zhang; Z Dong", "journal": "", "ref_id": "b4", "title": "A survey of large language models", "year": "2023" }, { "authors": "W Gan; Z Qi; J Wu; J C W Lin", "journal": "IEEE", "ref_id": "b5", "title": "Large language models in education: Vision and opportunities", "year": "2023" }, { "authors": "W Gan; S Wan; P S Yu", "journal": "IEEE", "ref_id": "b6", "title": "Model-as-a-service (MaaS): A survey", "year": "2023" }, { "authors": "F Zeng; W Gan; Y Wang; N Liu; P S Yu", "journal": "", "ref_id": "b7", "title": "Large language models for robotics: A survey", "year": "2023" }, { "authors": "J Wu; W Gan; Z Chen; S Wan; H Lin", "journal": "", "ref_id": "b8", "title": "AI-generated content (AIGC): A survey", "year": "2023" }, { "authors": "Y Xiao; L Wu; J Guo; J Li; M Zhang; T Qin; T Liu", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b9", "title": "A survey on non-autoregressive generation for neural machine translation and beyond", "year": "2023" }, { "authors": "C Ziems; J Yu; Y.-C Wang; A Halevy; D Yang", "journal": "", "ref_id": "b10", "title": "The moral integrity corpus: A benchmark for ethical dialogue systems", "year": "2022" }, { "authors": "F M Aldhafeeri; A A Alotaibi", "journal": "Education and Information Technologies", "ref_id": "b11", "title": "Effectiveness of digital education shifting model on high school students' engagement", "year": "2022" }, { "authors": "H Lin; S Wan; W Gan; J Chen; H Chao", "journal": "IEEE", "ref_id": "b12", "title": "Metaverse in education: Vision, opportunities, and challenges", "year": "2022" }, { "authors": "D S Mcnamara; T Arner; R Butterfuss; Y Fang; M Watanabe; N Newton; K S Mccarthy; L K Allen; R D Roscoe", "journal": "International Journal of Human-Computer Interaction", "ref_id": "b13", "title": "iSTART: Adaptive comprehension strategy training and stealth literacy assessment", "year": "2023" }, { "authors": "H Kristianto; L Gandajaya", "journal": "Interactive Technology and Smart Education", "ref_id": "b14", "title": "Offline vs online problem-based learning: A case study of student engagement and learning outcomes", "year": "2023" }, { "authors": "P S Smith; P J Trygstad; E R Banilower", "journal": "Education Policy Analysis Archives", "ref_id": "b15", "title": "Widening the gap: Unequal distribution of resources for K-12 science instruction", "year": "2016" }, { "authors": "P J Piety; D T Hickey; M Bishop", "journal": "", "ref_id": "b16", "title": "Educational data sciences: Framing emergent practices for analytics of learning, organizations, and systems", "year": "2014" }, { "authors": "J D Vermunt; V Donche", "journal": "Educational Psychology Review", "ref_id": "b17", "title": "A learning patterns perspective on student learning in higher education: state of the art and moving forward", "year": "2017" }, { "authors": "A 
A Aziz; K M Yusof; J M Yatim", "journal": "Procedia-Social and Behavioral Sciences", "ref_id": "b18", "title": "Evaluation on the effectiveness of learning outcomes from students' perspectives", "year": "2012" }, { "authors": "C Fang; Q Lu", "journal": "Complexity", "ref_id": "b19", "title": "Personalized recommendation model of highquality education resources for college students based on data mining", "year": "2021" }, { "authors": "P Bhargava; V Ng", "journal": "", "ref_id": "b20", "title": "Commonsense knowledge reasoning and generation with pre-trained language models: A survey", "year": "2022" }, { "authors": "E Kasneci; K Seßler; S Küchemann; M Bannert; D Dementieva; F Fischer; U Gasser; G Groh; S Günnemann; E Hüllermeier", "journal": "Learning and Individual Differences", "ref_id": "b21", "title": "ChatGPT for good? on opportunities and challenges of large language models for education", "year": "2023" }, { "authors": "N S Raj; V Renumol", "journal": "Journal of Computers in Education", "ref_id": "b22", "title": "A systematic literature review on adaptive content recommenders in personalized learning environments from 2015 to 2020", "year": "2022" }, { "authors": "Z Wang; W Yan; C Zeng; Y Tian; S Dong", "journal": "International Journal of Intelligent Systems", "ref_id": "b23", "title": "A unified interpretable intelligent learning diagnosis framework for learning performance prediction in intelligent tutoring systems", "year": "2023" }, { "authors": "J Rudolph; S Tan; S Tan", "journal": "Journal of Applied Learning and Teaching", "ref_id": "b24", "title": "ChatGPT: Bullshit spewer or the end of traditional assessments in higher education?", "year": "2023" }, { "authors": "R Marshall; A Pardo; D Smith; T Watson", "journal": "British Journal of Educational Technology", "ref_id": "b25", "title": "Implementing next generation privacy and ethics research in education technology", "year": "2022" }, { "authors": "R F Kizilcec; H Lee", "journal": "", "ref_id": "b26", "title": "Algorithmic fairness in education", "year": "2022" }, { "authors": "H Lee", "journal": "Anatomical Sciences Education", "ref_id": "b27", "title": "The rise of ChatGPT: Exploring its potential in medical education", "year": "2023" }, { "authors": "L J Zachary; L Z Fain", "journal": "John Wiley & Sons", "ref_id": "b28", "title": "The mentor's guide: Facilitating effective learning relationships", "year": "2022" }, { "authors": "V Shunkov; O Shevtsova; V Koval; T Grygorenko; L Yefymenko; Y Smolianko; O Kuchai", "journal": "", "ref_id": "b29", "title": "Prospective directions of using multimedia technologies in the training of future specialists", "year": "2022" }, { "authors": "R Tang; Y.-N Chuang; X Hu", "journal": "", "ref_id": "b30", "title": "The science of detecting LLMgenerated texts", "year": "2023" }, { "authors": "D Baidoo-Anu; L O Ansah", "journal": "Journal of AI", "ref_id": "b31", "title": "Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of Chat-GPT in promoting teaching and learning", "year": "2023" }, { "authors": "L Weissweiler; V Hofmann; A Köksal; H Schütze", "journal": "", "ref_id": "b32", "title": "The better your syntax, the better your semantics? 
probing pretrained language models for the english comparative correlative", "year": "2022" }, { "authors": "Y Meng; J Huang; Y Zhang; J Han", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b33", "title": "Generating training data with language models: Towards zero-shot language understanding", "year": "2022" }, { "authors": "S Agarwal; B Agarwal; R Gupta", "journal": "Library Hi Tech", "ref_id": "b34", "title": "Chatbots and virtual assistants: a bibliometric analysis", "year": "2022" }, { "authors": "J Gao; L Ren; Y Yang; D Zhang; L Li", "journal": "International Journal of Emerging Markets", "ref_id": "b35", "title": "The impact of artificial intelligence technology stimuli on smart customer experience and the moderating effect of technology readiness", "year": "2022" }, { "authors": "M Salvagno; F S Taccone; A G Gerli", "journal": "Critical Care", "ref_id": "b36", "title": "Can artificial intelligence help for scientific writing?", "year": "2023" }, { "authors": "U Ertugrul", "journal": "Journal of Teacher Education and Lifelong Learning", "ref_id": "b37", "title": "Lifelong learning motivation scale (LLMs): Validity and reliability study", "year": "2023" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b38", "title": "Attention is all you need", "year": "2017" }, { "authors": "P Shaw; J Uszkoreit; A Vaswani", "journal": "", "ref_id": "b39", "title": "Self-attention with relative position representations", "year": "2018" }, { "authors": "B Zoph; G Ghiasi; T.-Y Lin; Y Cui; H Liu; E D Cubuk; Q Le", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b40", "title": "Rethinking pre-training and self-training", "year": "2020" }, { "authors": "J Howard; S Ruder", "journal": "", "ref_id": "b41", "title": "Universal language model fine-tuning for text classification", "year": "2018" }, { "authors": "N Kandpal; H Deng; A Roberts; E Wallace; C Raffel", "journal": "PMLR", "ref_id": "b42", "title": "Large language models struggle to learn long-tail knowledge", "year": "2023" }, { "authors": "F Zeng; W Gan; Y Wang; P S Yu", "journal": "IEEE", "ref_id": "b43", "title": "Distributed training of large language models", "year": "2023" }, { "authors": "B Jin; Ž Kereta", "journal": "SIAM Journal on Imaging Sciences", "ref_id": "b44", "title": "On the convergence of stochastic gradient descent for linear inverse problems in banach spaces", "year": "2023" }, { "authors": "M Reyad; A M Sarhan; M Arafa", "journal": "", "ref_id": "b45", "title": "A modified adam algorithm for deep neural network optimization", "year": "2023" }, { "authors": "M Kang; J.-Y Zhu; R Zhang; J Park; E Shechtman; S Paris; T Park", "journal": "", "ref_id": "b46", "title": "Scaling up gans for text-to-image synthesis", "year": "2023" }, { "authors": "P Zhu; X Hou; K Tang; Y Liu; Y.-P Zhao; Z Wang", "journal": "Information Sciences", "ref_id": "b47", "title": "Unsupervised feature selection through combining graph learning and L2, 0-norm constraint", "year": "2023" }, { "authors": "C Xu; J Mcauley", "journal": "", "ref_id": "b48", "title": "A survey on model compression and acceleration for pretrained language models", "year": "2023" }, { "authors": "R Bajaj; V Sharma", "journal": "Procedia Computer Science", "ref_id": "b49", "title": "Smart education with artificial intelligence based determination of learning styles", "year": "2018" }, { "authors": "T 
Hampel; R Keil-Slawik", "journal": "Journal on Educational Resources in Computing", "ref_id": "b50", "title": "steam: structuring information in teamdistributed knowledge management in cooperative learning environments", "year": "2001" }, { "authors": "Z Chen; J Wu; W Gan; Z Qi", "journal": "IEEE", "ref_id": "b51", "title": "Metaverse security and privacy: An overview", "year": "2022" }, { "authors": "Y Chen; W Gan; Y Wu; P S Yu", "journal": "Information Sciences", "ref_id": "b52", "title": "Privacy-preserving federated mining of frequent itemsets", "year": "2023" }, { "authors": "M May; S George", "journal": "IEEE", "ref_id": "b53", "title": "Using students' tracking data in e-learning: Are we always aware of security and privacy concerns?", "year": "2011" }, { "authors": "P Fournier-Viger; W Gan; Y Wu; M Nouioua; W Song; T Truong; H Duong", "journal": "Springer", "ref_id": "b54", "title": "Pattern mining: Current challenges and opportunities", "year": "2022" }, { "authors": "W Gan; J C W Lin; P Fournier-Viger; H C Chao; P S Yu", "journal": "ACM Transactions on Knowledge Discovery from Data", "ref_id": "b55", "title": "A survey of parallel sequential pattern mining", "year": "2019" }, { "authors": "C Herodotou; B Rienties; A Boroowa; Z Zdrahal; M Hlosta", "journal": "Educational Technology Research and Development", "ref_id": "b56", "title": "A large-scale implementation of predictive learning analytics in higher education: The teachers' role and perspective", "year": "2019" }, { "authors": "F Filgueiras", "journal": "", "ref_id": "b57", "title": "Artificial intelligence and education governance", "year": "2023" }, { "authors": "T A Al-Qablan; M H Mohd Noor; M A Al-Betar; A T Khader", "journal": "", "ref_id": "b58", "title": "A survey on sentiment analysis and its applications", "year": "2023" }, { "authors": "M I Jordan; T M Mitchell", "journal": "Science", "ref_id": "b59", "title": "Machine learning: Trends, perspectives, and prospects", "year": "2015" }, { "authors": "Y Lecun; Y Bengio; G Hinton", "journal": "Nature", "ref_id": "b60", "title": "Deep learning", "year": "2015" }, { "authors": "L P Kaelbling; M L Littman; A W Moore", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b61", "title": "Reinforcement learning: A survey", "year": "1996" }, { "authors": "T Carta; C Romac; T Wolf; S Lamprier; O Sigaud; P.-Y Oudeyer", "journal": "", "ref_id": "b62", "title": "Grounding large language models in interactive environments with online reinforcement learning", "year": "2023" }, { "authors": "W Gan; J C -W. Lin; P Fournier-Viger; H Chao; P S Yu", "journal": "IEEE Transactions on Cybernetics", "ref_id": "b63", "title": "HUOPM: High-utility occupancy pattern mining", "year": "2020" }, { "authors": "W Gan; J C -W. 
Lin; P Fournier-Viger; H Chao; V S Tseng; P S Yu", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b64", "title": "A survey of utility-oriented pattern mining", "year": "2021" }, { "authors": "C Thomas; D B Jayagopi", "journal": "", "ref_id": "b65", "title": "Predicting student engagement in classrooms using facial behavioral cues", "year": "2017" }, { "authors": "A B Wong; Z Huang; K Wu", "journal": "Speech Communication", "ref_id": "b66", "title": "Leveraging audible and inaudible signals for pronunciation training by sensing articulation through a smartphone", "year": "2022" }, { "authors": "J Wu; W Gan; Z Chen; S Wan; P S Yu", "journal": "IEEE", "ref_id": "b67", "title": "Multimodal large language models: A survey", "year": "2023" }, { "authors": "R Martinez-Maldonado; V Echeverria; G Fernandez Nieto; S Buckingham Shum", "journal": "", "ref_id": "b68", "title": "From data to insights: A layered storytelling approach for multimodal learning analytics", "year": "2020" }, { "authors": "L Li; Y Zhang; L Chen", "journal": "", "ref_id": "b69", "title": "Prompt distillation for efficient LLMbased recommendation", "year": "2023" }, { "authors": "A Peña-Ayala", "journal": "Expert Systems with Applications", "ref_id": "b70", "title": "Educational data mining: A survey and a data miningbased analysis of recent works", "year": "2014" }, { "authors": "B Li; R Pang; Y Zhang; T N Sainath; T Strohman; P Haghani; Y Zhu; B Farris; N Gaur; M Prasad", "journal": "IEEE", "ref_id": "b71", "title": "Massively multilingual asr: A lifelong learning solution", "year": "2022" }, { "authors": "H Huang; T Tang; D Zhang; W X Zhao; T Song; Y Xia; F Wei", "journal": "", "ref_id": "b72", "title": "Not all languages are created equal in LLMs: Improving multilingual capability by cross-lingual-thought prompting", "year": "2023" }, { "authors": "M Bernabei; S Colabianchi; A Falegnami; F Costantino", "journal": "Computers and Education: Artificial Intelligence", "ref_id": "b73", "title": "Students' use of large language models in engineering education: A case study on technology acceptance, perceptions, efficacy, and detection chances", "year": "2023" }, { "authors": "W Gan; C.-W J Lin; H C Chao; S L Wang; P S Yu", "journal": "IEEE", "ref_id": "b74", "title": "Privacy preserving utility mining: a survey", "year": "2018" }, { "authors": "P Schramowski; C Turan; N Andersen; C A Rothkopf; K Kersting", "journal": "Nature Machine Intelligence", "ref_id": "b75", "title": "Large pre-trained language models contain human-like biases of what is right and wrong to do", "year": "2022" }, { "authors": "E Rader; K Cotter; J Cho", "journal": "", "ref_id": "b76", "title": "Explanations as mechanisms for supporting algorithmic transparency", "year": "2018" }, { "authors": "K Aldrup; B Carstensen; U Klusmann", "journal": "Educational Psychology Review", "ref_id": "b77", "title": "Is empathy the key to effective teaching? 
a systematic review of its association with teacherstudent interactions and student outcomes", "year": "2022" }, { "authors": "J Sun; W Gan; H Chao; P S Yu", "journal": "", "ref_id": "b78", "title": "Metaverse: Survey, applications, security, and opportunities", "year": "2022" }, { "authors": "R Yang; L Li; W Gan; Z Chen; Z Qi", "journal": "", "ref_id": "b79", "title": "The human-centric Metaverse: A survey", "year": "2023" }, { "authors": "D Boud; E Molloy", "journal": "Assessment & Evaluation in Higher Education", "ref_id": "b80", "title": "Rethinking models of feedback for learning: the challenge of design", "year": "2013" } ]
[]
2023-11-22
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8" ], "table_ref": [], "text": "A multimodal model combines multiple data types, including images, text, audio, and more. Traditional large language models (LLMs) [1], [2] are primarily trained and applied to text data, but they have limitations in understanding other data types. Pure text LLMs, such as GPT-3 [3], BERT [4], and RoBERTa [5], excel in tasks like text generation and encoding, but they lack a comprehensive understanding and processing of other data types. To address this issue, multimodal LLMs integrate multiple data types, overcoming the limitations of pure text models and opening up possibilities for handling diverse data types. GPT-4 [6] serves as an excellent example of a multimodal LLM. It can accept inputs in the form of both images and text, and it demonstrates human-level performance in various benchmark tests. Multimodal perception is a fundamental component for achieving general artificial intelligence, as it is crucial for knowledge acquisition and interaction with the real world. Furthermore, the application of multimodal inputs greatly expands the potential of language models in high-value domains, such as multimodal robotics, document intelligence, and robot technology. Research indicates that native support for multimodal perception provides * Corresponding author: wsgan001@gmail.com. Please cite: J. Wu, W. Gan, Z. Chen, S. Wan, and P. S. Yu, \"Multimodal Large Language Models: A Survey,\" in IEEE International Conference on Big Data, pp. 1-10, 2023. new opportunities for applying multimodal LLMs to novel tasks. Through extensive experimentation, multimodal LLMs have shown superior performance in common-sense reasoning compared to single-modality models, highlighting the benefits of cross-modal transfer for knowledge acquisition.\nIn recent years, the development of multimodal models has showcased additional application possibilities. Apart from text generation models, multimodal models have been increasingly applied in fields such as human-computer interaction, robot control, image search, and speech generation. However, transferring the capabilities of LLMs to the domain of multimodal text and images remains an active area of research, as puretext LLMs are typically trained only on textual corpora and lack perceptual abilities for visual signals. There are several reviews for multimodal models, but each of these articles has a different focus. Summaira et al. [7] provided a detailed introduction to the application of different modalities by categorizing them based on modes. Wang et al. [8] presented a comprehensive compilation of the latest algorithms used in multimodal large-scale models and the datasets employed in recent experiments, offering convenience to readers. Yin et al. [9] classified and differentiated various types of multimodal algorithms in recent years within their review.\nHowever, these articles primarily start with an introduction to large-scale models, lacking an overview of the development process and practical applications of multimodal models. This paper aims to address this gap by starting with the fundamental definition of multimodal. 
It provides an overview of the historical development of multimodal algorithms and discusses the potential applications and challenges in this field.\n• We start by defining the concept of multimodal models/algorithms, and then delve into the historical development of multimodal algorithms. • We provide a practical guide for various technical aspects related to multimodal models, including knowledge representation, learning objective selection, model construction, information fusion, and the usage of prompts. • We review the up-to-date algorithms used in multimodal models, along with commonly used datasets. This provides basic resources for future research and evaluation. • Finally, we explore several applications of multimodal models and discuss several key challenges that arise from their current development.\nThe rest of this article is organized as follows: In Section II, we discuss related concepts of the multimodal. In Section III, we indicate the practical guide for technical points. Moreover, in Section IV, we organized relevant models. Moreover, we present several promising directions for multimodal and various types of datasets in Section V and highlight the challenges in Section VI. Finally, we conclude this survey in Section VII." }, { "figure_ref": [ "fig_0", "fig_2" ], "heading": "II. RELATED CONCEPTS", "publication_ref": [ "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b25" ], "table_ref": [], "text": "Multimodal refers to expressing or perceiving complex things through multiple modalities, as shown in Fig. 1. Multi-modality can be classified into homogeneous modalities, such as images captured from two different cameras, and heterogeneous modalities, such as the relationship between images and textual language. Multimodal data, from a semantic perception standpoint, refers to the integration of information from various sensory modalities, such as visual, auditory, tactile, and olfactory inputs, to form a unified and meaningful representation of the environment [10]. From a data perspective, multimodal data can be seen as a combination of different data types, such as images, numerical data, text, symbols, audio, time series, or complex data structures composed of sets, trees, graphs, and even combinations of various information resources from different databases or knowledge bases. The exploration and analysis of heterogeneous data sources can be understood as multimodal learning. Using multimodal data allows for a more comprehensive and holistic representation of things, making multimodal research an important area of study. Significant breakthroughs have been achieved in areas such as sentiment analysis, machine translation, natural language processing, and cutting-edge biomedical research [11] by leveraging multimodal approaches.\nDuring the evolution of multimodal research, four distinct stages can be identified, as shown in Fig. 2.\nSingle modality . It was characterized by its reliance on basic computing capabilities. In the 1980s, statistical algorithms and image-processing techniques were used for face recognition systems. The work laid the foundation for early methods in this field. Concurrently, the research team at IBM made significant contributions to speech recognition, e.g., the use of hidden Markov models (HMMs) [12], which improved the accuracy and reliability of speech recognition technology. Further progress was made in the 1990s. 
Kanade's team developed the Eigenfaces method for face recognition [13]. This utilized principal component analysis (PCA) to extract facial features and recognize individuals based on statistical patterns in face images [14]. Companies such as Dragon Systems focused on advancing speech recognition systems, developing technology capable of converting spoken language into written text with increasing accuracy [15].\nModality conversion (2000)(2001)(2002)(2003)(2004)(2005)(2006)(2007)(2008)(2009)(2010). In this stage, researchers devoted significant resources to the study of human-computer interaction. The goal was to enable computers to simulate human behavior and enhance convenience in the daily lives of people. Several notable advancements took place during this period. In 2001, the AMI project proposed the utilization of computers to record and process meeting data. This project aimed to develop technologies that could analyze audio, video, and text data from meetings, enabling more efficient information retrieval and collaboration [16]. In 2003, the CALO project made significant contributions by introducing chatbot technology, which served as the predecessor to Siri. The CALO project, which stands for \"Cognitive Assistant that Learns and Organizes\", aimed to develop an intelligent virtual assistant capable of understanding and responding to human language and performing tasks [17]. In 2008, the social signal processing (SSP) project introduced the concept of social signal processing networks. This project focused on analyzing non-verbal cues, such as facial expressions, gestures, and voice tones, to understand social interactions and facilitate more natural human-computer communication [18].\nModality fusion (2010-2020). In this stage, the integration of deep learning techniques and neural networks led to notable advancements in the field. In 2011, a pioneering multimodal deep learning algorithm was introduced by Ngiam [19]. This algorithm played a crucial role in advancing the field by enabling the fusion and analysis of multiple modalities, such as images and text. It facilitated the joint learning of features from different modalities and contributed to enhanced performance in tasks like image classification, speech recognition, and video analysis. In 2012, a multimodal learning algorithm based on deep Boltzmann machines (DBMs) [20] aimed to model the dependencies and interactions between different modalities. By leveraging the power of deep learning and the generative modeling capabilities of DBMs, we can capture the intricate relationships among modalities and improve the understanding and representation of complex multimodal data. In 2016, a neural image captioning algorithm with semantic attention was introduced [21], revolutionizing the way images were processed and described. This algorithm had the functionality to generate descriptive captions for images, allowing automated image understanding and interpretation. By combining computer vision techniques with deep neural networks, the algorithm could analyze the visual content of an image and generate human-like descriptions, improving accessibility and enabling applications like automatic image tagging, image search, and assistive technologies for the visually impaired.\nLarge-scale multimodal (2020-?). The rapid development of large-scale models has opened up new opportunities for multimodal algorithms. In 2021, the CLIP model was introduced [22]. 
By shattering the conventional paradigm of fixed category labels, CLIP liberates the burden of assembling massive datasets with predetermined class counts. Instead, CLIP empowers the collection of image-text pairs and leverages unsupervised techniques to either predict their similarity or generate them. In 2022, DALL-E 2, a product in Ope-nAI, utilizes a diffusion model conditioned on CLIP image embeddings [23]. It can generate high-quality images and artwork based on text prompts. Microsoft also introduced BEiT-3 (BERT Pretraining of Image Transformers) [24]. BEiT-3 uses a shared multiway transformer structure to complete pre-training through masked data. It can be migrated to various downstream tasks of vision and visual language. In 2023, KOSMOS-1 was released by Microsoft [25]. KOSMOS-1 is a cutting-edge multimodal LLM that boasts an impressive array of capabilities, including the ability to process and integrate information from diverse modalities, follow instructions with precision, and adapt to new contexts through in-context learning. This model integrates language and perception to enable itself to see and speak, making it proficient in tasks such as visual dialogue, image captioning, and zero-shot image classification. Another notable model, namely PaLM-E [26], combines advanced language and vision models, e.g., PaLM and ViT-22B. They could excel in visual tasks like object detection and scene classification, while also demonstrating proficiency in language tasks, e.g., generating code and solving math equations. PaLM-E provides a new benchmark in visuallanguage performance without task-specific fine-tuning." }, { "figure_ref": [ "fig_3" ], "heading": "III. PRACTICAL GUIDE FOR TECHNICAL POINTS", "publication_ref": [ "b26", "b27", "b28", "b29", "b30", "b29", "b22", "b31", "b32", "b33", "b34", "b35", "b36", "b37", "b22" ], "table_ref": [], "text": "The technical points of multimodal large models include, but are not limited to, knowledge representation, learning objective selection, model structure construction, information fusion, and the usage of prompts, as shown in Fig. 3.\nKnowledge representation. Both text and images require tokenization and embedding. Tokens are the basic units of input for the models, while embeddings are the vector representations of tokens used for calculations. In the case of text, Word2Vec [27] was commonly used for tokenization, including some methods like CBOW and Skip-gram. Although Word2Vec is computationally efficient, it suffers from vocabulary limitations. As a result, subword tokenization methods, such as byte pair encoding [28], divide words into smaller units. This approach has been applied to various transformer models, like BERT. In contrast, image tokenization is more complex than text. It can be categorized into three types [29], including region-based, grid-based, and patch-based. Regionbased methods utilize pre-trained object detectors to extract features. Grid-based methods directly apply convolutional neural networks to extract grid-based information from images. While patch-based methods involve dividing the image into smaller blocks and extracting linear projections from those blocks. According to the data from the METER model [30], optimizing the visual feature side has a much greater impact on the results than optimizing the text side. In the construction of multimodal pretraining models, the embedding layers or complexity of visual features surpass those of text features, highlighting the significance of visual information. 
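To make the two representation routes above concrete, the following minimal sketch (Python/PyTorch) tokenizes a sentence with a pretrained subword tokenizer and turns an image into patch embeddings; the checkpoint name, the 16x16 patch size, and the 768-dimensional width are illustrative placeholder choices, not settings taken from any specific surveyed model.

    import torch
    import torch.nn as nn
    from transformers import AutoTokenizer

    # Text side: subword tokenization followed by an embedding lookup.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")   # placeholder checkpoint
    ids = tokenizer("a dog is chasing a ball", return_tensors="pt")["input_ids"]
    text_tokens = nn.Embedding(tokenizer.vocab_size, 768)(ids)        # (1, seq_len, 768)

    # Image side: patch-based tokenization. A stride-16 convolution is equivalent
    # to cutting the image into 16x16 patches and applying one shared linear
    # projection to every patch.
    patch_embed = nn.Conv2d(3, 768, kernel_size=16, stride=16)
    image = torch.randn(1, 3, 224, 224)                               # dummy RGB image
    image_tokens = patch_embed(image).flatten(2).transpose(1, 2)      # (1, 196, 768)

    # Both modalities are now sequences of 768-d tokens that a Transformer
    # (shared or modality-specific) can consume.
    print(text_tokens.shape, image_tokens.shape)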
Multimodal models can learn more knowledge from visual features. Learning objectives selection. It is crucial in multimodal pretraining. Currently, common learning tasks in multimodal pretraining include image-text contrast (ITC), masked language modeling (MLM), masked visual modeling (MVM), and image-text matching (TM) [31]. ITC involves constructing positive and negative sample pairs to align images and texts through contrastive learning. In addition, by leveraging MLM and MVM techniques, it can learn to infer the subtle connections between language and visual data by reconstructing masked linguistic tokens from a combination of linguistic knowledge and visual cues. In this way, it could improve its ability to comprehend and generate multimodal content. TM can be seen as a binary classification task that aims to predict whether an image and text pair match. In general, using different learning objectives in combination can enhance the performance of multimodal models. For instance, in the UNITER model, incorporating more learning objectives generally leads to better results. UNITER utilizes multiple learning objectives, such as MLM and ITC, and performs well across various specialized scenarios. However, using too many learning objectives may not always yield favorable results. This was validated in the experiment on the METER [30].\nModel construction. Based on the different model structures, multimodal models can be categorized into encoder-only and encoder-decoder models. Encoder-only models utilize only the encoder part of the Transformer. The multimodal input is directly processed by the encoder to produce the output. Common examples of encoder-only models include CLIP [23] and ALBEF [32], which are suitable for tasks like imagetext retrieval but not ideal for tasks like image captioning. The encoder-decoder models incorporate both the encoder and decoder parts of the Transformer. The decoder receives the previously generated tokens and its own output to generate the output sequence auto-regressively. Encoder-decoder models, such as T5 [33] and SimVLM [34], leverage the decoder's capabilities and are suitable for generation tasks, but may not be as well-suited for tasks like image-text retrieval.\nInformation fusion. After encoding different modalities separately, it is necessary to design an encoder for multimodal encoding. Based on different fusion methods, multimodal models can be categorized into fusion encoder and dual encoder models [35]. The fusion encoder utilizes fusion methods to interact between modalities. Through self-attention or crossattention operations, the fusion encoder generates fused representations of the modalities. Fusion methods mainly include single-stream and dual-stream approaches. The single-stream approach assumes that there exists a simple alignment or correlation between the two modalities, and applies self-attention mechanisms directly to the modalities before concatenating them. The dual-stream model assumes that intra-modal and cross-modal interactions should be modeled separately to obtain better multimodal representations using cross-attention mechanisms. Fusion encoders model cross-modal interactions at different levels and have achieved good performance in certain inference tasks. However, in tasks such as imagetext retrieval, encoding the interactions of all image-text pairs leads to slow inference speed. The dual encoder employs separate single-modal encoders to encode the two modalities. 
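A minimal sketch of this dual-encoder setting, combined with the image-text contrastive (ITC) objective discussed earlier, is given below; the random tensors stand in for the outputs of any two single-modal encoders, and the temperature value is an illustrative default rather than a number reported by a particular model.

    import torch
    import torch.nn.functional as F

    def itc_loss(image_feats, text_feats, temperature=0.07):
        # image_feats, text_feats: (batch, dim) outputs of two separate
        # single-modal encoders, i.e. the dual-encoder setting.
        image_feats = F.normalize(image_feats, dim=-1)
        text_feats = F.normalize(text_feats, dim=-1)

        # Dot-product similarity between every image and every text in the batch;
        # matched pairs lie on the diagonal, all other pairs act as negatives.
        logits = image_feats @ text_feats.t() / temperature    # (batch, batch)
        targets = torch.arange(logits.size(0), device=logits.device)
        loss_i2t = F.cross_entropy(logits, targets)
        loss_t2i = F.cross_entropy(logits.t(), targets)
        return (loss_i2t + loss_t2i) / 2

    # Toy usage with random features standing in for encoder outputs.
    loss = itc_loss(torch.randn(8, 512), torch.randn(8, 512))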
After sufficient encoding, a simple dot product or shallow attention layer is used to calculate similarity scores between them, without relying on complex Transformer structures. The fusion encoder is suitable for inference tasks, while the Dual encoder is suitable for retrieval tasks. Therefore, we combine different model architectures or information fusion methods to enhance the capabilities of multimodal models. This is also the mechanism behind the implementation of multimodal unification. For example, VLMO adopts the \"Three Experts\" approach, pretraining on image-only, text-only, and imagetext data to handle different modalities, and achieves good performance in tasks such as inference and retrieval [36].\nThe usage of prompt. The prompt method is primarily used to reduce the gap between pretraining and fine-tuning in downstream tasks. By modifying the templates of downstream tasks, prompt aims to minimize the differences between pretraining and fine-tuning, thereby reducing the cost of finetuning and improving the model's performance in downstream applications. It has the ability to handle zero or small data samples, which has been widely adopted in various LLMs [37]. The prompt method plays a crucial role in multimodal pretraining tasks as well. For example, in visual ChatGPT [38], a prompt manager is used to generate informative prompts that facilitate ChatGPT's understanding and generation of related images. In CLIP, the prompt method is applied in zero-shot tasks by generating informative prompts for text, resulting in improved performance [23]." }, { "figure_ref": [], "heading": "IV. PRACTICAL GUIDE FOR ALGORITHMS", "publication_ref": [ "b38", "b39", "b23" ], "table_ref": [], "text": "The algorithms in multimodal can be categorized into two types, including foundation models and large-scale multimodal pre-trained models. The foundation modal is the basic framework for multimodal. Many novel large-scale multimodal pretrained models are improved based on it.\nA. Foundation model.\nTransformer [39] was proposed in 2017, disrupting traditional deep learning models and achieving good performance in machine translation tasks. It gained attention for its ability to undergo self-supervised pre-training on largescale corpora and subsequent fine-tuning on downstream tasks. This paradigm has been followed by many pre-trained largescale models. The weight-sharing property of the Transformer, which is independent of the input sequence length, makes it suitable for multimodal applications. Certain modules within the model can share weight parameters. The weight-sharing concept in the Transformer arises from the fact that both the self-attention module and the feed-forward neural network are unaffected by the length of the input sequence. This weightsharing concept can also be applied to multimodal models. For example, in a multimodal setting involving images and text, the weight parameters learned from image training can be used for text training, and the results remain effective, sometimes even without the need for fine-tuning.\nVIT. The exceptional performance of the Transformer model with its self-attention mechanism in the domain of natural language processing (NLP) has attracted much attention in computer vision. Many studies have started to incorporate the Transformer mechanism into computer vision tasks. However, the Transformer has limitations in terms of input data size, requiring careful consideration of input strategies. 
Google, drawing inspiration from previous work, proposed the vision transformer (ViT) model, empowered by powerful computational resources. The ViT model addresses the input size limitation by segmenting images into patches (e.g., dividing an image into 16 patches) [40]. These patches are then processed and transformed into inputs that the Transformer can handle through linear mapping. This breakthrough has bridged the gap between computer vision and NLP. ViT not only enables the Transformer to process images but also introduces more efficient image feature extraction strategies compared to previous approaches.\nBEiT. If ViT can be seen as the adaptation of the Transformer model in computer vision, then BEiT can be considered as the adaptation of BERT in computer vision [24]. Generative pre-training is an important method and training objective in self-supervised learning, where the model learns how to generate data without relying on labels or manual annotations. Generative pre-training has achieved significant success in natural language processing. BEiT addresses two key challenges in generative pre-training for computer vision. The first challenge is how to convert image information into discrete tokens similar to NLP. BEiT uses the discrete visual embedding aggregation method to discretize images. The second challenge is how to incorporate image information into the pre-training process effectively. BEiT leverages the well-established ViT structure to process image information. By addressing these two points, BEiT successfully applies the masked language modeling (MLM) and masked image modeling (MIM) methods to the field of computer vision, bringing generative pre-training into the domain of computer vision and enabling large-scale self-supervised pre-training." }, { "figure_ref": [], "heading": "B. Large-scale multimodal pre-trained models", "publication_ref": [ "b37", "b40", "b41", "b42", "b43", "b44", "b45", "b46", "b47", "b48" ], "table_ref": [], "text": "Visual ChatGPT [38] incorporates different visual foundation models (VFMs) to handle various visual tasks, such as image understanding and generation. This allows users to send and receive not only languages but also images, enabling complex visual questions and instructions that require the collaboration of multiple AI models with multi-steps. This system also introduces Prompt Manager, which helps leverage VFMs and receive their feedback in an iterative This iterative process continues until the system meets the requirements of users or reaches the ending condition. By injecting visual model information into ChatGPT through prompts, the system aligns visual features with the text space, enhancing the visual understanding and generation capabilities of ChatGPT. Visual ChatGPT has the ability to handle modalities beyond languages and images. While the system initially focuses on languages and images, it opens up possibilities for incorporating other modalities like videos or voices. This flexibility eliminates the need to train a completely new multi-modality model every time a new modality or function is introduced.\nMM-REACT [41] combines ChatGPT with various visual models to enable multi-modal tasks, primarily demonstrated through the VQA format. In answering questions, ChatGPT utilizes visual models as tools and decides whether to use them based on the specific question. This system shares similarities with previous works that used caption models and languageimage models for VQA. 
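The control flow shared by Visual ChatGPT and MM-REACT, in which a language model decides whether to call a vision tool and folds the tool's textual output back into its context, can be approximated by the simplified sketch below; answer_with_tools, the TOOL/ANSWER reply convention, and the stub components are hypothetical illustrations, not the actual interfaces of either system.

    def answer_with_tools(question, image, llm, tools, max_steps=3):
        """Hypothetical dispatch loop: the LLM either names a vision tool to run
        or produces a final answer; tool outputs are appended as text context."""
        context = []
        for _ in range(max_steps):
            prompt = "\n".join(context + [
                f"Question: {question}",
                "Reply with TOOL:<name> or ANSWER:<text>.",
            ])
            reply = llm(prompt)
            if reply.startswith("ANSWER:"):
                return reply[len("ANSWER:"):].strip()
            tool_name = reply[len("TOOL:"):].strip()
            observation = tools[tool_name](image)   # e.g., a caption or detection string
            context.append(f"Observation from {tool_name}: {observation}")
        return llm("\n".join(context + [f"Question: {question}", "Give the final answer."]))

    # Wiring with stub components standing in for ChatGPT and the vision models.
    stub_llm = lambda prompt: "ANSWER: a dog riding a skateboard"
    print(answer_with_tools("What is in the picture?", image=None, llm=stub_llm, tools={}))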
In those approaches, the caption model converted images into text, which was then used as evidence by a larger model to generate answers. However, MM-REACT differs in its ability to autonomously decide whether to invoke visual models.\nFrozen [42] introduced the novel concept of employing LLMs in multi-modal in-context learning. The specific approach involves transforming images into embeddings using a visual encoder. These embeddings are then concatenated with the text, creating a combined data format that integrates both modalities. Subsequently, the model uses an autoregressive approach to predict the next token. Throughout the training process, the LLM remains frozen, while the visual encoder is trainable. This allows the final model to retain its language modeling capabilities while acquiring the ability to perform contextual learning in a multi-modal setting.\nBLIP-2 [43] adopts a similar approach to Flamingo in encoding images, utilizing a Qformer model to extract image features. The Qformer plays a role analogous to Flamingo's perceiver resampler. This model then facilitates image-text interaction through cross-attention. During training, BLIP-2 freezes both the visual encoder and LLMs and only finetunes the Qformer. However, when fine-tuning on specific downstream task datasets, BLIP-2 unlocks the visual encoder and fine-tunes it alongside Qformer. The training process for BLIP-2 consists of two stages. i) Only Qformer and the visual encoder participate in training. They are trained using classic multi-modal pretraining tasks such as imagetext matching, contrastive learning, and image-grounded text generation. This stage enables Qformer to learn how to quickly extract text-related features from the visual encoder. ii) The Qformer-encoded vectors are inserted into the LLM for caption generation. BLIP-2 demonstrates promising performance in both zero-shot and fine-tuning scenarios for VQA. It has good transferability across different datasets for the same task.\nLLaMA-Adapter [44] introduces efficient fine-tuning in LLaMA by inserting adapters, which can be extended to multimodal scenarios. Adapters are adaptation prompt vectors that are concatenated to the last layers of the Transformer as tunable parameters. When applied to multi-modal settings, images are first encoded into multiscale feature vectors using a frozen visual encoder. These vectors are then aggregated through concatenation and projection operations before being element-wise added to the adaptation prompt vectors.\nMiniGPT-4 [45] is a reproduction of certain functionalities of GPT-4 based on the combination of BLIP-2 and Vicuna. It directly transfers the Qformer and visual encoder from BLIP-2 and freezes them along with LLM, leaving only a linear layer on the visual side for fine-tuning. This compression of tunable parameters results in a model size of 15 M. Additionally, a two-stage fine-tuning strategy is adopted. i) Caption generation is used as the training task. The model generates multiple captions, and then these captions are rewritten using ChatGPT to create detailed and vivid descriptions. ii) A set of highquality image-text pairs is constructed for further fine-tuning. This set of image-text pairs is used to refine the model.\nLLaVA [46] and MiniGPT-4 are similar, as both aim to achieve multimodal instruction fine-tuning. However, they differ in terms of data generation and training strategies, leading to the development of the LLaVA model. 
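The recipe shared by Frozen, BLIP-2, and MiniGPT-4 above, in which a frozen language model receives visual features through a small trainable mapping, can be summarized by the simplified module below; which parts stay frozen, the single linear projection, and the constructor arguments are illustrative simplifications (Frozen trains the visual encoder, BLIP-2 trains a Qformer, MiniGPT-4 trains only a linear layer).

    import torch
    import torch.nn as nn

    class VisualPrefixLM(nn.Module):
        """Simplified sketch: frozen vision encoder and frozen LLM; only a linear
        projection maps visual features into the LLM's embedding space."""

        def __init__(self, vision_encoder, llm, vis_dim, llm_dim):
            super().__init__()
            self.vision_encoder = vision_encoder
            self.llm = llm
            for p in self.vision_encoder.parameters():   # keep both backbones frozen
                p.requires_grad = False
            for p in self.llm.parameters():
                p.requires_grad = False
            self.proj = nn.Linear(vis_dim, llm_dim)       # the only trainable piece

        def forward(self, image, text_embeds):
            vis_tokens = self.proj(self.vision_encoder(image))     # (B, N, llm_dim)
            inputs = torch.cat([vis_tokens, text_embeds], dim=1)   # visual prefix + text
            return self.llm(inputs)                                # next-token prediction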
In data generation, LLaVA leverages GPT-4 to create diverse instruction finetuning data, including multi-turn QA, image descriptions, and complex reasoning tasks. This ensures that the model is capable of handling a wide range of queries. Since the current interface of GPT-4 only accepts text inputs, image information needs to be transformed into textual format. This study uses the five captions and bounding box coordinates provided for each image in the COCO dataset as textual descriptions inputted to GPT-4. Regarding the training strategy, LLaVA adopts a two-stage approach. i) The model is fine-tuned using 600,000 image-text pairs filtered from the cc3m dataset according to specific rules. The fine-tuning process freezes the visual and language models, focusing only on fine-tuning the linear layer. ii) Using the aforementioned data generation strategy, 160,000 instruction fine-tuning data samples are generated. The model is then further fine-tuned using language model loss. During this stage, the visual model is frozen, and both the linear layer and the language model are fine-tuned.\nPICa [47] was the first attempt to use LLMs for solving the VQA task. Its objective was to enable LLM to understand and process image information. To achieve this, previous research employed a caption model to convert images into corresponding textual descriptions. The caption, along with the question, was then inputted into GPT-3, forming a triplet (question, caption, answer), and in-context learning was utilized to train GPT-3 to answer new questions. In the few-shot in-context learning scenario, PICa achieved better performance than Frozen but still fell short of Flamingo. This can be attributed to the loss of visual information during the conversion of images into captions. Visual information plays a crucial role in answering questions, and the process of converting images into text inevitably leads to a loss of visual details and semantics, limiting the model's performance.\nPNP-VQA [48] utilizes a caption model and pre-trained language model (PLM) to address the VQA task. However, it differs from PICa in terms of the choice of PLM, as it employs a question-answering model called UnifiedQAv2. PNP-VQA focuses on achieving zero-shot VQA capability. To address the issue of losing image information in captions, PNP-VQA introduces an image-question matching module before generating the captions. This module identifies patches in the image that are most relevant to the given question. Captions are then generated specifically for these selected patches. These caption-patch pairs, along with the original question, are used as context and fed into the UnifiedQAv2 model. This approach ensures that the generated captions are closely related to the question by incorporating relevant image patches as context. By incorporating the Image-Question Matching module and utilizing UnifiedQAv2 as the PLM, PNP-VQA aims to improve the relevance and accuracy of the generated captions for VQA. This strategy allows the model to effectively leverage both image and question information in order to generate more contextually relevant answers.\nImg2LLM [49] aims to address two main challenges when using LLM for VQA tasks. i) Modality disconnection, where LLM cannot handle visual information effectively; ii) Task disconnection, where LLM, pre-trained through text generation, struggles to utilize captions for VQA without fine-tuning. 
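The caption-as-context prompting that PICa introduced, and that PNP-VQA and Img2LLM refine, can be made concrete with the short sketch below; the demonstration triplets, the prompt wording, and the downstream text-only LLM call are illustrative placeholders rather than the original GPT-3 setup.

    def build_caption_vqa_prompt(caption, question, demos):
        """Assemble a PICa-style few-shot prompt from (caption, question, answer)
        demonstrations plus the test example; `demos` is a list of dicts."""
        lines = ["Answer the question according to the image caption."]
        for d in demos:
            lines += [f"Caption: {d['caption']}",
                      f"Question: {d['question']}",
                      f"Answer: {d['answer']}", ""]
        lines += [f"Caption: {caption}", f"Question: {question}", "Answer:"]
        return "\n".join(lines)

    demos = [{"caption": "a man riding a horse on a beach",
              "question": "What animal is shown?", "answer": "a horse"}]
    prompt = build_caption_vqa_prompt("two children playing soccer in a park",
                                      "What sport are they playing?", demos)
    # `prompt` would then be sent to a text-only LLM; the answer quality hinges on
    # how much visual detail survives in the caption, which is exactly the
    # modality-disconnection issue raised above.
    print(prompt)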
To overcome these challenges, the authors propose transferring visual information through (question, answer) pairs. Specifically, the approach involves generating captions for images using a caption model or a method similar to PNP-VQA. From these captions, relevant words, such as nouns and adjectives, that could potentially serve as answers to certain questions are extracted. Subsequently, a question generation model is used to generate corresponding questions, thus creating (question, an-swer) pairs. These pairs serve as demonstrations in in-context learning, aiding LLM in answering questions about the given image. By transmitting visual information through (question, answer) pairs, Img2LLM addresses modality disconnection and task disconnection issues, enabling LLM to better utilize visual information for VQA tasks." }, { "figure_ref": [], "heading": "V. PRACTICAL GUIDE FOR VARIOUS TASKS", "publication_ref": [ "b49", "b22", "b50", "b51", "b52", "b53", "b54", "b55", "b56", "b57", "b58", "b59", "b60", "b61" ], "table_ref": [ "tab_1" ], "text": "Image captioning. Image captioning is a task that involves generating short textual descriptions for given images. It is a multimodal task that deals with multimodal datasets consisting of images and short textual descriptions. Multimodal translation tasks are open-ended and subjective, so the generated content is not unique. The goal of this task is to convert visual representations into textual representations to address the translation challenge. Models that convert visual modalities into text need to capture the semantic information of the images and need to detect key objects, actions, and features of the objects. Moreover, it should infer the relationships between objects in the image. Image captioning can be used to provide textual alternatives for images, which is particularly helpful for blind and visually impaired users [50]. By generating short textual descriptions, these users can better understand and perceive the content of the images. It provides them with an opportunity to interact with the visual world, enhancing their experience and engagement.\nText-to-Image generation. Text-to-image generation is indeed one of the most popular applications of multimodal learning. It addresses the challenge of translating text into images. Models such as OpenAI's DALL-E 2 [23] and Google's Imagen [51] have made significant breakthroughs in this area, attracting widespread attention. The work of these models can be the inverse process of image captioning. By providing short textual descriptions as prompts, text-to-image models can generate novel images that accurately reflect the semantics of the text. Recently, there has also been an emergence of text-tovideo models. These models have a wide range of applications. They can assist in photo editing and graphic design, while also providing inspiration for digital art. They offer users a tool to directly convert text into visual content, driving the development and innovation of the creative industry. The advancements in these technologies offer new possibilities for creating and understanding images.\nSign language recognition. The goal of this task is to recognize sign language gestures and convert them into text. Gestures are captured through cameras. To accurately recognize the gestures, the corresponding audio and both modalities must be aligned. 
Sign language recognition is a task based on alignment methods, as it requires the model to align the temporal information of the visual, such as video frames, and audio modalities, such as audio waveforms [52]. This involves aligning the time between video frames and audio waveforms to identify the gestures and their corresponding spoken language. One commonly used open-source dataset for sign language recognition is the RWTH PHOENIX Weather 2014T dataset [53], which contains video recordings of German sign language from different signers. The dataset provides both visual and audio modalities, making it well-suited for multimodal learning tasks that rely on alignment methods. By aligning the temporal information of the video and audio, models can leverage both visual and audio features for sign language recognition, thereby improving the accuracy and effectiveness of recognition.\nEmotion recognition. While emotion recognition can be performed using only a single-modal dataset, performance can be improved by utilizing multimodal datasets as input. Multimodal inputs can take the form of video, text, and audio or can incorporate sensor data such as brainwave data [54]. A real-world example is emotion recognition in music. In this task, the model needs to identify the emotional content of music using audio features and lyrics. In such cases, employing a late fusion approach is appropriate, as it combines the predictions of models trained on individual modalities such as audio features and lyrics to generate the final prediction. The DEAM dataset is specifically designed to support research on music emotion recognition and analysis. It includes audio features and lyrics for over 2,000 songs [55]. The audio features encompass various descriptors like MFCC, spectral contrast, and rhythm features, while lyrics are represented using techniques like bag-of-words and word embeddings.\nVideo processing. In the domain of video and audio, multimodal fusion is also a growing trend. With the migration of image-text multimodal models to video-text and audiotext multimodal domains, a series of representative models have emerged. For example, the VideoCoCa model [56] for the image-text domain. The CLIP model led to the development of the VideoCLIP model [57]. The advent of unified multimodal large models has also driven advancements in the field of video processing. Alibaba's mPLUG-2 [58] has shown impressive performance in video-related tasks, e.g., video question answering and video captioning. Moreover, Google's MusiclM [59] has gained recognition in the audio multimodal domain, as it can generate music based on text inputs. In addition, the video and audio domains involve a range of other multimodal tasks. Audio-visual speech recognition is the task of performing speech recognition on given videos and audio of individuals. Video sound source separation involves localizing and separating multiple sound sources in a given video and audio signal. Image generation from audio refers to generating images related to given sounds. Speechconditioned face generation involves generating videos of a speaking person based on given speech utterances. There are some tasks like audio-driven 3D facial animation, which can generate 3D facial animations of a speaking person based on a given speech, and a 3D facial template [60].\nSmarter digital human. AIGC technologies [61] have played an important role in the development of digital humans, simplifying the process and enhancing development efficiency. 
Companies like Meta and NVIDIA have introduced products to assist users in creating 3D digital humans, with NVIDIA's Omniverse Avatar being an example. Users can create digital humans by uploading photos, videos, or audio, offering the advantages of efficiency and cost-effectiveness. Specifically, natural language generation technology impacts the quality of content in human-computer interactions, while computer vision technology affects the facial expressions and body movements of digital humans, such as lip synchronization [62]. The continuous advancement of AIGC technologies enables high-quality human-computer interactions. AIGC empowers AI-driven digital humans with intelligent development, providing recognition, perception, analysis, and decision-making capabilities during multimodal interactions.\nPractical guide for data. Multimodal datasets play a crucial role in advancing research on vision and language tasks. These datasets combine different modalities, such as images, text, videos, and audio, providing rich and diverse sources of information for various applications. We categorize the multimodal datasets into different types and present a curated selection of representative datasets for each category, as shown in Table II. For future research, we can use these datasets to conduct experiments to test the model's effectiveness. " }, { "figure_ref": [], "heading": "VI. CHALLENGES", "publication_ref": [ "b70", "b71", "b25", "b72", "b73", "b73", "b72", "b74", "b75", "b1" ], "table_ref": [], "text": "To further improve the performance of multimodal applications, some fundamental issues still require more attention, including but not limited to:\nModalities expansion. The sensors and data sources are diverse, so they can acquire rich information in order to achieve more comprehensive and accurate analysis and recognition. For example, in the field of emotion computation, modality expansion involves using multiple modalities such as audio, facial expressions, electrocardiography (ECG), and electroencephalography (EEG) to gain a more comprehensive understanding and recognition of people's emotional states [71]. The audio modality can capture changes in the speaker's tone and speech rate; the visual modality can analyze facial expressions and body language; and the ECG and EEG can provide physiological signals related to emotional changes. In addition, the field of medical imaging involves multiple modalities such as CT scans, MRIs, and PET. For example, CT scans can provide detailed information about tissue structure and lesions; MRI can observe the anatomical structures and functionality of tissues; and PET can be used to detect metabolism and the distribution of biomarkers. By combining different modalities of image data, doctors, and researchers can obtain more comprehensive and accurate medical information to support precise diagnosis and treatment decisions.\nTime-consuming problem. For optimizing training architectures and improving training time, large models have a significant impact on AI systems. Firstly, due to the models' enormous scale, computations may need to be distributed across clusters. Secondly, multi-user and multi-task scenarios are common, requiring support for multi-tenancy. Moreover, high reliability is essential, demanding models to have dynamic fault tolerance capabilities. Multiple backbone models need to be combined. 
While multimodal LLMs have achieved tremendous success in various domains, their computational requirements pose significant challenges to model training. How can we accelerate model training [72]? We can dynamically allocate multiple models of different architectures to two high-speed interconnected data centers. During training and inference, pathways dynamically schedule models through gang scheduling, enabling capabilities such as shared computation, shared weights, and dynamic routing [26].\nLifelong/continual learning. The current classic approach is to run an AI algorithm on a given dataset, build a model, and then apply this model to an actual task. This is called isolated learning and causes the shortcoming that the algorithm does not have memory capabilities. Therefore, the model or algorithm does not retain the learned knowledge and then continually apply it to future learning. For real applications but not an isolated task, multimodal large models require the ability of lifelong learning [73] or continual learning [74]. We should build an LLM with continuous learning capabilities that can make a complex understanding of the world based on its own experience, thereby using more complex knowledge for autonomous and progressive training and improvement [74].\nTowards AGI. On the path toward artificial general intelligence (AGI), we still face many opportunities and challenges. For example, the catastrophic forgetting problem [73] refers to the phenomenon where a neural network and its associated weights, originally trained for a language task, are repurposed for other tasks, resulting in the network forgetting its initial training objectives. In such cases, the large model may lose its original language capabilities, leading to a decline. For example, in language ability when shifting to robotic-based applications [75]. Recent research like BLIP-2, KOSMOS-1, BEiT-3, and PaLI [76] has highlighted two feasible approaches to address this issue: i) avoid catastrophic forgetting by using smaller networks and retraining from scratch with new data; ii) circumvent catastrophic forgetting by employing larger language networks as backbones. Note that there are still other challenges when pursuing AGI, including multimodal fusion, multimodal alignment, co-learning, and model-as-aservice (MaaS) [2]." }, { "figure_ref": [], "heading": "VII. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "The advancements in multimodal models have opened up new avenues for AI, which enables binary machines to understand and then process diverse data types. Multimodal models will lead to more comprehensive and intelligent systems in the near future. We have provided a comprehensive exploration of multimodal model development. We first introduced the multimodal concept and then sorted out the historical development of multimodal algorithms. After that, we discussed the efforts of major technology companies in developing multimodal products and offered insights into the technical aspects of multimodal models. We also presented a compilation of commonly used datasets that can provide valuable experimentation and evaluation resources. Finally, the challenges associated with the development of multimodal models were highlighted and discussed for further research. By addressing these aspects, this paper aims to provide a deeper understanding of multimodal models and their potential characters in various domains." 
}, { "figure_ref": [], "heading": "ACKNOWLEDGMENT", "publication_ref": [], "table_ref": [], "text": "This research was supported in part by the National Natural Science Foundation of China (Nos. 62002136 and 62272196), and the Young Scholar Program of Pazhou Lab (No. PZL2021KF0023). Dr. Wensheng Gan is the corresponding author of this paper." } ]
Multimodal large language models integrate multiple data types, such as images, text, audio, and other heterogeneous modalities. While the latest large language models excel at text-based tasks, they often struggle to understand and process other data types. Multimodal models address this limitation by combining various modalities, enabling a more comprehensive understanding of diverse data. This paper begins by defining the concept of multimodal and examining the historical development of multimodal algorithms. Furthermore, we introduce a range of multimodal products, focusing on the efforts of major technology companies. A practical guide is provided, offering insights into the technical aspects of multimodal models. Moreover, we present a compilation of the latest algorithms and commonly used datasets, providing researchers with valuable resources for experimentation and evaluation. Lastly, we explore the applications of multimodal models and discuss the challenges associated with their development. By addressing these aspects, this paper aims to facilitate a deeper understanding of multimodal models and their potential in various domains.
Multimodal Large Language Models: A Survey
[ { "figure_caption": "Fig. 1 :1Fig. 1: The definition of multimodal.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "analysis of multiple modalities, such as images and text u DBMs -Model the dependencies and interactions between different modalities. u Image subtitle generation -Analyze the visual content of an image and generate human-like descriptions u CLIP -Leverage unsupervised techniques to process image-text data. u DALL-E 2 -Utilizes a diffusion model conditioned on CLIP image embeddings u BeiT-3 -Stands for BERT Pretraining of Image Transformers u KOSMOS-1 -Following instructions and performing in-context learning u PaLM-E -A new benchmark in visuallanguage performance without taskspecific fine-tuning.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 2 :2Fig. 2: Four distinct stages of multimodal research.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 :3Fig. 3: The technical points of multimodal models.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "The multimodal models", "figure_data": "ModelYearTechnical pointsFunctionPaperOpen-sourceTransformer2017Self-attention, positional Encoding and multi-Head AttentionMachine translation[39]✓ViT2020Patch-based Representation, linear Projection, transformer EncoderImage classification and generation[40]✓BEiT2021Discrete visual embedding aggregation and MLMImage Understanding and transfer learning[24]✓VisualChatGPT2023Invokes multiple VFMs, pre-trained LLMs and prompt managementVisual queries and instructions[38]✓MM-REACT2023Integration of ChatGPT and vision experts and textual prompt designVisual understanding tasks[41]✓Frozen2021Few-Shot learning, utilize external knowledge and soft-prompting philosophyVQA[42]✓BLIP-22023Pre-trained image encoders, querying Transformer and frozen LLMsZero-shot image-to-text generation[43]✓LLaMA-Adapter2023Fine-tuning instruction-following, learnable attention mechanism adaption prompts and zero-initializedVision and language tasks[44]✓Frozen visual encoder with a frozen LLM,Identify humorous elements withinMiniGPT-42023one projection layer and trained byimages and create websites from[45]✓image-text pairshandwritten draftsLLaVA2023Instruction tuning LLMs and end-to-end trained LLMsVisual and language understanding and multimodal chat abilities[46]✓PICa2022Utilize GPT-3 as an implicit knowledge base and prompt GPT-3 via image captionsVQA[47]✗Image-question matching module, imagePNP-VQA2022captioning module, and question answeringVision-language tasks[48]✓moduleImg2LLM2022Zero-shot generalization and without requiring end-to-end trainingVQA[49]✓", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "The multimodal datasets", "figure_data": "DatasetsYearScaleModalitiesPaperCOCO2014567KImage-Text[63]Visual Genome20175.4MImage-Text[64]YouCook220182.2KVideo-Text[65]WebVid2M20212.5MVideo-Text[66]Common Voice20199.2KAudio-Text[67]LibriSpeech20151KAudio-Text[68]M5Product20216MImage-Text-Video-Audio[69]MSR-VTT201610KImage-Text-Video-Audio[70]", "figure_id": "tab_1", "figure_label": "II", "figure_type": "table" } ]
Jiayang Wu; Wensheng Gan; Zefeng Chen; Shicheng Wan; Philip S Yu
[ { "authors": "W Gan; Z Qi; J Wu; J C W Lin", "journal": "IEEE", "ref_id": "b0", "title": "Large language models in education: Vision and opportunities", "year": "2023" }, { "authors": "W Gan; S Wan; P S Yu", "journal": "IEEE", "ref_id": "b1", "title": "Model-as-a-service (MaaS): A survey", "year": "2023" }, { "authors": "R Dale", "journal": "Natural Language Engineering", "ref_id": "b2", "title": "GPT-3: What's it good for?", "year": "2021" }, { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova", "journal": "ACL", "ref_id": "b3", "title": "BERT: Pretraining of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Y Liu; M Ott; N Goyal; J Du; M Joshi; D Chen; O Levy; M Lewis; L Zettlemoyer; V Stoyanov", "journal": "", "ref_id": "b4", "title": "RoBERTa: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "K Sanderson", "journal": "Nature", "ref_id": "b5", "title": "GPT-4 is here: What scientists think", "year": "2023" }, { "authors": "J Summaira; X Li; A M Shoib; J Abdul", "journal": "ACM Transactions on Multimedia Computing, Communications, and Applications", "ref_id": "b6", "title": "A review on methods and applications in multimodal deep learning", "year": "2022" }, { "authors": "X Wang; G Chen; G Qian; P Gao; X.-Y Wei; Y Wang; Y Tian; W Gao", "journal": "Machine Intelligence Research", "ref_id": "b7", "title": "Large-scale multi-modal pre-trained models: A comprehensive survey", "year": "2023" }, { "authors": "S Yin; C Fu; S Zhao; K Li; X Sun; T Xu; E Chen", "journal": "", "ref_id": "b8", "title": "A survey on multimodal large language models", "year": "2023" }, { "authors": "M Turk", "journal": "Pattern Recognition Letters", "ref_id": "b9", "title": "Multimodal interaction: A review", "year": "2014" }, { "authors": "J Ortega-Garcia; J Fierrez; F Alonso-Fernandez; J E A Galbally", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b10", "title": "The multiscenario multienvironment biosecure multimodal database", "year": "2009" }, { "authors": "L Bahl; P Brown; P De Souza; R Mercer", "journal": "IEEE", "ref_id": "b11", "title": "Maximum mutual information estimation of hidden markov model parameters for speech recognition", "year": "1986" }, { "authors": "S Satoh; T Kanade", "journal": "IEEE", "ref_id": "b12", "title": "Name-it: Association of face and name in video", "year": "1997" }, { "authors": "J J Lien; T Kanade; J F Cohn; C.-C Li", "journal": "IEEE", "ref_id": "b13", "title": "Automated facial expression recognition based on facs action units", "year": "1998" }, { "authors": "C S A Larocca; J J Morgan; S M Bellinger", "journal": "Computer Assisted Language Instruction Consortium Journal", "ref_id": "b14", "title": "On the path to 2x learning: Exploring the possibilities of advanced speech recognition", "year": "1999" }, { "authors": "J Carletta; S Ashby; S Bourban; M Flynn; M Guillemot; T Hain; J Kadlec; V Karaiskos; W Kraaij; M ", "journal": "Springer", "ref_id": "b15", "title": "The AMI meeting corpus: A pre-announcement", "year": "2005" }, { "authors": "G Tur; A Stolcke; L Voss; S Peters; D Hakkani-Tur; J Dowding; B Favre; R Fernández; M Frampton; M Frandsen", "journal": "IEEE Transactions on Audio, Speech, and Language Processing", "ref_id": "b16", "title": "The calo meeting assistant system", "year": "2010" }, { "authors": "A Vinciarelli; M Pantic; H Bourlard; A Pentland", "journal": "", "ref_id": "b17", "title": "Social signal processing: state-of-the-art and future 
perspectives of an emerging domain", "year": "2008" }, { "authors": "J Ngiam; A Khosla; M Kim; J Nam; H Lee; A Y Ng", "journal": "", "ref_id": "b18", "title": "Multimodal deep learning", "year": "2011" }, { "authors": "G E Hinton; R R Salakhutdinov", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b19", "title": "A better way to pretrain deep boltzmann machines", "year": "2012" }, { "authors": "Q You; H Jin; Z Wang; C Fang; J Luo", "journal": "", "ref_id": "b20", "title": "Image captioning with semantic attention", "year": "2016" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "PMLR", "ref_id": "b21", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "A Ramesh; P Dhariwal; A Nichol; C Chu; M Chen", "journal": "", "ref_id": "b22", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "H Bao; L Dong; S Piao; F Wei", "journal": "", "ref_id": "b23", "title": "BEiT: BERT pre-training of image transformers", "year": "2022" }, { "authors": "S Huang; L Dong; W Wang; Y Hao; S Singhal; S Ma; T Lv; L Cui; O K Mohammed; Q Liu", "journal": "", "ref_id": "b24", "title": "Language is not all you need: Aligning perception with language models", "year": "2023" }, { "authors": "D Driess; F Xia; M S Sajjadi; C Lynch; A Chowdhery; B Ichter; A Wahid; J Tompson; Q Vuong; T Yu", "journal": "", "ref_id": "b25", "title": "Palm-e: An embodied multimodal language model", "year": "2023" }, { "authors": "T Mikolov; K Chen; G Corrado; J Dean", "journal": "", "ref_id": "b26", "title": "Efficient estimation of word representations in vector space", "year": "2013" }, { "authors": "K Bostrom; G Durrett", "journal": "", "ref_id": "b27", "title": "Byte pair encoding is suboptimal for language model pretraining", "year": "2020" }, { "authors": "T Yang; Y Wang; Y Lu; N Zheng", "journal": "", "ref_id": "b28", "title": "Visual concepts tokenization", "year": "2022" }, { "authors": "Z.-Y Dou; Y Xu; Z Gan; J Wang; S Wang; L Wang; C Zhu; P Zhang; L Yuan; N Peng", "journal": "", "ref_id": "b29", "title": "An empirical study of training endto-end vision-and-language transformers", "year": "2022" }, { "authors": "J Rao; Z Shan; L Liu; Y Zhou; Y Yang", "journal": "", "ref_id": "b30", "title": "Retrieval-based knowledge augmented vision language pre-training", "year": "2023" }, { "authors": "J Li; R Selvaraju; A Gotmare; S Joty; C Xiong; S C H Hoi", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b31", "title": "Align before fuse: Vision and language representation learning with momentum distillation", "year": "2021" }, { "authors": "C Raffel; N Shazeer; A Roberts; K Lee; S Narang; M Matena; Y Zhou; W Li; P J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b32", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Z Wang; J Yu; A W Yu; Z Dai; Y Tsvetkov; Y Cao", "journal": "", "ref_id": "b33", "title": "SimVLM: Simple visual language model pretraining with weak supervision", "year": "2022" }, { "authors": "Z Wang; W Wang; H Zhu; M Liu; B Qin; F Wei", "journal": "", "ref_id": "b34", "title": "Distilled dual-encoder model for vision-language understanding", "year": "2022" }, { "authors": "H Bao; W Wang; L Dong; Q Liu; O K Mohammed; K Aggarwal; S Som; S Piao; F Wei", "journal": "Advances in Neural 
Information Processing Systems", "ref_id": "b35", "title": "Vlmo: Unified vision-language pre-training with mixture-of-modality-experts", "year": "2022" }, { "authors": "J White; Q Fu; S Hays; M Sandborn; C Olea; H Gilbert; A Elnashar; J Spencer-Smith; D C Schmidt", "journal": "", "ref_id": "b36", "title": "A prompt pattern catalog to enhance prompt engineering with ChatGPT", "year": "2023" }, { "authors": "C Wu; S Yin; W Qi; X Wang; Z Tang; N Duan", "journal": "", "ref_id": "b37", "title": "Visual ChatGPT: Talking, drawing and editing with visual foundation models", "year": "2023" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b38", "title": "Attention is all you need", "year": "2017" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov; D Weissenborn; X Zhai; T Unterthiner; M Dehghani; M Minderer; G Heigold; S Gelly", "journal": "", "ref_id": "b39", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "Z Yang; L Li; J Wang; K Lin; E Azarnasab; F Ahmed; Z Liu; C Liu; M Zeng; L Wang", "journal": "", "ref_id": "b40", "title": "MM-REACT: Prompting chatgpt for multimodal reasoning and action", "year": "2023" }, { "authors": "M Tsimpoukelli; J L Menick; S Cabi; S Eslami; O Vinyals; F Hill", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b41", "title": "Multimodal few-shot learning with frozen language models", "year": "2021" }, { "authors": "J Li; D Li; S Savarese; S Hoi", "journal": "", "ref_id": "b42", "title": "BLIP-2: Bootstrapping languageimage pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "R Zhang; J Han; A Zhou; X Hu; S Yan; P Lu; H Li; P Gao; Y Qiao", "journal": "", "ref_id": "b43", "title": "LLaMA-Adapter: Efficient fine-tuning of language models with zero-init attention", "year": "2023" }, { "authors": "D Zhu; J Chen; X Shen; X Li; M Elhoseiny", "journal": "", "ref_id": "b44", "title": "MiniGPT-4: Enhancing vision-language understanding with advanced large language models", "year": "2023" }, { "authors": "H Liu; C Li; Q Wu; Y J Lee", "journal": "", "ref_id": "b45", "title": "Visual instruction tuning", "year": "2023" }, { "authors": "Z Yang; Z Gan; J Wang; X Hu; Y Lu; Z Liu; L Wang", "journal": "", "ref_id": "b46", "title": "An empirical study of GPT-3 for few-shot knowledge-based vqa", "year": "2022" }, { "authors": "A M H Tiong; J Li; B Li; S Savarese; S C Hoi", "journal": "", "ref_id": "b47", "title": "Plug-and-play VQA: Zero-shot vqa by conjoining large pretrained models with zero training", "year": "2022" }, { "authors": "J Guo; J Li; D Li; A M H Tiong; B Li; D Tao; S C Hoi", "journal": "", "ref_id": "b48", "title": "From images to textual prompts: Zero-shot VQA with frozen large language models", "year": "2022" }, { "authors": "D Gurari; Y Zhao; M Zhang; N Bhattacharya", "journal": "Springer", "ref_id": "b49", "title": "Captioning images taken by people who are blind", "year": "2020" }, { "authors": "C Saharia; W Chan; S Saxena; L Li; J Whang; E L Denton; K Ghasemipour; R Gontijo Lopes; B Karagol Ayan; T Salimans", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b50", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "S Albanie; G Varol; L Momeni; T Afouras; J S Chung; N Fox; A Zisserman", "journal": 
"Springer", "ref_id": "b51", "title": "BSL-1K: Scaling up co-articulated sign language recognition using mouthing cues", "year": "2020" }, { "authors": "J Forster; C Schmidt; O Koller; M Bellgardt; H Ney", "journal": "", "ref_id": "b52", "title": "Extensions of the sign language recognition and translation corpus RWTH-PHOENIX-Weather", "year": "2014" }, { "authors": "S Zhao; G Jia; J Yang; G Ding; K Keutzer", "journal": "IEEE Signal Processing Magazine", "ref_id": "b53", "title": "Emotion recognition from multiple modalities: Fundamentals and methodologies", "year": "2021" }, { "authors": "A Aljanaki; Y.-H Yang; M Soleymani", "journal": "The Public Library of Science", "ref_id": "b54", "title": "Developing a benchmark for emotional analysis of music", "year": "2017" }, { "authors": "S Yan; T Zhu; Z Wang; Y Cao; M Zhang; S Ghosh; Y Wu; J Yu", "journal": "", "ref_id": "b55", "title": "Video-text modeling with zero-shot transfer from contrastive captioners", "year": "2022" }, { "authors": "H Xu; G Ghosh; P.-Y Huang; D Okhonko; A Aghajanyan; F Metze; L Zettlemoyer; C Feichtenhofer", "journal": "", "ref_id": "b56", "title": "VideoCLIP: Contrastive pretraining for zero-shot video-text understanding", "year": "2021" }, { "authors": "H Xu; Q Ye; M Yan; Y Shi; J Ye; Y Xu; C Li; B Bi; Q Qian; W Wang", "journal": "", "ref_id": "b57", "title": "mPLUG-2: A modularized multi-modal foundation model across text, image and video", "year": "2023" }, { "authors": "A Agostinelli; T I Denk; Z Borsos; J Engel; M Verzetti; A Caillon; Q Huang; A Jansen; A Roberts; M Tagliasacchi", "journal": "", "ref_id": "b58", "title": "MusicLM: Generating music from text", "year": "2023" }, { "authors": "A Richard; M Zollhöfer; Y Wen; F De La Torre; Y Sheikh", "journal": "", "ref_id": "b59", "title": "Meshtalk: 3D face animation from speech using cross-modality disentanglement", "year": "2021" }, { "authors": "J Wu; W Gan; Z Chen; S Wan; H Lin", "journal": "", "ref_id": "b60", "title": "AI-generated content (AIGC): A survey", "year": "2023" }, { "authors": "K Prajwal; R Mukhopadhyay; V P Namboodiri; C Jawahar", "journal": "", "ref_id": "b61", "title": "A lip sync expert is all you need for speech to lip generation in the wild", "year": "2020" }, { "authors": "T.-Y Lin; M Maire; S Belongie; J Hays; P Perona; D Ramanan; P Dollár; C L Zitnick", "journal": "Springer", "ref_id": "b62", "title": "Microsoft COCO: Common objects in context", "year": "2014" }, { "authors": "R Krishna; Y Zhu; O Groth; J Johnson; K Hata; J Kravitz; S Chen; Y Kalantidis; L.-J Li; D A Shamma", "journal": "International Journal of Computer Vision", "ref_id": "b63", "title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations", "year": "2017" }, { "authors": "L Zhou; C Xu; J Corso", "journal": "", "ref_id": "b64", "title": "Towards automatic learning of procedures from web instructional videos", "year": "2018" }, { "authors": "M Bain; A Nagrani; G Varol; A Zisserman", "journal": "", "ref_id": "b65", "title": "Frozen in time: A joint video and image encoder for end-to-end retrieval", "year": "2021" }, { "authors": "R Ardila; M Branson; K Davis; M Henretty; M Kohler; J Meyer; R Morais; L Saunders; F M Tyers; G Weber", "journal": "", "ref_id": "b66", "title": "Common voice: A massively-multilingual speech corpus", "year": "2020" }, { "authors": "V Panayotov; G Chen; D Povey; S Khudanpur", "journal": "IEEE", "ref_id": "b67", "title": "Librispeech: an asr corpus based on public domain audio books", "year": "2015" }, { 
"authors": "X Dong; X Zhan; Y Wu; Y Wei; X Wei; M Lu; X Liang", "journal": "", "ref_id": "b68", "title": "M5Product: A multi-modal pretraining benchmark for e-commercial product downstream tasks", "year": "2021" }, { "authors": "J Xu; T Mei; T Yao; Y Rui", "journal": "", "ref_id": "b69", "title": "MSR-VTT: A large video description dataset for bridging video and language", "year": "2016" }, { "authors": "S Katsigiannis; N Ramzan", "journal": "IEEE Journal of Biomedical and Health Informatics", "ref_id": "b70", "title": "DREAMER: A database for emotion recognition through eeg and ecg signals from wireless low-cost offthe-shelf devices", "year": "2017" }, { "authors": "F Zeng; W Gan; Y Wang; P S Yu", "journal": "IEEE", "ref_id": "b71", "title": "Distributed training of large language models", "year": "2023" }, { "authors": "Z Chen; B Liu", "journal": "Springer", "ref_id": "b72", "title": "Lifelong machine learning", "year": "2018" }, { "authors": "F Zenke; B Poole; S Ganguli", "journal": "PMLR", "ref_id": "b73", "title": "Continual learning through synaptic intelligence", "year": "2017" }, { "authors": "F Zeng; W Gan; Y Wang; N Liu; P S Yu", "journal": "", "ref_id": "b74", "title": "Large language models for robotics: A survey", "year": "2023" }, { "authors": "X Chen; X Wang; S Changpinyo; A Piergiovanni; P Padlewski; D Salz; S Goodman; A Grycner; B Mustafa; L Beyer", "journal": "", "ref_id": "b75", "title": "PaLI: A jointly-scaled multilingual language-image model", "year": "2022" } ]
[]
2023-11-22
[ { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "style transfer brings in 3D inconsistency issue and causes blurriness. On the other hand, training a NeRF jointly with 2D style transfer objectives shows poor convergence due to the identity and head pose gap between style image and content image. It also poses challenge in training time and memory due to the need of volume rendering for full image to apply style transfer loss functions. We therefore propose a hybrid framework of NeRF and mesh rasterization to combine the benefits of high fidelity geometry reconstruction of NeRF and fast rendering speed of mesh. Our framework consists of three stages: 1. Training a NeRF model on input face images to learn the 3D geometry; 2. Extracting a" }, { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b42" ], "table_ref": [], "text": "Style transfer for human face has been a popular research area in recent years. It has various applications in animations, advertising and gaming industry. Existing style transfer approaches for human face mainly focus on 2D image domain, where the input of the system is generally a style image and a content image, and the output is a stylized image which preserves the identity of the content image while having the style of the style image. The approaches for 2D face style transfer are usually achieved by 2D convolutional neural networks and pose 3D inconsistency issue when applied on a video or multi view images of the same face, which constraints usage of these 2D style transfer approaches in movies, animations or gaming for a consistent visual experience.\nSeveral recent studies on 3D style transfer leverage NeRF to stylize a 3D scene. They generally supervise NeRF training with style transfer objectives applied on images rendered from NeRF, which introduces training time and memory challenge due to volume rendering on large number of pixels to form the full image needed to compute style transfer losses. Stylizing-3D-Scene [5] proposed a hyper network which was conditioned on style embedding of a style image and transferred style information to the color network of NeRF. They applied style transfer losses on small image patches (32x32) to avoid issues in training time and memory. UPST-NeRF [4] also utilized a hyper network and trained on small image patches. Training with small image patches has difficulty in capturing global semantic information and leads to a loss in style transfer quality. ARF [39] proposed a nearest neighbor-based Gram matrix loss for style transfer and deferred gradient descend to optimize on full image instead of image patch. However, deferred gradient descend significantly slows down the training process as it doesn't reduce the computation needed for volume rendering full resolution image.\nTo reduce training time and memory of NeRF, recent work [7,43] proposed to only sample points near object surface for volume rendering. In this paper, we take it one step forward and propose to use just one single surface intersection point to render, in which case the volume rendering falls back to its simplest form and becomes equivalent to rendering a mesh extracted from NeRF. Compared to volume rendering, mesh rasterization is faster and con-sumes less GPU memory. 
We then propose a three-stage approach for 3D face style transfer, where we apply different 3D representation and rendering techniques in different stages to optimize for different loss objectives in consideration of their computation needs. In the first stage, we train a NeRF model to reconstruct 3D geometry from input face images, optimized by an RGB loss applied on a batch of randomly sampled pixels through volume rendering. In the second stage, we extract a mesh from the trained NeRF model, and stylize the mesh color from a style image. The mesh color is optimized by style transfer objectives applied on the full image rendered from differentiable mesh rasterization [15]. We generate 200 stylized meshes from 200 style images in a training dataset. In the third stage, we fix the geometry network weight of NeRF, and train a hyper network to predict the color network weight from a style image, to generalize for arbitrary style transfer. During each training iteration, we randomly sample a style image and its corresponding stylized mesh, and render a full image through mesh rasterization. The hyper network is then optimized by an RGB loss between a random batch of predicted pixels from NeRF's volume rendering, and the corresponding pixels from the mesh-rendered image. With the combination of NeRF and mesh rasterization, we are able to do 3D face style transfer at the original resolution of up to 2K.\nDuring mesh optimization, we observe that using the raw style image for the style transfer objectives usually leads to poor convergence due to the large difference in identity and head pose with the content images rendered at different view points. We therefore propose to generate pair data of stylized images with similar head pose and identity by applying a 2D style transfer model [38] on content images randomly rendered at different head pose angles. Mesh optimization with pair data shows better style transfer quality on the mesh.\nTo summarize, our contributions are:\n• We propose a novel three-stage approach which achieves arbitrary 3D face style transfer with good style transfer quality and 3D consistency.\n• We combine NeRF and mesh rasterization to optimize for different loss objectives, which enables 3D face style transfer at the original image resolution of up to 2K at a reasonable training cost.\n• We propose to generate pair data of stylized images to fill the gap of head pose and identity. Optimizing mesh colors with pair data shows better style transfer quality." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Novel View Synthesis", "publication_ref": [ "b10", "b6", "b0", "b34", "b39" ], "table_ref": [], "text": "Novel View Synthesis aims at synthesizing images at arbitrary viewpoints from a set of source images. Traditional approaches apply explicit 3D representations to model 3D scenes, such as 3D meshes [2, 6, 33, 36], 3D voxels [11,13,28,37], point clouds [1,22,25,35], depth maps [9,17]. They further combine the 3D geometry defined with explicit representations with appearance representations such as colors, texture maps, light fields or neural textures. The use of explicit 3D representations of geometry either requires supervision from ground-truth 3D representations or poses strong assumptions on the underlying 3D geometry.\nIn recent years, there have been advances in neural rendering approaches with neural radiance fields (NeRF) [23,40], where a 3D scene is represented implicitly by a multi-layer perceptron (MLP).
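Such an implicit MLP representation can be sketched in a few lines of PyTorch; the layer sizes are arbitrary and positional encoding is omitted, so this is a generic illustration rather than the architecture used by NeRF, NeuS or this paper.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Minimal radiance-field MLP: (x, y, z) + view direction -> (density, RGB)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)            # volume density
        self.color_head = nn.Sequential(                  # view-dependent colour
            nn.Linear(hidden + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir):
        feat = self.trunk(xyz)
        sigma = torch.relu(self.sigma_head(feat))         # keep density non-negative
        rgb = self.color_head(torch.cat([feat, view_dir], dim=-1))
        return sigma, rgb

model = TinyNeRF()
sigma, rgb = model(torch.rand(1024, 3), torch.rand(1024, 3))
print(sigma.shape, rgb.shape)   # torch.Size([1024, 1]) torch.Size([1024, 3])
```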
The MLP maps the 3D coordinate and camera view direction to an RGB value and density, and synthesizes a novel view via volume rendering, which aggregates the colors of sampled 3D points along a ray. NeRF produces high-quality novel view synthesis without the need for 3D supervision or assumptions on the 3D geometry. Follow-up works extend NeRF for faster training and inference, such as representing the 3D scene with a hash map [24] or an octree [19], followed by a reduced number of MLP layers to speed up. Other works extend NeRF to improve surface capture quality, such as NeuS [34]." }, { "figure_ref": [], "heading": "Human Face Style Transfer", "publication_ref": [], "table_ref": [], "text": "Given a content image of a human face and a reference style image, human face style transfer aims to synthesize a stylized image with the style of the style image and the structure of the content image. Traditional approaches for human face style transfer mainly focus on the 2D image domain. Some works realize human face style transfer with an image-to-image translation framework, where the main idea is to learn a bi-directional mapping between the real face domain and the artistic face domain [26,32,42]. In contrast to 2D approaches, our approach achieves style transfer in the 3D domain, with visually pleasing quality while preserving 3D consistency." }, { "figure_ref": [], "heading": "3D Scene Style Transfer", "publication_ref": [ "b4" ], "table_ref": [], "text": "There have been recent works [4, 5, 18, 39] on 3D scene style transfer which combine style transfer and novel view synthesis and aim to synthesize novel views with the style from a style image while preserving the underlying 3D structure. They mainly leverage NeRF [23] as the 3D representation for the scene. These works mainly apply to in-the-wild 3D scenes and transfer the color tone of style images. However, they cannot capture the detailed style patterns and semantics as required in human face style transfer. Further, to handle the training time and memory issues of NeRF, they propose solutions that may reduce style transfer quality, or fail to generalize to unseen styles. For example, [4,5] apply style transfer losses on small image patches during training, which degrades the style transfer quality as it cannot capture global semantic information. ARF [39] proposed deferred gradient descent to train on the full-resolution image, which significantly slows down training and makes learning multiple styles impossible in practice. In contrast to these works, our approach focuses on 3D human face style transfer and captures local details and semantics in style transfer. We propose a novel NeRF-mesh hybrid framework which enables fast training speed at the original image resolution and achieves good style transfer quality and 3D consistency." }, { "figure_ref": [], "heading": "Proposed Approach", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Overview", "publication_ref": [], "table_ref": [], "text": "As illustrated in Fig. 2, our approach consists of three stages: 1. geometry training stage, where we train a NeRF model to capture the 3D geometry of the real face; 2. mesh optimization stage, where we derive a mesh from the trained NeRF model, refine its color through inverse projection, and stylize it by optimizing for style transfer objectives with the pair data setting; 3. style training stage, where we train a hyper network to predict NeRF's color network weight from a style embedding extracted from a style image.
Details of each stage are presented in the following sections." }, { "figure_ref": [], "heading": "Geometry Training", "publication_ref": [], "table_ref": [], "text": "Neural Radiance Field (NeRF) [23] uses multilayer perceptron (MLP) networks to model a 3D scene as fields of volume density and colors. Given a pixel of an image for a 3D scene at a view direction, a ray from the pixel is emitted and several 3D points are sampled along the ray. For each 3D point, NeRF predicts its volume density and color by a geometry network and a color network. The geometry network of NeRF maps a 3D point to volume density and features. The color network of NeRF then maps the features from the geometry network and the view direction to an RGB color. The predicted color of the pixel is derived by volume rendering, which aggregates the color and volume density of the sampled points along the ray." }, { "figure_ref": [], "heading": "Mesh Optimization", "publication_ref": [ "b15" ], "table_ref": [], "text": "After training a NeuS model on input images, we use marching cubes [20] to export a face mesh from the trained SDF network. To optimize the face mesh, we apply differentiable rasterization [15] to render images from the mesh, and apply losses at the image level, where the gradients of the losses can be back-propagated to the mesh. Optimizing the topology of a 3D mesh from image supervision usually leads to suboptimal convergence, as analyzed in [16,29]. We therefore fix the vertex locations of the mesh and only optimize for vertex colors.\nMesh Refinement: The initial mesh from marching cubes generally contains some artifacts in the colors. This is because the color network of the NeuS model was trained with volume rendering, which aggregates colors along the ray to form the final color at a pixel, and thus the color at the surface point has some gap with the color seen in the image. We then refine the mesh color by optimizing an inverse projection problem,\n\arg\min_c \mathcal{L}_{rgb}\big(M \odot \phi_c(\theta),\, M \odot I_{gt}\big) \quad (1)\nwhere c is the vertex colors and ϕ_c(•) represents an image generator by mesh rasterization, parameterized by the vertex colors. I_gt is a random ground-truth image from the set of input images and θ is the corresponding view angle. M is the mesh segmentation mask. We optimize the mesh color on the input images with the masked RGB loss in an iterative process. After inverse projection, the mesh color is refined to be similar to that presented in the source images. We further remove the background by applying a foreground segmentation model [3] on the input images and trimming mesh vertices that are visible in the input images as background pixels. After mesh refinement and background removal, the resulting mesh mainly contains the human head and part of the upper body and has a photorealistic texture, which enables us to synthesize photorealistic images at different viewpoints to use as content images for 2D style transfer.\nFace Mesh Style Transfer: Given a refined face mesh and a style image, we aim to transfer the style from the style image to the face mesh through optimization. Naturally, we can view the face mesh as an image generator ϕ_c parameterized by vertex colors c that can generate content images of the face at arbitrary angles. We then apply style transfer objectives between the content images and the input style image to optimize the vertex colors c. For the style transfer objectives, we use a feature matching loss [10] and a contextual loss [21].
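Both the colour refinement in Eq. (1) above and the stylisation objectives introduced next optimise only the vertex colours through a differentiable renderer with frozen geometry. The PyTorch sketch below shows that shared optimisation pattern; the per-pixel vertex index map stands in for a real differentiable rasteriser such as nvdiffrast, and the L1 photo loss stands in for the actual RGB, feature-matching and contextual losses, so all tensors here are hypothetical placeholders rather than the paper's implementation.

```python
import torch

# --- toy stand-ins (in the real pipeline these come from the mesh + rasteriser) ---
num_verts, H, W = 5000, 128, 128
pix_to_vert = torch.randint(0, num_verts, (H, W))      # per-pixel visible vertex id for one view
target = torch.rand(H, W, 3)                           # e.g. a ground-truth or stylised image

vertex_colors = torch.rand(num_verts, 3, requires_grad=True)
optim = torch.optim.Adam([vertex_colors], lr=1e-2)

def render(colors, pix_to_vert):
    """Rendering with frozen geometry: each pixel reads the colour of its visible vertex.
    This lookup is differentiable w.r.t. the vertex colours, which is all that is optimised."""
    return colors[pix_to_vert]                         # (H, W, 3)

for step in range(200):
    optim.zero_grad()
    image = render(torch.sigmoid(vertex_colors), pix_to_vert)
    loss = (image - target).abs().mean()               # placeholder for RGB / style losses
    loss.backward()
    optim.step()

print(f"final loss: {loss.item():.4f}")
```

Because the geometry (and hence the pixel-to-vertex assignment) is fixed, gradients flow only into the vertex colours, mirroring the choice of optimising colours but not vertex locations.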
Combining the feature matching and contextual losses brings in our initial optimization objective below.\n\arg\min_c \mathcal{L}_{fm}\big(\phi_c(\theta), I_{style}\big) + \mathcal{L}_{cx}\big(\phi_c(\theta), I_{style}\big) \quad (2)\nwhere I_style is the style image, and θ is the view angle of the mesh randomly sampled from a semi-sphere in each iteration of optimization.\nHowever, this initial objective could not optimize the mesh color to have good style transfer quality. We find that this is because of a large gap in identity and head pose between the mesh-rendered images and the style image. The mesh-rendered images always resemble the identity of the input images, which is different from the style image. And the mesh-rendered images have diverse head poses that can be largely different from the style image. Therefore, we propose to optimize with pair data that has similar identity and head pose.\nMore specifically, instead of using a fixed style image I_style for an arbitrary content image ϕ_c(θ), we use a 2D style transfer model, DualStyleGAN [38] ψ(•), to generate a stylized image from a content image and a style image that has a similar head pose and identity to the content image.\n\arg\min_c \mathcal{L}_{fm}\big(\phi_c(\theta), \psi(\phi_c(\theta), I_{style})\big) + \mathcal{L}_{cx}\big(\phi_c(\theta), \psi(\phi_c(\theta), I_{style})\big) \quad (3)\nDuring optimization, for each iteration, we randomly sample a view angle θ from a semi-sphere, render an image ϕ_c(θ) from the mesh, and generate a stylized image ψ(ϕ_c(θ), I_style) from 2D style transfer. The vertex color is optimized by the feature matching loss and the contextual loss between these two.\nThe stylized images are generated from 2D style transfer and could contain 3D inconsistencies. As we are fixing the vertex locations, the 3D consistency of the optimized mesh is guaranteed, and the optimization objectives only supervise the style of the mesh and avoid potential 3D inconsistencies from the generated stylized images. After optimization, we obtain a stylized face mesh from a style image. With mesh rasterization, the optimization is fast and only takes 2 minutes per mesh." }, { "figure_ref": [], "heading": "Style Training", "publication_ref": [], "table_ref": [], "text": "In this stage, we would like to generalize the color network of the NeuS model for arbitrary style transfer. For this purpose, it should be trained with multiple seen styles so that it can generalize to unseen styles. Therefore, we generate 200 stylized meshes corresponding to 200 different style images to use as our ground-truth generators for training.\nWe modulate the weight of the color network in the NeuS model by a hyper network Ω(•) whose input is a style embedding extracted from a style image by a PSP style encoder [30]. Given different style images, the hyper network is capable of generating different color network weights to render different stylized outputs.\nWe freeze the SDF network from stage 1 to reuse the learned 3D geometry, and only train the hyper network. We train with an RGB loss supervised by the stylized meshes from stage 2. For each iteration, we randomly sample a style image I_style, its corresponding stylized mesh ϕ(•), a view angle θ on a semi-sphere and a batch of pixels. We query the hyper network with the style embedding to generate the weight of the color network and render the colors of the sampled pixels through volume rendering. For RGB supervision, we use the stylized mesh ϕ(•) to render an image from the same view angle θ.
Formally,\n\arg\min_{\Omega} \mathcal{L}_{rgb}\big(\Omega(z_{style}, \theta), \phi(\theta)\big) \quad (4)\nwhere z_style represents a style embedding from a style image, and Ω(z_style, θ) represents a batch of pixels from hyper network rendering.\nAt test time, the trained hyper network can be used for arbitrary style transfer. With a style image, we extract its style embedding and predict the weight of the color network. The predicted color network and the pretrained SDF network are then used to generate stylized novel views through volume rendering, with the style of the style image applied." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [ "b4" ], "table_ref": [], "text": "Dataset We collect a video dataset of 8 subjects, where each of them records a video of 10-15 seconds of 300-500 frames at 30 FPS. The videos are further processed with COLMAP [31] to estimate camera intrinsics and poses for every video frame. For style transfer, we use a cartoon dataset [27] with 317 cartoon images. We use 200 images during training and hold out the remaining 117 images as unseen styles to evaluate the generalizability of our approach. For each subject, we train a separate model for arbitrary style transfer on this subject. For a single model, the training can be finished in 23 hours, with 7 hours for stage 1, 6 hours for stage 2 and 10 hours for stage 3.\nMethods for Comparison We compare our approach with state-of-the-art 3D scene style transfer approaches (UPST [4], Stylizing 3D Scene [5], ARF [39]), and two baselines: 1. 2D style transfer → NeuS, where we first run 2D image style transfer [38] on the input images and then train a NeuS model directly on top of the stylized source images; 2. NeuS → 2D style transfer, where we first train a NeuS model on top of the source images to synthesize novel views for the real human face, and then apply 2D style transfer on top of the synthesized novel view images." }, { "figure_ref": [ "fig_2" ], "heading": "Qualitative Results", "publication_ref": [ "b4", "b4" ], "table_ref": [], "text": "We compare our approach and 3D scene style transfer approaches (UPST [4], Stylizing 3D Scene [5], ARF [39]) qualitatively in Fig. 3. Among the 3D scene style transfer approaches, UPST [4] is significantly under-stylized and produces poor novel view synthesis on the side view. Stylizing 3D Scene [5] generates 3D-consistent frontal and side views, but can only transfer the overall color tone and has artifacts in the background. ARF [39] applies stronger style transfer than the other two approaches, but loses details in the facial structure and contains blurriness. The compared 3D scene style transfer approaches only transfer the overall color tone of the style image and fail to capture the semantics of the face, whereas our approach transfers the colors of hair, skin and lips well and also achieves good 3D consistency." }, { "figure_ref": [ "fig_5" ], "heading": "Quantitative Results", "publication_ref": [ "b13", "b4" ], "table_ref": [], "text": "Consistency Measurement We use the short range consistency error and the long range consistency error from [14] to measure the 3D consistency between stylized images at different viewpoints, aligned with the other 3D scene style transfer approaches. The consistency error is implemented by a warped LPIPS metric [41], where a view is warped to another view with a depth estimation.\nE(V_i, V_j) = \mathrm{LPIPS}\big(M_{ij} \odot V_i,\, M_{ij} \odot f^{w}_{ij}(V_j)\big) \quad (5)\nwhere E(V_i, V_j) is the consistency error between view i and view j, f^w_ij is the warping function and M_ij is the warping mask.
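A minimal sketch of this warped-LPIPS consistency error is given below; it assumes the off-the-shelf lpips package is installed and that the warped view f^w_ij(V_j) and the validity mask M_ij have already been computed from the estimated depth — the random tensors are placeholders standing in for real rendered views.

```python
import torch
import lpips

loss_fn = lpips.LPIPS(net="alex")   # perceptual distance used as the consistency error

def consistency_error(view_i, warped_view_j, mask):
    """E(V_i, V_j) = LPIPS(M_ij * V_i, M_ij * warp(V_j)); images in [-1, 1], shape (1, 3, H, W)."""
    return loss_fn(view_i * mask, warped_view_j * mask).item()

# placeholder inputs standing in for a rendered view, its warped neighbour and the warp mask
view_i = torch.rand(1, 3, 256, 256) * 2 - 1
warped_view_j = torch.rand(1, 3, 256, 256) * 2 - 1
mask = (torch.rand(1, 1, 256, 256) > 0.1).float()       # 1 where the warp is valid

print(consistency_error(view_i, warped_view_j, mask))
```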
When computing the LPIPS metric, only the pixels within the warping mask are taken. For short range consistency, the consistency error is computed between all adjacent frames in the testing video. For long range consistency, the consistency error is computed with all view pairs with a gap of 7 frames.\nTable 1 shows that our approach outperforms the compared approaches by an order of magnitude in both short range and long range consistency. The large improvement in 3D consistency benefits from our multi-stage training where explicit mesh guidance is applied. Among the other approaches, NeuS → 2D style transfer has the lowest 3D consistency as it absorbs most of the 3D consistency issues from 2D style transfer. Other NeRF-based approaches show better 3D consistency but are still significantly worse than our approach, as they do not have the explicit mesh guidance that strengthens the 3D consistency in ours.\nUser Study We perform a user study to evaluate the style transfer quality and 3D consistency between different approaches. We compare our approach with four different approaches (Style to NeuS, ARF [39], Stylizing 3D Scene [5] and UPST [4]). For each comparison, we generate videos of two approaches for two identities (four videos in total). For each identity in a comparison, we ask users to make two selections: 1. select the video of better style transfer quality; 2. select the video of better 3D consistency. We collect votes from 20 participants per comparison, in total 320 votes (320 = 4 comparisons × 20 participants × 2 identities × 2 questions). Results are shown in Fig. 5. Our approach outperforms the other approaches in both style transfer quality and 3D consistency." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [], "text": "Mesh Optimization without pair data To show the effectiveness of our pair data setting during the mesh optimization stage, we run an ablation study and show that without the pair data setting, the mesh optimization could not converge well, due to the large identity and head pose gap between the style image and the content image from mesh rendering. The visualization can be seen in Fig. 6." }, { "figure_ref": [ "fig_3" ], "heading": "Application", "publication_ref": [], "table_ref": [], "text": "Style Blending Our approach can perform smooth style blending between two styles by interpolating between the two embeddings of the style images, generating smooth and harmonious style transfer of a mixed style blended from two style images, as shown in Fig. 4. This allows the creation of non-existent styles by blending two styles.\nUnseen Style Our approach trains a hyper network to generalize over multiple styles, hence it is capable of generalizing to style images unseen in training, as illustrated in Fig. 7. This allows a broader use of our approach on arbitrary cartoon images for 3D human face style transfer." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a novel three-stage approach that achieves 3D face style transfer with good style quality and 3D consistency. We present a hybrid training strategy with volume rendering and mesh rasterization which enables style transfer at the original image resolution. We design a novel mesh optimization stage where we propose a pair data setting to generate decent stylized meshes. We train a hyper network on stylized meshes to generalize for arbitrary style transfer. Our experiments demonstrate that our approach outperforms baseline approaches in terms of style quality and 3D consistency, quantitatively and qualitatively, and is also capable of performing smooth and harmonious style blending as well as generalizing to unseen styles.\nFigure 6. Comparison of mesh optimization with/without pair data setting. Figure 7. Our approach can generalize to unseen style images and generate style transfer with decent quality and 3D consistency." } ]
Figure 1. Given a set of multi-view input images of a human face (a), our approach reconstructs a 3D human face, transfers the style of a style image (b) to it and generates 3D consistent stylized novel views of the face (c).
3D Face Style Transfer with a Hybrid Solution of NeRF and Mesh Rasterization
[ { "figure_caption": ". The other line of work falls on modifying and finetuning styleGAN [12]. Pinkney and Adler [27] first finetuned StyleGAN on cartoon data and achieved cartoon style transfer by simply applying the latent code in original StyleGAN to finetuned cartoon StyleGAN. Kwong et al. [8] further swapped the convolutional layer features between original styleGAN and a finetuned cartoon styleGAN to achieve style transfer. Dual-StyleGAN [38] modified the architecture of StyleGAN by introducing explicit extrinsic style path to have a deeper control on the style transfer. As these approaches focus on 2D image domain, they usually show 3D inconsistency issue when applied on multi view images of the same face.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure2. Overview of our approach. Our approach is in 3 stages: 1. Geometry training to learn the 3D geometry of a human face; 2. Mesh optimization to refine mesh colors and transfer style from a style image to the mesh; 3. Style training to train a hyper network conditioned on style image to generalize to arbitrary style.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Qualitative Comparisons of transferring style in a style image (a) to input views (b). Our approach (f) shows better style transfer quality and 3D consistency compared to other 3D scene style transfer approaches (UPST [4] (c), Stylizing 3D Scene [5] (d), ARF [39] (e))", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Style blending, our approach can interpolate between two styles and generate a mixed style of both. We show two rows of examples with style gradually changing from style 1 to style 2.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. User study in style transfer quality and 3D consistency.We ask the users to select the approach with better style quality or 3D consistency.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Quantitative Comparison on short range and long range 3D consistency error. Our approach outperforms the compared approaches by a magnitude.", "figure_data": "Short RangeLong RangeMethodConsistency ErrorConsistency Error(LPIPS ×10 -2 ↓)(LPIPS×10 -2 ↓)2D style transfer → NeuS1.213.47NeuS → 2D style transfer3.235.07UPST [4]1.714.14Stylizing 3D Scene [5]1.202.06ARF [39]1.885.12Ours0.290.38", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Jianwei Feng Amazon; Prateek Singhal
[ { "authors": "Kara-Ali Aliev; Artem Sevastopolsky; Maria Kolos; Dmitry Ulyanov; Victor Lempitsky", "journal": "Springer", "ref_id": "b0", "title": "Neural point-based graphics", "year": "2020" }, { "authors": "Chris Buehler; Michael Bosse; Leonard Mcmillan; Steven Gortler; Michael Cohen", "journal": "", "ref_id": "b1", "title": "Unstructured lumigraph rendering", "year": "2001" }, { "authors": "Xiangguang Chen; Ye Zhu; Yu Li; Bingtao Fu; Lei Sun; Ying Shan; Shan Liu", "journal": "", "ref_id": "b2", "title": "Robust human matting via semantic guidance", "year": "2022" }, { "authors": "Yaosen Chen; Qi Yuan; Zhiqiang Li; Yuegen Liu; Wei Wang; Chaoping Xie; Xuming Wen; Qien Yu", "journal": "", "ref_id": "b3", "title": "Upstnerf: Universal photorealistic style transfer of neural radiance fields for 3d scene", "year": "2022" }, { "authors": "Pei-Ze Chiang; Meng-Shiun Tsai; Hung-Yu Tseng; Wei-Sheng Lai; Wei-Chen Chiu", "journal": "", "ref_id": "b4", "title": "Stylizing 3d scene via implicit representation and hypernetwork", "year": "2022" }, { "authors": "Camillo J Paul E Debevec; Jitendra Taylor; Malik", "journal": "", "ref_id": "b5", "title": "Modeling and rendering architecture from photographs: A hybrid geometry-and image-based approach", "year": "1996" }, { "authors": "Fangzhou Hong; Mingyuan Zhang; Liang Pan; Zhongang Cai; Lei Yang; Ziwei Liu", "journal": "", "ref_id": "b6", "title": "Avatarclip: Zero-shot textdriven generation and animation of 3d avatars", "year": "2022" }, { "authors": "Jialu Huang; Jing Liao; Sam Kwong", "journal": "IEEE Transactions on Multimedia", "ref_id": "b7", "title": "Unsupervised image-to-image translation via pre-trained stylegan2 network", "year": "2021" }, { "authors": "Po-Han Huang; Kevin Matzen; Johannes Kopf; Narendra Ahuja; Jia-Bin Huang", "journal": "", "ref_id": "b8", "title": "Deepmvs: Learning multi-view stereopsis", "year": "2018" }, { "authors": "Xun Huang; Serge Belongie", "journal": "", "ref_id": "b9", "title": "Arbitrary style transfer in real-time with adaptive instance normalization", "year": "2017" }, { "authors": "Mengqi Ji; Juergen Gall; Haitian Zheng; Yebin Liu; Lu Fang", "journal": "", "ref_id": "b10", "title": "Surfacenet: An end-to-end 3d neural network for multiview stereopsis", "year": "2017" }, { "authors": "Tero Karras; Samuli Laine; Timo Aila", "journal": "", "ref_id": "b11", "title": "A style-based generator architecture for generative adversarial networks", "year": "2019" }, { "authors": "N Kiriakos; Steven M Kutulakos; Seitz", "journal": "IEEE", "ref_id": "b12", "title": "A theory of shape by space carving", "year": "1999" }, { "authors": "Wei-Sheng Lai; Jia-Bin Huang; Oliver Wang; Eli Shechtman; Ersin Yumer; Ming-Hsuan Yang", "journal": "", "ref_id": "b13", "title": "Learning blind video temporal consistency", "year": "2018" }, { "authors": "Samuli Laine; Janne Hellsten; Tero Karras; Yeongho Seol; Jaakko Lehtinen; Timo Aila", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b14", "title": "Modular primitives for high-performance differentiable rendering", "year": "2020" }, { "authors": "Yiyi Liao; Simon Donne; Andreas Geiger", "journal": "", "ref_id": "b15", "title": "Deep marching cubes: Learning explicit surface representations", "year": "2018" }, { "authors": "Fayao Liu; Chunhua Shen; Guosheng Lin; Ian Reid", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b16", "title": "Learning depth from single monocular images using deep convolutional neural fields", "year": "2015" }, { 
"authors": "Kunhao Liu; Fangneng Zhan; Yiwen Chen; Jiahui Zhang; Yingchen Yu; Abdulmotaleb El Saddik; Shijian Lu; Eric Xing", "journal": "", "ref_id": "b17", "title": "Stylerf: Zero-shot 3d style transfer of neural radiance fields", "year": "2023" }, { "authors": "Lingjie Liu; Jiatao Gu; Kyaw Zaw Lin; Tat-Seng Chua; Christian Theobalt", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b18", "title": "Neural sparse voxel fields", "year": "2020" }, { "authors": "E William; Harvey E Lorensen; Cline", "journal": "ACM siggraph computer graphics", "ref_id": "b19", "title": "Marching cubes: A high resolution 3d surface construction algorithm", "year": "1987" }, { "authors": "Roey Mechrez; Itamar Talmi; Lihi Zelnik-Manor", "journal": "", "ref_id": "b20", "title": "The contextual loss for image transformation with non-aligned data", "year": "2018" }, { "authors": "Moustafa Meshry; Dan B Goldman; Sameh Khamis; Hugues Hoppe; Rohit Pandey; Noah Snavely; Ricardo Martin-Brualla", "journal": "", "ref_id": "b21", "title": "Neural rerendering in the wild", "year": "2019" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Communications of the ACM", "ref_id": "b22", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2021" }, { "authors": "Thomas Müller; Alex Evans; Christoph Schied; Alexander Keller", "journal": "ACM Transactions on Graphics (ToG)", "ref_id": "b23", "title": "Instant neural graphics primitives with a multiresolution hash encoding", "year": "2022" }, { "authors": "Simon Niklaus; Long Mai; Jimei Yang; Feng Liu", "journal": "ACM Transactions on Graphics (ToG)", "ref_id": "b24", "title": "3d ken burns effect from a single image", "year": "2019" }, { "authors": "Ori Nizan; Ayellet Tal", "journal": "", "ref_id": "b25", "title": "Breaking the cycle-colleagues are all you need", "year": "2020" }, { "authors": "N M Justin; Doron Pinkney; Adler", "journal": "", "ref_id": "b26", "title": "Resolution dependent gan interpolation for controllable image synthesis between domains", "year": "2020" }, { "authors": "Hao Charles R Qi; Matthias Su; Angela Nießner; Mengyuan Dai; Leonidas J Yan; Guibas", "journal": "", "ref_id": "b27", "title": "Volumetric and multi-view cnns for object classification on 3d data", "year": "2016" }, { "authors": "Edoardo Remelli; Artem Lukoianov; Stephan Richter; Benoit Guillard; Timur Bagautdinov; Pierre Baque; Pascal Fua", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b28", "title": "Meshsdf: Differentiable iso-surface extraction", "year": "2020" }, { "authors": "Elad Richardson; Yuval Alaluf; Or Patashnik; Yotam Nitzan; Yaniv Azar; Stav Shapiro; Daniel Cohen-Or", "journal": "", "ref_id": "b29", "title": "Encoding in style: a stylegan encoder for image-to-image translation", "year": "2021" }, { "authors": "Johannes Lutz; Schönberger ; Jan-Michael Frahm", "journal": "", "ref_id": "b30", "title": "Structure-from-motion revisited", "year": "2016" }, { "authors": "Xuning Shao; Weidong Zhang", "journal": "", "ref_id": "b31", "title": "Spatchgan: A statistical feature based discriminator for unsupervised image-to-image translation", "year": "2021" }, { "authors": "Michael Waechter; Nils Moehrle; Michael Goesele", "journal": "Springer", "ref_id": "b32", "title": "Let there be color! 
large-scale texturing of 3d reconstructions", "year": "2014" }, { "authors": "Peng Wang; Lingjie Liu; Yuan Liu; Christian Theobalt; Taku Komura; Wenping Wang", "journal": "", "ref_id": "b33", "title": "Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction", "year": "2021" }, { "authors": "Olivia Wiles; Georgia Gkioxari; Richard Szeliski; Justin Johnson", "journal": "", "ref_id": "b34", "title": "Synsin: End-to-end view synthesis from a single image", "year": "2020" }, { "authors": " Daniel N Wood; Ken Daniel I Azuma; Brian Aldinger; Tom Curless; Duchamp; Werner David H Salesin; Stuetzle", "journal": "", "ref_id": "b35", "title": "Surface light fields for 3d photography", "year": "2000" }, { "authors": "Zhirong Wu; Shuran Song; Aditya Khosla; Fisher Yu; Linguang Zhang; Xiaoou Tang; Jianxiong Xiao", "journal": "", "ref_id": "b36", "title": "3d shapenets: A deep representation for volumetric shapes", "year": "2015" }, { "authors": "Shuai Yang; Liming Jiang; Ziwei Liu; Chen Change Loy", "journal": "", "ref_id": "b37", "title": "Pastiche master: exemplar-based high-resolution portrait style transfer", "year": "2022" }, { "authors": "Kai Zhang; Nick Kolkin; Sai Bi; Fujun Luan; Zexiang Xu; Eli Shechtman; Noah Snavely", "journal": "Springer", "ref_id": "b38", "title": "Arf: Artistic radiance fields", "year": "2022" }, { "authors": "Kai Zhang; Gernot Riegler; Noah Snavely; Vladlen Koltun", "journal": "", "ref_id": "b39", "title": "Nerf++: Analyzing and improving neural radiance fields", "year": "2020" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b40", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "Yihao Zhao; Ruihai Wu; Hao Dong", "journal": "Springer", "ref_id": "b41", "title": "Unpaired imageto-image translation using adversarial consistency loss", "year": "2020" }, { "authors": "Yufeng Zheng; Victoria Fernández Abrevaya; Marcel C Bühler; Xu Chen; Michael J Black; Otmar Hilliges", "journal": "", "ref_id": "b42", "title": "Im avatar: Implicit morphable head avatars from videos", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 351.5, 586.94, 193.61, 9.68 ], "formula_id": "formula_0", "formula_text": "argmin c L rgb (M ⊙ ϕ c (θ), M ⊙ I gt )(1)" }, { "formula_coordinates": [ 5, 61.8, 704.17, 224.57, 9.68 ], "formula_id": "formula_1", "formula_text": "argmin c L f m (ϕ c (θ), I style ) + L cx (ϕ c (θ), I style ) (2)" }, { "formula_coordinates": [ 6, 60.08, 99.17, 226.29, 24.63 ], "formula_id": "formula_2", "formula_text": "argmin c L f m (ϕ c (θ), ψ(ϕ c (θ), I style ))+ L cx (ϕ c (θ), ψ(ϕ c (θ), I style ))) (3)" }, { "formula_coordinates": [ 6, 99.57, 638.96, 186.79, 9.65 ], "formula_id": "formula_3", "formula_text": "argmin Ω L rgb (Ω(z style , θ), ϕ(θ))(4)" }, { "formula_coordinates": [ 7, 64.48, 524.95, 221.88, 12.69 ], "formula_id": "formula_4", "formula_text": "E(V i , V j ) = LP IP S(M ij ⊙ V i , M ij ⊙ f w ij (V j ))(5)" } ]
10.24963/ijcai.2022/344
2023-11-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b0", "b1", "b2", "b4", "b5", "b6", "b7", "b8", "b10", "b6", "b7", "b6", "b7", "b11", "b6", "b7" ], "table_ref": [], "text": "Learning with noisy labels (LNL) Song et al. [2022], multi-rater learning Ji et al. [2021], and human-AI collaboration Dafoe et al. [2021] have advanced the development of robust classifiers to tackle data imperfections and complex decision processes common in real-world applications. While techniques for addressing each of these three challenges have made significant progress, it is worth noting the absence of approaches that can simultaneously handle all three problems. Real-world datasets often contain images annotated with multiple noisy labels due to the inherent uncertainty in labelling complex images, such as in breast cancer screening mammogram datasets Halling-Brown et al. [2020], where each image may require two to three noisy labels from experts, depending on case difficulty. Solutions addressing all three challenges would be highly effective, enabling the training of robust classifiers using multiple noisy labels per image. During testing, such classifiers could dynamically request input from a variable number of experts to achieve efficient and accurate collaborative classification.\nLNL methods Song et al. [2022] focus on developing AI models capable of handling imperfect, ambiguous, or erroneous training labels that are common in real-world datasets. However, they often overlook multi-rater labels and human-AI Figure 1: This paper presents a new benchmark and methodology that addresses challenges combining noisy-label learning, multi-rater learning, and human-AI collaboration. Our method, LECOMH, uses a dataset with multiple noisy labels per image (see multi-rater annotation example on the left) to train an AI classifier (noisy-label learning), a consensus labeller (multi-rater learning), a Human AI Selection module that estimates the number (m ∈ [0, M ]) of human predictions needed for efficient and accurate human-AI collaboration during testing, and a Collaboration module that produces the final prediction. collaboration for both training and testing. In contrast, multi-rater learning Ji et al. [2021] acknowledges multiple noisy labels per training image, but fails to consider the potential collaboration between AI models and experts during testing.\nHuman-AI collaboration Dafoe et al. [2021] leverages the complementary strengths of human annotators and AI algorithms during testing. Such a strategy can be categorised into: learning to defer [Raghu et al., 2019, Madras et al., 2018, Hemmer et al., 2022, Verma et al., 2023], learning to complement [Wilder et al., 2021], human-in-the-loop [Wu et al., 2022a], and algorithm-in-the-loop [Green and Chen, 2019]. Our paper particularly focuses on learning to defer and learning to complement techniques. Previous methods exploring these two techniques often assume the availability of ground truth labels, a rarity in real-world datasets, which restricts the application of these methods. Also, except for Hemmer et al. [2022], Verma et al. [2023], all approaches rely on single human predictions, reflecting the fact that multi-rater learning has been largely disregarded. Partial exceptions are the methods in Hemmer et al. [2022], Verma et al. 
[2023] that learn to defer to multiple experts, but they still require clean-label samples for training, limiting their application in real-world problems.\nThis paper addresses the research gap exposed above by introducing novel benchmarks and a new methodology to leverage the synergistic potential of noisy-label learning, multi-rater learning, and human-AI collaboration. Our innovative approach (shown in Figure 1) is called Learning to Complement with Multiple Humans (LECOMH) and produces a classification method that is trained to have low human-AI collaboration cost, while producing highly accurate results. In summary, our key contributions are:\n• The innovative LECOMH methodology to seamlessly integrate noisy-label learning, multi-rater learning, and human-AI collaboration techniques to address the challenges posed by multiple noisy labels per training image and complex decision processes with a training process that maximises the human-AI collaboration accuracy and minimises its costs;\n• New benchmarks to test complex human-AI collaboration decision models trained with multi-rater noisylabel learning techniques, paving the way for more comprehensive performance evaluation for real-world applications.\nOur experiments evaluate LECOMH against state-of-the-art (SOTA) human-AI collaboration approaches Mozannar et al. [2023], Hemmer et al. [2022], Verma et al. [2023] using our newly introduced benchmarks. LECOMH consistently demonstrates superior performance than competition, with higher accuracy for equivalent values of collaboration costs, measured by the number of labels provided by humans. Remarkably, LECOMH is the only method that improves the accuracy of human labellers across all benchmarks." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b29", "b37", "b38", "b39", "b40", "b1", "b41", "b42", "b44", "b45", "b8", "b55", "b62", "b64", "b65" ], "table_ref": [], "text": "Learning with Noisy Labels (LNL) approaches are currently based on: robust loss functions Garg et al. [2023], or optimise the utility of clean samples through a matched high-confidence selection technique Wang et al. [2022].\nMulti-rater Learning can be categorised into two types. One type learns calibrated results for all raters to account for inter-observer variabilities Raykar et al. [2009], Guan et al. [2018], Mirikharaji et al. [2021], Ji et al. [2021]. The other type learns annotator-specific confidence confusion matrices to replicate the labeling processes of annotators, and also finds potentially correct labels Khetan et al. [2017], Tanno et al. [2019], Wu et al. [2022b], Cao et al. [2023]. To integrate the strengths of both approaches, Wu et al. [2022b] proposed a joint learning of the multi-rater confidences assignment task and calibrated segmentation task. Furthermore, considering that previous approaches assume a uniform set of parameters for annotator errors across all samples, Gao et al. [2022] In addition to L2D, other HAICO strategies are worth mentioning. Learning to complement Wilder et al. [2021] optimises the AI model only on the tasks that are challenging for humans. The human-centred training of AI systems is proposed in Bansal et al. [2021] to maximise the expected utility of the human-AI team. Liu et al. [2023] leverages perceptual differences between humans and AI to make a human-AI system outperform humans or machines alone. 
Multi-agent reinforcement learning trains an agent that can cooperate with humans to make confident predictions Strouse et al.\n[2021], Carroll et al. [2019], Yu et al. [2023]. Similar to L2D methods, the approaches above do not integrate noisy label multi-rater learning techniques. Furthermore, these approaches do not consider the complex decision process setting, where multiple users can be available for training and testing.\n3 Learning to Complement with Multiple Humans (LECOMH)\nLet D = {x i , M i } |D| i=1\nbe the noisy-label training set, where x i ∈ X ⊂ R H×W ×R denotes an input image of size H ×W and R channels, and M i = {m i,j } M j=1 denotes the M experts' noisy annotations per image, with m i,j ∈ Y ⊂ {0, 1} |Y| being a one-hot label. Step 3\nStep 2\nStep 1 Our methodology contains (Figure 1, on the right): 1) an AI Prediction Module pre-trained with LNL techniques to enable the production of a training sample consensus label by the multi-rater learning approach CROWDLAB Goh et al.\n[2022], 2) a Human-AI Selection Module that predicts the collaboration format (i.e., AI alone, AI + 1 user, AI + 2 users, etc.), and 3) a Collaboration Module that aggregates the predictions selected by the Human-AI Selection Module to produce a final prediction. We explain the training and testing processes below." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Training", "publication_ref": [ "b66" ], "table_ref": [], "text": "LECOMH maximises classification accuracy and minimises collaboration costs in a human-AI collaborative setting, where cost is related to the number of user-provided labels. As shown in Figure 2, our training has three phases: pre-training the AI Prediction Module using an LNL approach, generation of consensus labels for the training set using the multi-rater learning CROWDLAB method Goh et al. \nD c = {(x i , ŷc i , M i )|(x i , M i ) ∈ D, (ŷ i , α i ) = CrowdLab(x i , f θ (x i ), M i ), α i > 0.5},(1)\nwhich is used for LECOMH training, as explained below.\nLECOMH training and testing: the proposed LECOMH comprises the Human-AI Selection Module and the Collaboration Module, as shown in Figure 2. The Human-AI Selection Module, represented by g ϕ : X → ∆ M , predicts a categorical distribution of the probability of having an isolated AI prediction (1st dimension) or a combined prediction between AI and multiple users (2nd dimension: AI +1 user, ..., M+1st dimension: AI + M users). The Collaboration Module, defined by\nh ψ : ∆ |Y|-1 × ... × ∆ |Y|-1 M +1 times → ∆ |Y|-1\n, takes the AI prediction in the first input, and the remaining user predictions (from 0 to M users) selected by g ϕ (.) to produce the final classification.\nThe training for the Human-AI Selection Module and the Collaboration Module relies on the following optimisation:\nϕ * , ψ * = arg min ϕ,ψ 1 |D c | (xi,ŷ c i ,Mi)∈D c ℓ (ŷ c i , h ψ (p (g ϕ (x i ), f θ (x i ), shf(M i )))) + λ × cost(g ϕ (x i )),(2)\nwhere ℓ(.) is the cross-entropy (CE) loss, λ is a hyper-parameter that weights the cost function,\np (g ϕ (x), f θ (x), shf(M)) =          [f θ (x), 0 |Y| , ..., 0 |Y| ] if max j g (j) ϕ (x) = g (1) ϕ (x) [f θ (x), m i,1 , ..., 0 |Y| ] if max j g (j) ϕ (x) = g (2) ϕ (x) ... [f θ (x), m i,1 , ..., m i,M ] if max j g (j) ϕ (x) = g (M +1) ϕ (x) ,(3) with g (j)\nϕ (.) 
denoting the j th output from the Human-AI Selection Module and shf(M) representing a function that shuffles the experts' annotations (so the training is not biased to any specific experts' annotations), and\ncost(g ϕ (x)) = M +1 j=1 g (j) ϕ (x) × (j -1),(4)\nwhich means that when the AI model provides the prediction alone, we have max j g (j)\nϕ (x) = g\n(1) ϕ (x), which implies cost(g ϕ (x)) ≈ 0, but as max j g (j)\nϕ (x) = g (K) ϕ (x) for K ∈ [2, M ],\nwe have cost(g ϕ (x)) ≈ K -1. In other words, the cost in Eq 4 assumes a constant cost of one unit per expert's annotation.\nThe Human-AI training is explained in 3 steps, as depicted in Figure 2. The first step consists of concatenating the AI prediction by f θ (.) and the experts' labels in M. The second step, consists of selecting the collaboration format (AI alone, AI + 1 user, ..., AI + M users) based on the Human-AI Selection Module prediction, as shown in Eq 3. Here, we need to select such a collaboration based on max j g (j) ϕ (x), which is non-differentiable. To make it differentiable, we replace this operation with the Gumbel softmax Jang et al. [2016], which produces a continuous distribution on the simplex ∆ M that approximates categorical samples, and whose parameter gradients can be computed via the re-parameterisation trick. The third step consists of training the Collaboration Module to make decisions using the selected AI and expert annotations." }, { "figure_ref": [], "heading": "Testing", "publication_ref": [], "table_ref": [], "text": "Testing starts from the AI prediction, followed by the Human-AI Selection Module prediction of the categorical distribution of the probability of the AI model running alone or collaborating with a set of K ∈ [1, M ] users (where cost = K). After deciding on the number of users to collaborate, using Gumbel softmax on g ϕ (x), we randomly select testing users, and concatenate their predictions with the AI prediction to serve as input to the Collaboration Module, which outputs the final classification." }, { "figure_ref": [], "heading": "New Benchmarks", "publication_ref": [ "b67", "b1", "b70", "b36" ], "table_ref": [], "text": "We propose three new benchmarks containing multiple noisy labels for the training and testing sets to assess the ability of training human-AI collaborative models with multiple noisy labels per image, and testing with the collaboration of multiple users.\nNew CIFAR-10 Benchmarks. We introduce two new benchmarks using the CIFAR-10 dataset Krizhevsky et al. [2009], comprising 50K training images and 10K testing images of size 32 × 32. The first benchmark uses CIFAR- 10N Wei et al. [2021] for training and CIFAR-10H Peterson et al. [2019] for testing. CIFAR-10N Wei et al. [2021] has three noisy human annotations for each image of the CIFAR-10 training set, while CIFAR-10H Peterson et al. [2019] provides 51 noisy human labels per image for the CIFAR-10 testing set. Given the limitation of three labels per sample in the training of CIFAR-10N, we also limit the testing process to allow for collaboration with at most three users, which are randomly sampled from the pool of 51 users of CIFAR-10H. Inspired by the benchmark proposed by Xia et al. [2021], our second benchmark, called multi-rater CIFAR10-IDN, is a new training and testing multi-rater instance-dependent noise benchmark. The label noise rates are set to be in {0.2, 0.3, 0.4, 0.5} for both training and testing sets. 
For each noise rate, we generate three distinct noisy labels by employing different random seeds, simulating varying human predictions with similar error rates.\nNew Chaoyang Benchmark. The Chaoyang dataset encompasses 6160 colon slides represented as patches of size 512 × 512 Zhu et al. [2021]. Each patch received three noisy expert annotations. Originally, this dataset had a training set with 4021 patches for training and 2139 patches for testing, where training patches contained multi-rater noisy labels (i.e., many patches had multiple different labels), but testing only contained patches where all experts agreed on a single label. We re-structured this dataset to build a new benchmark where both training and testing sets contained multiple noisy labels, where we guarantee that patches from the same slide do not appear in both training and testing sets. Specifically, we re-shuffled the entire dataset, partitioning the original 6160 patches into 4725 patches for training and 1435 patches for testing. In this new partition, both training and testing sets contain multi-rater noisy labels patches, where training has 862 patches with 2 out of 3 consensual labels and 3862 patches with 3 out of 3 consensual labels, while testing has 449 patches with 2 out of 3 consensual labels and 986 patches with 3 out of 3 consensual labels." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b71", "b37", "b1", "b29", "b11", "b59", "b11", "b4", "b5", "b11", "b11", "b6", "b7", "b7" ], "table_ref": [], "text": "Architecture. All methods are implemented in Pytorch Paszke et al. [2019] and run on NVIDIA RTX A6000. For CIFAR-10N and CIFAR-10H experiments, we pre-trained ProMix Wang et al. [2022] with two ResNet-18 as the AI Prediction Module using the Rand1 Wei et al. [2021] annotation. For the multi-rater CIFAR10-IDN experiments, we pre-trained InstanceGM Garg et al. [2023] with two PreAct-ResNet-18 as the AI Prediction Module. For Chaoyang, following NSHE Zhu et al. [2021], two ResNet-34 are pre-trained using the label_A annotation, and the best-trained network is selected for the AI Prediction Module. All the above models were selected because of their SOTA performance in the respective datasets. For the Human-AI Selection Module, we utilise the same backbone of the pre-trained models. The Collaboration Module comprises a two-layer MLP with 512-dimensional hidden layers with a ReLU activation function. The pre-trained Promix on CIFAR-10N reaches 97.41% accuracy on the CIFAR-10 test set. On the multi-rater CIFAR-10 IDN, the pre-trained InstanceGM reaches accuracy 96.64%, 96.52%, 96.36% and 95.90% for noise rates 0.2, 0.3, 0.4 and 0.5. The pre-trained NSHE reaches 82.44% accuracy on Chaoyang.\nTraining and evaluation details. We train our human-AI system with 200 epochs via an SGD with a momentum of 0.9 and a weight decay of 0.0005. The batch size is fixed as 256 for CIFAR and 96 for Chaoyang. Furthermore, the initial learning rate is 0.05, which decays by a cosine scheduler. The Gumbel softmax temperature parameter of the Human-AI Selection Module is 5. For our system's training and testing, we randomly shuffled the order of human annotations, which aims to fit real-world noisy conditions. The ground truth training labels are set to be the consensus label from CROWDLAB. 
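To make the optimisation in Eqs. 2-4 concrete, the following is a minimal PyTorch-style sketch of one LECOMH training step using these consensus labels. It assumes a frozen, LNL-pre-trained AI Prediction Module, a Human-AI Selection Module that outputs M+1 logits, and a Collaboration Module that consumes the concatenated AI and (masked) expert predictions; the tensor shapes and the names `ai_model`, `selector` and `collaborator` are illustrative assumptions, not identifiers from a released implementation.

```python
import torch
import torch.nn.functional as F

def lecomh_training_step(ai_model, selector, collaborator, x, expert_onehots,
                         y_consensus, lam=1.0, tau=5.0):
    """One training step of Eq. 2. x: [B,C,H,W]; expert_onehots: [B,M,|Y|]
    (already shuffled, as in shf(M)); y_consensus: [B] CROWDLAB consensus labels."""
    B, M, num_classes = expert_onehots.shape
    with torch.no_grad():                                   # AI Prediction Module is pre-trained and frozen
        ai_pred = ai_model(x).softmax(dim=-1)

    sel_logits = selector(x)                                # [B, M+1]: AI alone, AI+1, ..., AI+M users
    sel_probs = sel_logits.softmax(dim=-1)
    sel_hard = F.gumbel_softmax(sel_logits, tau=tau, hard=True)  # differentiable one-hot choice

    # Eq. 3: collaboration format j keeps the first j expert annotations and zeroes out the rest.
    keep = torch.tril(torch.ones(M + 1, M, device=x.device), diagonal=-1)
    expert_mask = sel_hard @ keep                           # [B, M]
    masked_experts = expert_onehots * expert_mask.unsqueeze(-1)

    collab_in = torch.cat([ai_pred, masked_experts.flatten(1)], dim=1)
    logits = collaborator(collab_in)                        # two-layer MLP in the paper

    # Eq. 4: expected number of queried experts (0, 1, ..., M).
    weights = torch.arange(M + 1, device=x.device, dtype=sel_probs.dtype)
    cost = (sel_probs * weights).sum(dim=-1)

    return F.cross_entropy(logits, y_consensus) + lam * cost.mean()
```

Sweeping `lam` in this sketch trades classification accuracy against collaboration cost, which is how accuracy-versus-cost operating points are obtained.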
For testing these methods, we randomly sampled annotations from the human annotation pools for each sample.
Baselines. Following Mozannar et al. [2023], we compare our method with single expert human-AI collaboration (SEHAICO) methods, which include the cross entropy surrogate Mozannar and Sontag [2020] (CE), the one-vs-all-based surrogate Verma and Nalisnick [2022] (OvA), selective prediction that thresholds classifier confidence for the rejector Mozannar et al. [2023] (SP), the confidence method Raghu et al. [2019] (CC), differentiable triage Okati et al. [2021] (DIFT), mixture of experts Madras et al. [2018] (MoE), and RealizableSurrogate Mozannar et al. [2023] (RS). For the experiments with these SEHAICO methods, we use either randomly sampled annotations or aggregation (majority voting) annotations to simulate a single expert from the human annotation pools. To determine the collaboration cost for all methods mentioned above, we sort the testing images based on their rejection scores and then adjust the threshold that determines which testing cases are annotated by users Mozannar et al. [2023]. We also compare LECOMH with methods that defer to multiple experts (MEHAICO): classifier and expert team (CET) Hemmer et al. [2022], and learning to defer to multiple experts (Multi_L2D) Verma et al. [2023], where for both models, we fix the number of experts at three (i.e., the maximum number of experts available in our benchmarks), and report only their final accuracy and cost results. For a fair comparison, all classification backbones for the {SE, ME}HAICO methods have the same architecture.
Figure 3: Test accuracy vs. collaboration (system) cost of LECOMH (Ours) and competing SEHAICO Mozannar et al. [2023] and MEHAICO Hemmer et al. [2022], Verma et al. [2023] methods. The SEHAICO methods are always pre-trained with LNL techniques, with the single user being simulated with either aggregation (majority voting) or random selection from the pool of three annotators. The MEHAICO methods show results with and without LNL and they rely on three experts, resulting in a single point of accuracy vs. cost in each graph. We threshold accuracy at cost=10000. " }, { "figure_ref": [], "heading": "Results and Discussion", "publication_ref": [], "table_ref": [], "text": "For CIFAR-10, we calculate the total cost with Eq 4 and adjust λ in Eq 2 during training to increase or reduce our LECOMH's cost. Therefore, we have a minimum cost of 0 (all testing cases predicted by AI alone) and a maximum cost of 30000 (all testing cases predicted by AI + 3 users) for the 10K test images. For Chaoyang, we rescale the total cost from [0, 3×1435] to [0, 30000] to facilitate the comparison between datasets. The total cost of SEHAICO methods is in [0, 10000] because only one user per image is allowed to be used. For our LECOMH, accuracy is assessed with cost in the range of [0, 10000], and for MEHAICO methods, if cost > 10000, then we plot the accuracy at cost=10000.
We show test accuracy vs. collaboration cost in Figure 3. According to these curves, our method consistently outperforms {SE,ME}HAICO methods in terms of classification accuracy, for all collaboration costs in all benchmarks.
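As a small illustration of the cost accounting above, the snippet below converts the number of expert labels queried on a test set into the shared [0, 30000] axis used to compare datasets of different sizes; the helper name and the example counts are hypothetical.

```python
import numpy as np

def system_cost(experts_per_image, max_experts=3, axis_max=30000):
    """Total expert labels queried, rescaled to the shared [0, axis_max] cost axis."""
    raw = int(np.sum(experts_per_image))                 # labels actually requested
    full = max_experts * len(experts_per_image)          # cost if every image used all experts
    return axis_max * raw / full

# Hypothetical Chaoyang-sized example: 200 of 1435 test patches use all three experts.
counts = np.zeros(1435, dtype=int)
counts[:200] = 3
print(system_cost(counts))   # ~4181 on the 0-30000 axis
```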
These results validate the superiority of our method compared to the current SOTA in human-AI collaboration problems.
Given that all methods use the same backbone models trained with SOTA LNL methods, and that the consensus label is obtained from multi-rater learning for all approaches, such differences can be explained by the superior human-AI collaboration technique proposed in this paper, which consists of collaborating with (not deferring to) multiple humans (instead of a single human).
A crucial observation to understand the graphs in Figure 3 is that at cost=0, all LNL pre-trained methods show the performance of the LNL AI Prediction Module, and all competing SEHAICO methods at cost=10000 show the human accuracy in the benchmark. This happens because at cost=10000, all predictions come from humans in the SEHAICO methods. For example, for CIFAR-10H, it is clear that the LNL-trained AI Prediction Module has higher accuracy than humans, but for Chaoyang, we reach the opposite conclusion, i.e., the AI Prediction Module has lower accuracy than pathologists; for the IDN benchmarks, the AI Prediction Module is always better, and the human performance is coherent with the benchmark (e.g., for IDN20, humans have around 80% accuracy). Given that knowledge, it is remarkable that our LECOMH always produces a human-AI collaboration accuracy that consistently increases with the collaboration cost, where the final accuracy is always better than the original human accuracy. This contrasts with competing SEHAICO approaches, which usually exhibit an accuracy that grows with cost up to a certain point and then shrinks to the original human accuracy in the benchmark. Also, SEHAICO methods achieve peak performance at a cost that is not at the extremes of [0, 10000], which demonstrates that maximising the integration of expert information into the system is not the best choice. Furthermore, even though both our method and competing SEHAICO methods can improve human-AI team performance in practice, our LECOMH does not show a decreasing accuracy, with respect to an increasing annotation cost, after reaching the peak performance. This illustrates that our method can minimise the impact of expert prediction errors and better integrate expert information to make correct decisions. When the AI model outperforms humans (e.g. CIFAR-10H), both our method and L2D methods can show better predictions than humans or AI alone. In contrast, if humans perform better than models (e.g. Chaoyang experiments), the accuracy of SEHAICO methods is limited by expert information and cannot improve significantly. However, the accuracy of our method's predictions exceeds that of both the model and humans. MEHAICO clearly outperforms SEHAICO for large collaboration costs (e.g., CIFAR-10H and Chaoyang), indicating the value of collaborating with multiple users, but it still shows worse performance than our LECOMH. For the IDN benchmarks, CET (w. LNL) always shows a high cost, with better accuracy than SEHAICO methods and worse accuracy than LECOMH. CET (w/o LNL) tends to show much lower costs in [100, 3000], but lower accuracy than all other methods, suggesting the importance of LNL in this setting. Multi_L2D tends to show low cost and competitive accuracy, similar to that of the LNL AI Prediction Module.
Another observation from Figure 3 is the behaviour of all approaches at high noise rate scenarios, exemplified by the IDN40 and IDN50 benchmarks.
In such cases, human annotations become very unreliable, so the effectiveness of the human-AI collaboration for all approaches is diminished, and we no longer see the accuracy improvement displayed in other benchmarks. Nevertheless, even though our LECOMH does not considerably improve accuracy in this regime, it does not decrease it either. This is in contrast to the SEHAICO methods, which show a sharp decrease as collaboration costs increase. Regarding MEHAICO, it shows better accuracy than SEHAICO, but worse than LECOMH, for high collaboration costs; when these methods operate at low collaboration costs, their results are similar to those of the AI Prediction Module. This suggests that effective human-AI collaboration requires the predictions of human experts to reach a certain level of accuracy." }, { "figure_ref": [ "fig_3", "fig_5" ], "heading": "Ablation Studies", "publication_ref": [ "b37", "b29", "b15" ], "table_ref": [], "text": "We start the ablation study with a verification of the importance of: 1) the pre-training of the AI Prediction Module with label noise learning (LNL) techniques Wang et al. [2022], Garg et al. [2023], 2) the multi-rater learning approach CROWDLAB Goh et al. [2022], and 3) the reliance on multiple users (rather than a single user) for collaborating with the system. Figure 4 shows this study on the CIFAR-10H, Chaoyang, IDN20 and IDN50 benchmarks, where we compare the performance of the proposed LECOMH against LECOMH without label noise learning pre-training (LECOMH w/o LNL), where we train the classifier using a standard cross-entropy loss with early stopping (i.e., we halt the training after 30 epochs, which typically shows some robustness to label noise Li et al. [2020]). Results suggest that when the AI Prediction Module is more accurate than the original human labels (e.g., CIFAR-10H, IDN{20,50}), the training without LNL always performs worse than with LNL, particularly for low collaboration costs, and when the noise rate is large (i.e., IDN50 with 50% noise rate), the lack of an LNL pre-training is catastrophic regardless of the collaboration cost. On Chaoyang, the lack of an LNL pre-training is noticeable only when the cost is lower than 6000. We also compare with methods that replace CROWDLAB Goh et al. [2022] with simpler approaches that produce a consensus label using majority voting (LECOMH w. aggregation label) or using a randomly sampled training label (LECOMH w. random label). Results show that for all cases, except Chaoyang at high collaboration cost, the lack of a multi-rater learning procedure has a negative impact. Such a disadvantage is quite noticeable in high noise rate problems (e.g., IDN50), but it is also evident for lower noise rates (e.g., IDN20) when using a random label as consensus. The last comparison is with methods that use a single user as a collaborator, where this user is represented by aggregating all testing users with majority voting (w. SH-aggregation) or by randomly selecting one of the testing users (w. SH-random). The role of multiple users is clear in all problems with low noise rate (CIFAR-10H, Chaoyang, IDN20), where LECOMH with aggregation or with random selection always performs worse than with three users, particularly with increasing costs.
Interestingly, for IDN50, the reliance on multiple users does not affect LECOMH, which means that for large noise rates, it does not matter if single or multiple users are explored for training and testing.\nAnother important point of our method is the understanding of the role of λ in Eq 2, which controls the importance of the collaboration cost in our optimisation. Figure 5 shows the accuracy and cost as a function of λ for LECOMH on CIFAR-10H, Chaoyang, IDN20 and IDN50. As expected, when we increase λ, the cost term is weighted more, which means that human-AI collaborations will become rarer, reducing cost and test accuracy. On the contrary, decreasing λ shows the opposite effect, i.e., more human-AI collaborations, resulting in higher accuracy.\nTable 1 shows several examples from CIFAR-10H test set, where for each case, we show the test image, the set of labels provided by humans (denoted by M), the prediction of the AI Prediction Module f θ (.), the prediction probability vector by the Human-AI Selection Module g ϕ (.) representing [AI prediction (1st value), AI + 1 User (2nd value), AI + 2 Users (3rd value), AI + 3 Users (4th value)], the final prediction by the Collaboration Module h ψ (.), and the ground truth (GT) label. Note that in all cases in this table, when the pre-trained model f θ (.) makes a mistake, but the majority of users are correct, the final prediction by the Collaboration Module h ψ (.) is correct. Another interesting point is that the Human-AI Selection Module g ϕ (.) always has a large probability allocated to the AI Prediction Module alone, with the second largest probability allocated to the AI + 3 Users. So, in practice, the system will almost always use the AI prediction alone, but in about 5% to 10% of the predictions, we will have the prediction from AI + 3 Users, which almost guarantees a correct prediction by the Collaboration Module h ψ (.)." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We introduced new benchmarks and innovative LECOMH methodology that simultaneously handles noisy-label learning, multi-rater learning, and human-AI collaboration by exploring an innovative learning to complement with multiple humans strategy. Our approach consists of training a human-AI collaboration method that can automatically determine if and how many humans collaborate with the AI model during testing with an optimisation that maximises classification accuracy and minimises collaboration costs. We compare state-of-the-art human-AI collaboration methods using our new benchmarks and show that our LECOMH produces better classification results, where accuracy always increases with collaboration cost, measured by the number of labellings provided by humans. Furthermore, LECOMH is the only method that improves the performance of human labellers in all benchmarks.\nLimitation: The major limitation of LECOMH is the assumption that labellers have similar performance and we do not try to characterise them to adapt the system to perform well for that particular user. We will address this issue by exploring a strategy where labellers are charaterised before starting to interact with the system, so the system will be able to better adapt to the user's performance. By improving the performance of users who interact with AI systems, we believe that LECOMH has a potential benefit to society given the more accurate outcomes produced by the system and the generally improved performance of labellers." } ]
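To complement the description above, here is a minimal sketch of LECOMH inference: the Human-AI Selection Module decides how many users to query, randomly chosen users supply labels, and the Collaboration Module fuses them with the AI prediction. Module names, the structure of the user pool, and the label encoding are illustrative assumptions rather than code from a released implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def lecomh_predict(ai_model, selector, collaborator, x, user_pool, num_classes):
    """x: [B,C,H,W]; user_pool: list of per-user label vectors, user_pool[u][i] in {0..|Y|-1}."""
    ai_pred = ai_model(x).softmax(dim=-1)                        # [B, |Y|]
    sel = F.gumbel_softmax(selector(x), tau=5.0, hard=True)      # [B, M+1] one-hot format choice
    k = sel.argmax(dim=-1)                                       # experts to query per image (0..M)

    B, M = x.size(0), sel.size(1) - 1
    experts = torch.zeros(B, M, num_classes, device=x.device)
    for i in range(B):
        kk = int(k[i])
        if kk > 0:                                               # query kk randomly chosen users
            users = torch.randperm(len(user_pool))[:kk]
            for slot, u in enumerate(users.tolist()):
                experts[i, slot, int(user_pool[u][i])] = 1.0     # one-hot expert label

    logits = collaborator(torch.cat([ai_pred, experts.flatten(1)], dim=1))
    return logits.argmax(dim=-1), k                              # prediction and per-image cost
```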
The advent of learning with noisy labels (LNL), multi-rater learning, and human-AI collaboration has revolutionised the development of robust classifiers, enabling them to address the challenges posed by different types of data imperfections and complex decision processes commonly encountered in real-world applications. While each of these methodologies has individually made significant strides in addressing their unique challenges, the development of techniques that can simultaneously tackle these three problems remains underexplored. This paper addresses this research gap by integrating noisy-label learning, multi-rater learning, and human-AI collaboration with new benchmarks and the innovative Learning to Complement with Multiple Humans (LECOMH) approach. LECOMH optimises the level of human collaboration during testing, aiming to optimise classification accuracy while minimising collaboration costs that vary from 0 to M , where M is the maximum number of human collaborators. We quantitatively compare LECOMH with leading human-AI collaboration methods using our proposed benchmarks. LECOMH consistently outperforms the competition, with accuracy improving as collaboration costs increase. Notably, LECOMH is the only method enhancing human labeller performance across all benchmarks.
LEARNING TO COMPLEMENT WITH MULTIPLE HUMANS (LECOMH): INTEGRATING MULTI-RATER AND NOISY-LABEL LEARNING INTO HUMAN-AI COLLABORATION
[ { "figure_caption": "Figure 2 :2Figure 2: LECOMH training begins with AI Prediction Module pre-training using LNL strategy Wang et al. [2022], Garg et al. [2023]. We then apply the multi-rater learning method CROWDLAB Goh et al. [2022] to combine multiple user labels with AI predictions, creating a consensus label. The LECOMH training aims to maximise classification accuracy and minimise collaboration costs through three iterative steps: 1) building the set of AI prediction and users labels, 2) training of the Human-AI Selection Module to estimate the number of users (from 0 to 3, in the diagram) to collaborate with the AI Prediction Module, and 3) training of the Collaboration Module that takes the AI prediction and the selected users' labels to produce a final classification. Testing involves steps 1 to 3 in the diagram to generate the final prediction.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "[2022], and training of LECOMH's Human-AI Selection and Collaboration Modules. Below, we provide more details.Pre-training with noisy labels: we use LNL techniquesWang et al. [2022],Garg et al. [2023],Zhu et al. [2021] to train the AI Prediction Module f θ : X → ∆ |Y|-1 , where ∆ |Y|-1 denotes the |Y|-dimensional probability simplex, and θ ∈ Θ denotes the model parameters. This LNL training uses the training set, where the noisy label per image x i is randomly selected as one of the experts' annotations in M i .Generating consensus labels from the multiple raters of the training set: we leverage the multi-rater learning method CROWDLABGoh et al. [2022] that takes the training images and experts' labels (x, M) ∈ D, together with the AI classifier's predictions ŷ = f θ (x) for each sample in D to produce a consensus label ŷc ∈ Y and a quality (or confidence) score α. The consensus label dataset is formed with:", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Ablation study showing test accuracy vs collaboration cost of LECOMH (w/o LNL), denoting LECOMH without using noisy-label learning, LECOMH (w. aggregation label) and LECOMH (w. random label), representing LECOMH training with a consensus label using majority vote and a randomly sampled training label, respectively, and LECOMH (w. SH-aggregation) and LECOMH (w. SH-random), denoting the reliance on a collaboration with single users (rather than multiple users) formed by aggregating labels and randomly selecting labels, respectively.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Test accuracy and collaboration cost as a function of λ.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "(.) is the AI Prediction Module's classification, g ϕ (.) represents the Human-AI Selection prediction probability vector for [AI prediction (1st value), AI + 1 User (2nd value), AI + 2 Users (3rd value), AI + 3 Users (4th value)], h ψ (.) is the final prediction from the Collaboration Module, and GT denotes the ground truth label.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ".[2023], and hybrid methodsNguyen et al. [2020],Jiang et al. [2020]. Nevertheless, existing LNL SOTA methods predominantly rely on SSL techniques. 
These methods are designed to classify clean training samples, enhancing the robustness of the training process. For instance, DivideMixLi et al. [2020] integrates MixMatchBerthelot et al. [2019] and Co-teachingHan et al. [2018] to harness the potential of samples classified as noisy. Adhering to this paradigm, several LNL studies employ MixUpZhang et al. [2017] within Semi-Supervised Learning (SSL), coupled with regularising loss functionsCordeiro et al. [2023],Sachdeva et al. [2023],Zhu et al. [2021]. Other LNL SOTA methods introduce graphical models to work together with SSL methods", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "methods have the same architecture. All SEHAICO methods rely on LNL pre-training because they provide better results for all cases, but for MEHAICO methods, we show training with (w. LNL) and without (w/o LNL) LNL pre-training. For all {SE,ME}HAICO methods,", "figure_data": "0.9900.9901.01.0Test Accuracy0.955 0.960 0.965 0.970 0.975 0.980 0.985LECOMH (Ours) CET (w/o LNL) CET (w. LNL) Multi_L2D (w/o LNL) Multi_L2D (w. LNL) RS CE OvA SP CCTest Accuracy0.955 0.960 0.965 0.970 0.975 0.980 0.985LECOMH (Ours) CET (w/o LNL) CET (w. LNL) Multi_L2D (w/o LNL) Multi_L2D (w. LNL) RS CE OvA SP CCTest Accuracy0.7 0.8 0.9 0.6LECOMH (Ours) CET (w/o LNL) CET (w. LNL) Multi_L2D (w/o LNL) Multi_L2D (w. LNL) RS CE OvA SP CCTest Accuracy0.9 0.6 0.7 0.8 0.5LECOMH (Ours) CET (w/o LNL) CET (w. LNL) Multi_L2D (w/o LNL) Multi_L2D (w. LNL) RS CE OvA SP CC0.9500DIFT MoE2000400060008000100000.9500DIFT MoE2000400060008000100000.50DIFT MoE2000400060008000100000.40DIFT MoE200040006000800010000System CostSystem CostSystem CostSystem Cost(a) CIFAR-10H-aggregation.(b) CIFAR-10H-random.(c) Chaoyang-aggregation.(d) Chaoyang-random.0.9750.9750.950.95Test Accuracy0.800 0.850 0.875 0.900 0.925 0.950 0.825LECOMH (Ours) CET (w/o LNL) CET (w. LNL) Multi_L2D (w/o LNL) Multi_L2D (w. LNL) RS CE OvA SP CC DIFT MoETest Accuracy0.825 0.850 0.875 0.900 0.925 0.950 0.800LECOMH (Ours) CET (w/o LNL) CET (w. LNL) Multi_L2D (w/o LNL) Multi_L2D (w. LNL) RS CE OvA SP CC DIFT MoETest Accuracy0.80 0.85 0.90 0.75 0.70LECOMH (Ours) CET (w/o LNL) CET (w. LNL) Multi_L2D (w/o LNL) Multi_L2D (w. LNL) RS CE OvA SP CC DIFT MoETest Accuracy0.80 0.85 0.90 0.75 0.70LECOMH (Ours) CET (w/o LNL) CET (w. LNL) Multi_L2D (w/o LNL) Multi_L2D (w. LNL) RS CE OvA SP CC DIFT MoE0200040006000800010000020004000600080001000002000400060008000100000200040006000800010000System CostSystem CostSystem CostSystem Cost(e) IDN20-aggregation.(f) IDN20-random.(g) IDN30-aggregation.(h) IDN30-random.0.950.950.90.9Test Accuracy0.65 0.70 0.75 0.80 0.85 0.90LECOMH (Ours) CET (w/o LNL) CET (w. LNL) Multi_L2D (w/o LNL) Multi_L2D (w. LNL) RS CE OvA SP CC DIFTTest Accuracy0.70 0.75 0.80 0.85 0.90 0.65LECOMH (Ours) CET (w/o LNL) CET (w. LNL) Multi_L2D (w/o LNL) Multi_L2D (w. LNL) RS CE OvA SP CC DIFTTest Accuracy0.6 0.7 0.8LECOMH (Ours) CET (w/o LNL) CET (w. LNL) Multi_L2D (w/o LNL) Multi_L2D (w. LNL) RS CE OvA SP CC DIFTTest Accuracy0.7 0.8 0.6LECOMH (Ours) CET (w/o LNL) CET (w. LNL) Multi_L2D (w/o LNL) Multi_L2D (w. LNL) RS CE OvA SP CC DIFT0.60MoE0.60MoE0.5MoE0.5MoE0200040006000800010000020004000600080001000002000400060008000100000200040006000800010000System CostSystem CostSystem CostSystem Cost(i) IDN40-aggregation.(j) IDN40-random.(k) IDN50-aggregation.", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" } ]
Zheng Zhang; Kevin Wells; Gustavo Carneiro
[ { "authors": "Hwanjun Song; Minseok Kim; Dongmin Park; Yooju Shin; Jae-Gil Lee", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b0", "title": "Learning from noisy labels with deep neural networks: A survey", "year": "2022" }, { "authors": "Wei Ji; Shuang Yu; Junde Wu; Kai Ma; Cheng Bian; Qi Bi; Jingjing Li; Hanruo Liu; Li Cheng; Yefeng Zheng", "journal": "", "ref_id": "b1", "title": "Learning calibrated medical image segmentation via multi-rater agreement modeling", "year": "2021" }, { "authors": "Allan Dafoe; Yoram Bachrach; Gillian Hadfield; Eric Horvitz; Kate Larson; Thore Graepel", "journal": "Nature", "ref_id": "b2", "title": "Cooperative ai: machines must learn to find common ground", "year": "2021" }, { "authors": "Lucy M Mark D Halling-Brown; Dominic Warren; Emma Ward; Alistair Lewis; Matthew G Mackenzie; Louise S Wallis; Rosalind M Wilkinson; Rita Given-Wilson; Kenneth C Mcavinchey; Young", "journal": "Radiology: Artificial Intelligence", "ref_id": "b3", "title": "Optimam mammography image database: a large-scale resource of mammography images and clinical data", "year": "2020" }, { "authors": "Maithra Raghu; Katy Blumer; Greg Corrado; Jon Kleinberg; Ziad Obermeyer; Sendhil Mullainathan", "journal": "", "ref_id": "b4", "title": "The algorithmic automation problem: Prediction, triage, and human effort", "year": "2019" }, { "authors": "David Madras; Toni Pitassi; Richard Zemel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b5", "title": "Predict responsibly: improving fairness and accuracy by learning to defer", "year": "2018" }, { "authors": "Patrick Hemmer; Sebastian Schellhammer; Michael Vössing; Johannes Jakubik; Gerhard Satzger", "journal": "", "ref_id": "b6", "title": "Forming effective human-ai teams: Building machine learning models that complement the capabilities of multiple experts", "year": "2022" }, { "authors": "Rajeev Verma; Daniel Barrejon; Eric Nalisnick", "journal": "PMLR", "ref_id": "b7", "title": "Learning to defer to multiple experts: Consistent surrogate losses, confidence calibration, and conformal ensembles", "year": "2023-04-27" }, { "authors": "Bryan Wilder; Eric Horvitz; Ece Kamar", "journal": "", "ref_id": "b8", "title": "Learning to complement humans", "year": "2021" }, { "authors": "Xingjiao Wu; Luwei Xiao; Yixuan Sun; Junhang Zhang; Tianlong Ma; Liang He", "journal": "Future Gener. Comput. Syst", "ref_id": "b9", "title": "A survey of human-inthe-loop for machine learning", "year": "2022-10" }, { "authors": "Ben Green; Yiling Chen", "journal": "CSCW", "ref_id": "b10", "title": "The principles and limits of algorithm-in-the-loop decision making", "year": "2019-11" }, { "authors": "Hussein Mozannar; Hunter Lang; Dennis Wei; Prasanna Sattigeri; Subhro Das; David Sontag", "journal": "PMLR", "ref_id": "b11", "title": "Who should predict? 
exact algorithms for learning to defer to humans", "year": "2023-04-27" }, { "authors": "Nontawat Charoenphakdee; Jongyeong Lee; Masashi Sugiyama", "journal": "PMLR", "ref_id": "b12", "title": "On symmetric losses for learning from corrupted labels", "year": "2019" }, { "authors": "Zhilu Zhang; Mert Sabuncu", "journal": "", "ref_id": "b13", "title": "Generalized cross entropy loss for training deep neural networks with noisy labels", "year": "2018" }, { "authors": "Aritra Ghosh; Himanshu Kumar; Shanti Sastry", "journal": "", "ref_id": "b14", "title": "Robust loss functions under label noise for deep neural networks", "year": "2017" }, { "authors": "Junnan Li; Richard Socher; Steven Ch Hoi", "journal": "", "ref_id": "b15", "title": "Dividemix: Learning with noisy labels as semi-supervised learning", "year": "2020" }, { "authors": "Lu Jiang; Zhengyuan Zhou; Thomas Leung; Li-Jia Li; Li Fei-Fei", "journal": "", "ref_id": "b16", "title": "Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels", "year": "2018" }, { "authors": "Bo Han; Quanming Yao; Xingrui Yu; Gang Niu; Miao Xu; Weihua Hu; Ivor Tsang; Masashi Sugiyama", "journal": "Advances in neural information processing systems", "ref_id": "b17", "title": "Co-teaching: Robust training of deep neural networks with extremely noisy labels", "year": "2018" }, { "authors": "Bodi Yuan; Jianyu Chen; Weidong Zhang; Hung-Shuo Tai; Sara Mcmains", "journal": "", "ref_id": "b18", "title": "Iterative cross learning on noisy labels", "year": "2018" }, { "authors": "Lee Jaehwan; Kim Yoo Donggeun; Hyo-Eun", "journal": "", "ref_id": "b19", "title": "Photometric transformer networks and label adjustment for breast density prediction", "year": "2019" }, { "authors": "Diego Ortego; Eric Arazo; Paul Albert; E O' Noel; Kevin Connor; Mcguinness", "journal": "IEEE", "ref_id": "b20", "title": "Towards robust learning with different label noise distributions", "year": "2021" }, { "authors": "Diego Ortego; Eric Arazo; Paul Albert; E O' Noel; Kevin Connor; Mcguinness", "journal": "", "ref_id": "b21", "title": "Multi-objective interpolation training for robustness to label noise", "year": "2021" }, { "authors": "Pengfei Chen; Junjie Ye; Guangyong Chen; Jingwei Zhao; Pheng-Ann Heng", "journal": "", "ref_id": "b22", "title": "Beyond class-conditional assumption: A primary attempt to combat instance-dependent label noise", "year": "2021" }, { "authors": "Eric Arazo; Diego Ortego; Paul Albert; O' Noel; Kevin Connor; Mcguinness", "journal": "", "ref_id": "b23", "title": "Unsupervised label noise modeling and loss correction", "year": "2019" }, { "authors": "Mengye Ren; Wenyuan Zeng; Binh Yang; R Urtasun", "journal": "", "ref_id": "b24", "title": "Learning to reweight examples for robust deep learning", "year": "2018" }, { "authors": "Zizhao Zhang; Han Zhang; Sercan Ö Arik; Honglak Lee; Tomas Pfister", "journal": "", "ref_id": "b25", "title": "Distilling effective supervision from severe label noise", "year": "2020" }, { "authors": "Zizhao Zhang; Tomas Pfister", "journal": "", "ref_id": "b26", "title": "Learning fast sample re-weighting without reward data", "year": "2021" }, { "authors": "Youjiang Xu; Linchao Zhu; Lu Jiang; Yi Yang", "journal": "", "ref_id": "b27", "title": "Faster meta update strategy for noise-robust deep learning", "year": "2021" }, { "authors": "Jun Shu; Qi Xie; Lixuan Yi; Qian Zhao; Sanping Zhou; Zongben Xu; Deyu Meng", "journal": "", "ref_id": "b28", "title": "Meta-weight-net: Learning an explicit mapping for sample 
weighting", "year": "2019" }, { "authors": "Arpit Garg; Cuong Nguyen; Rafael Felix; Thanh-Toan Do; Gustavo Carneiro", "journal": "", "ref_id": "b29", "title": "Instance-dependent noisy label learning via graphical modelling", "year": "2023" }, { "authors": "Tam Nguyen; C Mummadi; T Ngo; L Beggel; Thomas Brox", "journal": "", "ref_id": "b30", "title": "SELF: learning to filter noisy labels with selfensembling", "year": "2020" }, { "authors": "Lu Jiang; Di Huang; Mason Liu; Weilong Yang", "journal": "", "ref_id": "b31", "title": "Beyond synthetic noise: Deep learning on controlled noisy labels", "year": "2020" }, { "authors": "David Berthelot; Nicholas Carlini; Ian Goodfellow; Nicolas Papernot; Avital Oliver; Colin A Raffel", "journal": "Advances in neural information processing systems", "ref_id": "b32", "title": "Mixmatch: A holistic approach to semi-supervised learning", "year": "2019" }, { "authors": "Hongyi Zhang; Moustapha Cisse; David Yann N Dauphin; Lopez-Paz", "journal": "", "ref_id": "b33", "title": "mixup: Beyond empirical risk minimization", "year": "2017" }, { "authors": "Ragav Filipe R Cordeiro; Vasileios Sachdeva; Ian Belagiannis; Gustavo Reid; Carneiro", "journal": "Pattern Recognition", "ref_id": "b34", "title": "Longremix: Robust learning with high confidence samples in a noisy label environment", "year": "2023" }, { "authors": "Ragav Sachdeva; Filipe Rolim Cordeiro; Vasileios Belagiannis; Ian Reid; Gustavo Carneiro", "journal": "Pattern Recognition", "ref_id": "b35", "title": "Scanmix: learning from severe label noise via semantic clustering and semi-supervised learning", "year": "2023" }, { "authors": "Chuang Zhu; Wenkai Chen; Ting Peng; Ying Wang; Mulan Jin", "journal": "IEEE transactions on medical imaging", "ref_id": "b36", "title": "Hard sample aware noise robust learning for histopathology image classification", "year": "2021" }, { "authors": "Haobo Wang; Ruixuan Xiao; Yiwen Dong; Lei Feng; Junbo Zhao", "journal": "", "ref_id": "b37", "title": "Promix: combating label noise via maximizing clean sample utility", "year": "2022" }, { "authors": "Shipeng Vikas C Raykar; Linda H Yu; Anna Zhao; Charles Jerebko; Gerardo Hermosillo Florin; Luca Valadez; Linda Bogoni; Moy", "journal": "", "ref_id": "b38", "title": "Supervised learning from multiple experts: whom to trust when everyone lies a bit", "year": "2009" }, { "authors": "Melody Guan; Varun Gulshan; Andrew Dai; Geoffrey Hinton", "journal": "", "ref_id": "b39", "title": "Who said what: Modeling individual labelers improves classification", "year": "2018" }, { "authors": "Zahra Mirikharaji; Kumar Abhishek; Saeed Izadi; Ghassan Hamarneh", "journal": "", "ref_id": "b40", "title": "D-lema: Deep learning ensembles from multiple annotations-application to skin lesion segmentation", "year": "2021" }, { "authors": "Ashish Khetan; Zachary C Lipton; Anima Anandkumar", "journal": "", "ref_id": "b41", "title": "Learning from noisy singly-labeled data", "year": "2017" }, { "authors": "Ryutaro Tanno; Ardavan Saeedi; Swami Sankaranarayanan; Nathan Daniel C Alexander; Silberman", "journal": "", "ref_id": "b42", "title": "Learning from noisy labels by regularized estimation of annotator confusion", "year": "2019" }, { "authors": "Junde Wu; Huihui Fang; Zhaowei Wang; Dalu Yang; Yehui Yang; Fangxin Shang; Wenshuo Zhou; Yanwu Xu", "journal": "Springer", "ref_id": "b43", "title": "Learning self-calibrated optic disc and cup segmentation from multi-rater annotations", "year": "2022" }, { "authors": "Zhi Cao; Enhong Chen; Ye Huang; Shuanghong 
Shen; Zhenya Huang", "journal": "", "ref_id": "b44", "title": "Learning from crowds with annotation reliability", "year": "2023" }, { "authors": "Zhengqi Gao; Fan-Keng Sun; Mingran Yang; Sucheng Ren; Zikai Xiong; Marc Engeler; Antonio Burazer; Linda Wildling; Luca Daniel; Duane S Boning", "journal": "Springer", "ref_id": "b45", "title": "Learning from multiple annotator noisy labels via sample-wise label fusion", "year": "2022" }, { "authors": "Wen Hui; Ulyana Goh; Jonas Tkachenko; Cleanlab Mueller; Cleanlab Cleanlab", "journal": "", "ref_id": "b46", "title": "Crowdlab: Supervised learning to infer consensus labels and quality scores for data with multiple annotators", "year": "2022" }, { "authors": "Erin K Chiou; John D Lee", "journal": "Human factors", "ref_id": "b47", "title": "Trusting automation: Designing for responsivity and resilience", "year": "2023" }, { "authors": "Zhuoran Lu; Ming Yin", "journal": "", "ref_id": "b48", "title": "Human reliance on machine learning models when performance feedback is limited: Heuristics and risks", "year": "2021" }, { "authors": "Ming Yin; Jennifer Wortman Vaughan; Hanna Wallach", "journal": "", "ref_id": "b49", "title": "Understanding the effect of accuracy on trust in machine learning models", "year": "2019" }, { "authors": "Donghee Shin", "journal": "International Journal of Human-Computer Studies", "ref_id": "b50", "title": "The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable ai", "year": "2021" }, { "authors": "Katharina Weitz; Dominik Schiller; Ruben Schlagowski; Tobias Huber; Elisabeth André", "journal": "", "ref_id": "b51", "title": "do you trust me?\" increasing user-trust by integrating virtual agents in explainable ai interaction design", "year": "2019" }, { "authors": "Amir Rosenfeld; Markus D Solbach; John K Tsotsos", "journal": "", "ref_id": "b52", "title": "Totally looks like-how humans compare, compared to machines", "year": "2018" }, { "authors": "Thomas Serre", "journal": "Annual review of vision science", "ref_id": "b53", "title": "Deep learning: the good, the bad, and the ugly", "year": "2019" }, { "authors": "Ece Kamar; Severin Hacker; Eric Horvitz", "journal": "", "ref_id": "b54", "title": "Combining human and machine intelligence in large-scale crowdsourcing", "year": "2012" }, { "authors": "Gagan Bansal; Besmira Nushi; Ece Kamar; Eric Horvitz; Daniel S Weld", "journal": "", "ref_id": "b55", "title": "Is the most accurate ai the best teammate? optimizing ai for teamwork", "year": "2021" }, { "authors": "Nikhil Agarwal; Alex Moehring; Pranav Rajpurkar; Tobias Salz", "journal": "National Bureau of Economic Research", "ref_id": "b56", "title": "Combining human expertise with artificial intelligence: experimental evidence from radiology", "year": "2023" }, { "authors": "Kailas Vodrahalli; Roxana Daneshjou; Tobias Gerstenberg; James Zou", "journal": "", "ref_id": "b57", "title": "Do humans trust advice more if it comes from ai? 
an analysis of human-ai interactions", "year": "2022" }, { "authors": "Corinna Cortes; Giulia Desalvo; Mehryar Mohri", "journal": "Springer", "ref_id": "b58", "title": "Learning with rejection", "year": "2016-10-19" }, { "authors": "Hussein Mozannar; David Sontag", "journal": "PMLR", "ref_id": "b59", "title": "Consistent estimators for learning to defer to an expert", "year": "2020" }, { "authors": "Rajeev Verma; Eric Nalisnick", "journal": "PMLR", "ref_id": "b60", "title": "Calibrated learning to defer with one-vs-all classifiers", "year": "2022" }, { "authors": "Nastaran Okati; Abir De; Manuel Rodriguez", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b61", "title": "Differentiable learning under triage", "year": "2021" }, { "authors": "Minghao Liu; Jiaheng Wei; Yang Liu; James Davis", "journal": "", "ref_id": "b62", "title": "Do humans and machines have the same eyes? human-machine perceptual differences on image classification", "year": "2023" }, { "authors": "Kevin Dj Strouse; Matt Mckee; Edward Botvinick; Richard Hughes; Everett", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b63", "title": "Collaborating with humans without human data", "year": "2021" }, { "authors": "Micah Carroll; Rohin Shah; Mark K Ho; Tom Griffiths; Sanjit Seshia; Pieter Abbeel; Anca Dragan", "journal": "Advances in neural information processing systems", "ref_id": "b64", "title": "On the utility of learning about humans for human-ai coordination", "year": "2019" }, { "authors": "Chao Yu; Jiaxuan Gao; Weilin Liu; Botian Xu; Hao Tang; Jiaqi Yang; Yu Wang; Yi Wu", "journal": "", "ref_id": "b65", "title": "Learning zero-shot cooperation with humans, assuming humans are biased", "year": "2023" }, { "authors": "Eric Jang; Shixiang Gu; Ben Poole", "journal": "", "ref_id": "b66", "title": "Categorical reparameterization with gumbel-softmax", "year": "2016" }, { "authors": "Alex Krizhevsky; Geoffrey Hinton", "journal": "", "ref_id": "b67", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Jiaheng Wei; Zhaowei Zhu; Hao Cheng; Tongliang Liu; Gang Niu; Yang Liu", "journal": "", "ref_id": "b68", "title": "Learning with noisy labels revisited: A study using real-world human annotations", "year": "2021" }, { "authors": "Ruairidh M Joshua C Peterson; Thomas L Battleday; Olga Griffiths; Russakovsky", "journal": "", "ref_id": "b69", "title": "Human uncertainty makes classification more robust", "year": "2019" }, { "authors": "Xiaobo Xia; Tongliang Liu; Bo Han; Mingming Gong; Jun Yu; Gang Niu; Masashi Sugiyama", "journal": "", "ref_id": "b70", "title": "Sample selection with uncertainty of losses for learning with noisy labels", "year": "2021" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga", "journal": "Advances in neural information processing systems", "ref_id": "b71", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 72, 686.53, 87.85, 14.07 ], "formula_id": "formula_0", "formula_text": "Let D = {x i , M i } |D| i=1" }, { "formula_coordinates": [ 4, 190.84, 675.98, 349.83, 25.62 ], "formula_id": "formula_1", "formula_text": "D c = {(x i , ŷc i , M i )|(x i , M i ) ∈ D, (ŷ i , α i ) = CrowdLab(x i , f θ (x i ), M i ), α i > 0.5},(1)" }, { "formula_coordinates": [ 5, 150.29, 118.31, 156.65, 24.81 ], "formula_id": "formula_2", "formula_text": "h ψ : ∆ |Y|-1 × ... × ∆ |Y|-1 M +1 times → ∆ |Y|-1" }, { "formula_coordinates": [ 5, 150.78, 177.31, 389.88, 43.65 ], "formula_id": "formula_3", "formula_text": "ϕ * , ψ * = arg min ϕ,ψ 1 |D c | (xi,ŷ c i ,Mi)∈D c ℓ (ŷ c i , h ψ (p (g ϕ (x i ), f θ (x i ), shf(M i )))) + λ × cost(g ϕ (x i )),(2)" }, { "formula_coordinates": [ 5, 71.64, 246.55, 469.03, 91.36 ], "formula_id": "formula_4", "formula_text": "p (g ϕ (x), f θ (x), shf(M)) =          [f θ (x), 0 |Y| , ..., 0 |Y| ] if max j g (j) ϕ (x) = g (1) ϕ (x) [f θ (x), m i,1 , ..., 0 |Y| ] if max j g (j) ϕ (x) = g (2) ϕ (x) ... [f θ (x), m i,1 , ..., m i,M ] if max j g (j) ϕ (x) = g (M +1) ϕ (x) ,(3) with g (j)" }, { "formula_coordinates": [ 5, 228.53, 358.26, 312.13, 30.32 ], "formula_id": "formula_5", "formula_text": "cost(g ϕ (x)) = M +1 j=1 g (j) ϕ (x) × (j -1),(4)" }, { "formula_coordinates": [ 5, 198.21, 413.05, 136.07, 14.3 ], "formula_id": "formula_6", "formula_text": "ϕ (x) = g (K) ϕ (x) for K ∈ [2, M ]," } ]
2023-11-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b22", "b31", "b32", "b9", "b38", "b11", "b36" ], "table_ref": [], "text": "Millimeter wave (mmWave) sensing is a burgeoning field with vast implications for surveillance, security [7,24], autonomous navigation [33,34], etc. MmWave radar sensors in particular have gained immense popularity in recent years, owing to their capability to discern objects' range and angles and even generate point clouds [36]. Robustness against lighting and atmospheric conditions primes mmWave radar for roles where conventional cameras and lidar falter [11,40]. Despite such advantages, realizing precise 3D object characterization with mmWave technology-a process critical for understanding complex scenes and behaviors-has been constrained by the limited spatial resolution of available mmWave sensors.\nRecently proposed data-driven mmWave sensing models [13,17,38,45,47], while offering potential for object classification, encounters barriers when advancing toward the nuanced goal of 3D mesh reconstruction. These barriers include the heavy reliance on large, diverse datasets, the difficulty in generalizing beyond learned object types, and inability to adapt to new radar hardware without extensive retraining.\nIn this work, we challenge the status quo by introducing DiffSBR, a new approach anchored by a differentiable radio frequency (RF) ray-tracing simulator that enables gradientbased 3D reconstruction. Central to our contribution is the advancement of a differentiable RF simulation capable of bridging the gap between sparse mmWave radar point clouds and detailed 3D object geometries. This novel simulator allows for the backpropagation of loss scalar, facilitating the fine-tuning of simulated parameters to mimic the real-world radar observations. By harnessing the power of differentiable programming within the RF domain, DiffSBR sets a precedent in mmWave-based 3D object characterization. DiffSBR transcends the constraints of data-hungry methods by allowing for the characterization of objects previously unseen by the radar, thus minimizing the need for exhaustive data collection. Our experiments on a variety of radar platforms and real-world scenes reveal that DiffSBR not only achieves remarkable accuracy in reconstructing object shapes and sizes, but also demonstrates an impressive ability to infer the 3D mesh of novel objects directly from sparse mmWave signals.\nThe key contributions of DiffSBR are two folds. First, we introduce an RF ray tracing simulator that can represent the temporal-frequency patterns of mmWave radar signals, along with their spatial propagation and interaction with objects. We design new mechanisms to make the entire simulator differentiable, so that it can be incorporated into a wide range of RF optimization problems. Second, leveraging the differentiable RF simulator, we formulate the radarbased 3D reconstruction as a gradient-driven optimization framework that matches virtual objects to measured radar point clouds. This framework departs from recently proposed data-driven approaches as it is easy to generalize and requires no radar training data. Comprehensive experiments on real radar hardware and in diverse environments demonstrate the effectiveness of our methods." 
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Millimeter-Wave (mmWave) Sensing", "publication_ref": [ "b35", "b9", "b38", "b42", "b39", "b42", "b18", "b11", "b36" ], "table_ref": [], "text": "MmWave sensing technologies recently garnered substantial interest in the domain of machine perception [36,37], largely attributed to their resilience under challenging environmental conditions, e.g., low light, smoke, rain, snow and fog [11,40,44]. Commercial mmWave automotive radar sensors can easily achieve multi-cm range (depth) resolution, owing to their high time resolution. However, their angular resolution is constrained by the antenna aperture [32], which is directly proportional to the number of antenna elements -analogous to the pixel count in a camera. Consequently, while these sensors can generate 3D point clouds, the resulting data points are notably sparse, typically amounting to mere dozens of points [36].\nEarlier studies of mmWave-based automotive perception primarily explored mmWave radars for obstacle detection [41,44]. More recent applications of mmWave sensing are imitating visual perception capabilities, such as gesture and posture tracking [20,23,25,25]. Despite these advancements, existing mmWave-based object characterization models predominantly rely on data-driven blackbox inference or black-box optimization [13,17,38,45,47], which suffers from generalization due to (i) highly diverse radar hardware and (ii) lack of large, diverse radar datasets. An RF simulator tailored for mmWave signals can mitigate such limitations. More importantly, the simulator must be differentiable so as to seamless integrate with existing neural network models or gradient-based optimization frameworks. DiffSBR marks an important step in filling this gap." }, { "figure_ref": [], "heading": "Computational Electromagnetics", "publication_ref": [ "b12", "b40", "b10", "b13", "b9", "b44", "b46" ], "table_ref": [], "text": "Computational electromagnetics (CEM) has emerged as a powerful tool for simulating RF propagation and scattering. CEM techniques numerically solve Maxwell's equations to model electromagnetic wave interactions with objects and environments. Historically, CEM relied on frequency and time domain methods, such as the finitedifference time-domain (FDTD) [31] technique. The finite element method (FEM) [16] and the method of moments (MoM) [14] have also seen extensive applications, particularly in antenna design. More recently, learning-based techniques, including neural networks [30] and Gaussian processes [42], have been explored for surrogate modeling. Conventional CEM methods often face restrictions in handling large simulation domains due to computational intensities. Contemporary research has leaned towards ray tracing for efficient large-scale propagation modeling [12,15].\nTo support cutting-edge wireless applications, like ambient computing or metamaterial design and intricate sensing [11,46,48], optimization-based methods are essential. Yet, many of the existing techniques fall short, either due to their non-differentiable nature or because their computational overheads render them unsuitable for iterative processes. In this context, DiffSBR emerges as a flexible computational electromagnetics simulator, tailor-made for optimization-centric tasks. 
This facilitates a more seamless fusion of electromagnetic waves with deep learning models and enables the execution of gradient-driven optimization." }, { "figure_ref": [], "heading": "Neural and Differentiable Rendering", "publication_ref": [ "b26", "b33", "b9", "b32" ], "table_ref": [], "text": "Neural rendering has seen rapid progress in recent years. Early works focused on neural techniques for novel view synthesis from a set of input views. These methods train networks to implicitly represent 3D scenes and render novel views through volumetric ray marching [28]. While able to generate high-quality results, they lack an explicit 3D representation and differentiability. More recent works have focused on building differentiable renderers to enable end-toend training for 3D reconstruction and novel view synthesis [18,27,35]. These differentiable renderers approximate the traditional graphics pipeline, enabling gradient-based optimization of 3D representations like meshes, point clouds, or implicit functions. However, current differentiable rendering techniques predominantly focus on the visual scenes. DiffSBR aims to extend the principles of differentiable rendering into the RF domain. This expansion promises potential benefits for a multitude of applications, such as radarbased human activity recognition, autonomous driving, and programmable environment based on metasurfaces [11,34]." }, { "figure_ref": [ "fig_0" ], "heading": "System Design", "publication_ref": [], "table_ref": [], "text": "DiffSBR adopts an iterative optimization framework for reconstructing 3D scenes from RF signals. Figure 1 illustrates its overall architecture and workflow. (i) It first initializes a parameterized 3D scene representation based on point clouds. (ii) The forward pass involves differentiable RF ray tracing to simulate radar signals based on the generated 3D scene, incorporating RF material properties and multi-antenna Multiple Input Multiple Output (MIMO) arrays on the radar. (iii) To assess if the generated 3D scene matches the real scene as the output, the simulated signals are compared to observed/received signals using a spatial multi-antenna loss. (iv) Given this loss function, stochastic gradient descent is employed to guide the iterative optimization, update the 3D scene parameters, and minimize this loss. The gradients are computed by backpropagation through differentiable RF ray tracing. (v) After iterative optimization, the refined 3D scene parameters constitute the final reconstructed 3D representation that closely matches the true scene, as the sensing output, serving for the downstream tasks. " }, { "figure_ref": [], "heading": "3D Scene Representation and Initialization", "publication_ref": [ "b27", "b35", "b24" ], "table_ref": [], "text": "To reconstruct 3D scenes from a few observations using iterative optimization, we define 3D scene representation and delineate the scene parameters to be optimized. Our method encompasses a diverse set of 3D representations, including surface-based representations like triangle meshes, implicit models such as the signed distance field (SDF), and comprehensive volumetric approaches. It may also incorporate emerging representations with NeRF (Neural Radiance Fields) [29] and 3D Gaussians [19]. In general, any alternative 3D representations can be used in DiffSBR as long as they are compatible with ray tracing and differentiable with respect to their control parameters. Parameterization. 
Given the sparsity of mmWave signals, it is crucial to parameterize the scene for specific downstream applications, thereby reducing the optimization search space and lowering ambiguity. We consider the following mainstream mmWave sensing applications for case studies:\n(i) 3D Bounding Box Detection: This is a primary application of mmWave sensing [37]. We adopt the transformation matrix of 3D mesh objects as the optimization parameter. Concurrently, positional encoding should be applied.\n(ii) Human Pose Estimation: To estimate humans posture via mmWave radar, we employ the SMPL model [26], which incorporates 69 parameters to control human postures and another 10 parameters for body shape adjustments.\n(iii) Unseen Object Reconstruction: Voxel-based meth-ods can be adopted for unseen object reconstruction, such as density field with triangulation. To mitigate ambiguity, one approach is to pre-train a voxel representation specifically for the target type of objects. Autoencoders can be employed to encode a high-dimensional voxel representation into a low-dimensional latent space, with subsequent optimizations performed within this latent space. The key advantages of this approach are the significant improvement in optimization efficiency and the reduction of ambiguity.\nNotably, training such a pre-trained encoder and decoder doesn't necessitate the collection of actual RF signal data, which can be labor-intensive and require specialized equipment. Instead, it suffices to leverage existing large-scale 3D model datasets.\nInitialization. Initialization from point clouds preprocessed from raw data has been previously demonstrated as an effective approach in prior works [19]. DiffSBR can be initialized from these point clouds from mmWave radar. The initialization procedure can be based on registration techniques to compute the initial parameters for the aforementioned scene representation." }, { "figure_ref": [], "heading": "Differentiable RF Ray Tracing", "publication_ref": [], "table_ref": [], "text": "Given a 3D scene initialized and parameterized by a continuous set Θ, which encapsulates elements such as radar pose, scene geometry, material properties, and dynamics, we need to generate the corresponding simulated radar signal Y. Besides, considering a scalar function derived from this radar signal exists, such as a desired loss function to be optimized, another aim of this approach is to backpropagate the gradient of the scalar with respect to all scene parameters in Θ.\nOur Differentiable RF Ray Tracing is designed to achieve these dual tasks of forward simulation and backward propagation." }, { "figure_ref": [], "heading": "Ray Tracing", "publication_ref": [ "b20" ], "table_ref": [], "text": "Ray Tracing Forward Simulation. Ray tracing, an established technique in computer graphics, has been used in in computational RF to estimate parameters such as time of flight, velocity, and signal strength of electromagnetic radiation. Besides, sensing processing in RF can also be a similar function to rendering processing in graphics. Therefore, inspired by the Rendering Equation in graphics, we introduce an \"RF Rendering Equation\" for RF sensing with Ray tracing to generate the simulated radar signal:\nS r (d, φ o ) = P t (φ o )F (φ o )G(φ o ) 4πd 2 PL(d) + Ω p(x) P t (x, ω i )F (ω i ) 4π|x| 2 + S r (x, ω i ) cos ω i , dω i ,(1)\nwhere S r (d, φ o ) is the received signal power density at distance d and direction φ o . 
P_t(·) signifies the transmitted power, with F(φ_o) and G(φ_o) denoting the directional gains of the transmitter and receiver, respectively. The term PL(d) represents the path loss over distance d. p(x) is the reflection coefficient at position x, and Ω encompasses the entire space of potential signal paths. ω_i describes the solid angle of the incident direction. The integral captures multipath contributions, with the recursive term S_r(x, ω_i) representing multiple reflections, akin to the rendering equation in graphics.
To understand how the signal propagates to each antenna within the RF rendering equation, ray tracing is used to simulate the interactions of electromagnetic waves (i.e., mmWave signals) with the generated 3D scene. Computing the RF response requires integration over all plausible RF paths perceived by the antennas. This can be mathematically represented as:
$$I = \int_{\mathcal{P}} f(p, \Theta)\, \mathrm{d}p, \tag{2}$$
where f depends on scene parameters Θ such as object positions, shapes, materials, etc., and p denotes a ray path. However, solving this integral is often analytically and computationally intractable. Monte Carlo methods provide a statistical approach by taking random samples to approximate the integral:
$$I \approx \hat{I} = \frac{1}{N} \sum_{i=1}^{N} f(p_i, \Theta), \tag{3}$$
where \hat{I} converges to I as N → ∞. By leveraging Monte Carlo ray tracing, accurate RF channel characteristics can be efficiently simulated while avoiding expensive full-wave solutions of Maxwell's equations, especially in intricate environments with rich multipath reflections.
Ray Tracing Backpropagation. To achieve backpropagation, we further design a differentiable ray tracing that computes the partial derivative of the final output with respect to each parameter of interest, denoted as θ ∈ Θ. The complexities arise from the composition of both continuous and discontinuous integrands within the function f. Continuous Integrands: Most functions in RF ray tracing are continuous. These include the antenna radiation pattern, path losses, and reflection/transmission coefficients. One example is the attenuation function A(·) that depends on the attenuation coefficient θ_A. Here, we decompose the function f into f′(·) and A(·), both of which are continuous with respect to θ_A:
$$I \approx E = \frac{1}{N} \sum_{i=1}^{N} f'(p_i, \theta_A) \times A(p_i, \theta_A). \tag{4}$$
Their partial derivatives can be calculated using automatic differentiation based on the chain rule:
$$\frac{\partial I}{\partial \theta_A} \approx \frac{\partial E}{\partial \theta_A} = \frac{1}{N} \sum_{i=1}^{N} \left[ f'(p_i, \theta_A)\, \frac{\partial A(p_i, \theta_A)}{\partial \theta_A} + A(p_i, \theta_A)\, \frac{\partial f'(p_i, \theta_A)}{\partial \theta_A} \right]. \tag{5}$$
Discontinuous Integrands: Discontinuous integrals in ray tracing arise from visibility changes due to geometric edges and occlusion. To overcome this problem, we employ the reparameterization method [27] that transforms non-differentiable integrals into differentiable ones using a change of variables.
Let f(p, θ) be a discontinuous integrand over \mathcal{P}, where θ denotes differentiable scene parameters. If a transformation T : \mathcal{Q} → \mathcal{P} exists, the integral can be reparameterized as:
$$\int_{\mathcal{P}} f(p, \theta)\, \mathrm{d}p = \int_{\mathcal{Q}} f(T(q, \theta), \theta)\, |\det J_T|\, \mathrm{d}q, \tag{6}$$
$$\frac{\partial I}{\partial \theta} = \int_{\mathcal{Q}} \left[ \frac{\partial f}{\partial \theta} + f\, \frac{\partial}{\partial \theta} \left( \log |\det J_T| \right) \right] \mathrm{d}q, \tag{7}$$
where J_T is the Jacobian of T. The key idea is to construct T such that the discontinuities of f(T(q, θ), θ) no longer depend on θ, enabling standard Monte Carlo integration and automatic differentiation. It is worth noting that while T is designed individually for each integral with discontinuities, common transformations for vertices are already established in [22, 27].
With this measure, the ray tracing components of the simulator become differentiable. 
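To make the continuous-integrand case concrete, the following minimal PyTorch sketch estimates the integral of Eq. 4 with Monte Carlo samples and obtains ∂I/∂θ_A by automatic differentiation, as in Eq. 5. The exponential attenuation model, the 1/(4πd²) spreading term, and the sampled path lengths are illustrative stand-ins, not the simulator's actual functions.
```python
import math
import torch

torch.manual_seed(0)

# Illustrative stand-in: N sampled path lengths (meters) used as Monte Carlo samples.
path_len = 2.0 + 8.0 * torch.rand(4096)

# Attenuation coefficient theta_A is the scene parameter we differentiate with respect to.
theta_A = torch.tensor(0.1, requires_grad=True)

def f_prime(d):
    # Continuous part of the integrand (here: free-space spreading), independent of theta_A.
    return 1.0 / (4.0 * math.pi * d ** 2)

def attenuation(d, theta):
    # Continuous attenuation A(p, theta_A); exponential decay is an assumed toy model.
    return torch.exp(-theta * d)

# Monte Carlo estimate of Eq. 4: mean over the sampled paths.
I_hat = (f_prime(path_len) * attenuation(path_len, theta_A)).mean()

# Automatic differentiation realizes Eq. 5 without writing the derivative by hand.
I_hat.backward()
print(f"I_hat = {I_hat.item():.6e}, dI/dtheta_A = {theta_A.grad.item():.6e}")
```
The same pattern extends to any continuous factor of the integrand; discontinuous visibility terms require the reparameterization of Eqs. 6-7 before this trick applies.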
We can then backpropagate the gradients from the ray tracing results, such as the time-of-flights I^t and signal strengths I^s, to the input scene parameters Θ. The next step involves differentiating the RF component." }, { "figure_ref": [], "heading": "RF Signal", "publication_ref": [], "table_ref": [], "text": "Simulated RF Signal Generation. After obtaining the intermediate information I from ray tracing, such as the time-of-flight I^t and signal strength I^s, we can calculate the time-domain Intermediate Frequency (IF) signal to accurately simulate the mmWave signal. Note that the simulated IF signal follows the output format of a real radar and can be represented as:
$$S_{IF}(t) = \sum_{i=0}^{N} I^s_i \exp\left(2\pi j (\mu t I^t_i + f_c I^t_i)\right), \tag{8}$$
where N is the number of rays, f_c is the carrier frequency, and µ is the frequency slope, given by µ = B/T. B denotes the signal bandwidth and T represents the chirp duration. The terms I^s_i and I^t_i refer to the signal strength and time-of-flight of the i-th path, respectively, derived from the ray tracing results.
Material Properties: To robustly simulate the signal and elevate the sensing performance, we model the electromagnetic material properties based on the Fresnel reflection coefficients derived from Maxwell's equations. The complex relative permittivity ϵ_r and permeability µ_0 characterize each material.
The Fresnel reflection coefficients r_p and r_s for parallel and perpendicular polarizations depend on the incident angle δ_i, the transmission angle δ_t, the wave impedance η = \sqrt{\mu_0/\epsilon}, and the complex permittivity ϵ = ϵ_r ϵ_0 − jσ/ω:
$$r_p = \frac{\eta \cos\delta_i - \cos\delta_t}{\eta \cos\delta_i + \cos\delta_t}, \qquad r_s = \frac{\cos\delta_i - \eta \cos\delta_t}{\cos\delta_i + \eta \cos\delta_t}, \tag{9}$$
where cos δ_i and cos δ_t are computed from the incident direction i, the surface normal n, and the relative permittivity ϵ_r:
$$\cos\delta_i = -\,i \cdot n, \quad \sin\delta_i = \sqrt{1 - \cos^2\delta_i}, \quad \sqrt{\epsilon_r}\,\sin\delta_t = \sin\delta_i, \quad \cos\delta_t = \sqrt{1 - \sin^2\delta_t}. \tag{10}$$
This Fresnel model helps balance accuracy and efficiency for simulating complex RF propagation in our ray-tracing framework. The Fresnel reflection coefficients, which are differentiable with respect to the material properties (e.g., permittivity), are integrated into this framework. Every time a ray interacts with a surface, these coefficients are applied and accumulated, subsequently contributing to the signal strength I^s of the path.
RF Signal Backpropagation. To backpropagate the gradient from the final signal to the ray tracing results, differentiation of the signal generation process is imperative. Fortunately, given that the signal generation is continuous, it is amenable to direct differentiation using automatic differentiation. Nevertheless, we also present our analytical differentiation approach:
$$\frac{\partial S_{IF}(t)}{\partial I^t_i} = 2\pi j (\mu t + f_c)\, I^s_i \exp\left(2\pi j (\mu t I^t_i + f_c I^t_i)\right), \tag{11}$$
$$\frac{\partial S_{IF}(t)}{\partial I^s_i} = \exp\left(2\pi j (\mu t I^t_i + f_c I^t_i)\right). \tag{12}$$
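For concreteness, the forward synthesis of Eq. 8 can be sketched in a few lines of PyTorch; the chirp parameters and the per-path amplitudes and delays below are made-up placeholder values rather than outputs of the actual ray tracer, and keeping the tensors differentiable lets autograd reproduce the derivatives of Eqs. 11-12.
```python
import math
import torch

# Assumed FMCW chirp parameters (illustrative placeholders, not a specific radar's values).
fc = 77e9           # carrier frequency [Hz]
B, T = 4e9, 40e-6   # bandwidth [Hz] and chirp duration [s]
mu = B / T          # frequency slope

# Placeholder ray-tracing outputs: per-path signal strengths I^s and times of flight I^t.
amp = torch.tensor([1.0, 0.3, 0.1])
tof = torch.tensor([20e-9, 26e-9, 33e-9], requires_grad=True)

# IF sample times within one chirp.
t = torch.linspace(0.0, T, 256)

# Eq. 8: each path contributes a complex exponential with phase 2*pi*(mu*t*I^t + fc*I^t).
phase = 2.0 * math.pi * (mu * t[None, :] * tof[:, None] + fc * tof[:, None])
s_if = torch.complex(amp[:, None] * torch.cos(phase),
                     amp[:, None] * torch.sin(phase)).sum(dim=0)

# Because signal generation is continuous, autograd can differentiate a scalar of the
# simulated signal with respect to the ray-tracing outputs (here the times of flight).
s_if.real.sum().backward()
print(s_if.shape, tof.grad)
```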
" }, { "figure_ref": [], "heading": "End-to-End Backpropagation", "publication_ref": [], "table_ref": [], "text": "As described in Section 3.2.1 and Section 3.2.2, we achieve both forward simulation and backpropagation: from the 3D scene Θ to the path information I, and from I to the simulated radar signal S_IF(t). We can then achieve end-to-end backpropagation by calculating the partial derivative of the simulated radar signal S_IF(t) with respect to all scene parameters Θ using the chain rule or automatic differentiation:
$$\frac{\partial S_{IF}(t)}{\partial \theta} = \sum_{i=0}^{N} \frac{\partial S_{IF}(t)}{\partial I_i} \times \frac{\partial I_i}{\partial \theta}. \tag{13}$$
With the completion of the differentiable RF ray tracing, the simulator is now capable of not only accurately and efficiently simulating the radar signal based on the input scene parameters but also backpropagating the gradient to all these parameters." }, { "figure_ref": [], "heading": "Gradient-Based Optimization for 3D Scene", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Optimization Formulation", "publication_ref": [ "b13" ], "table_ref": [], "text": "Given the radar-observed sparse signals y within a real-world scene, our goal is to generate a 3D digital reconstruction of the scene, denoted as θ. An ideal reconstruction would result in simulated radar signals from an RF simulator S(·) : Θ → Y that align closely with y. Mathematically, the direct inversion, θ ← S^{-1}(y), would provide the desired reconstruction. However, due to the complexity of S(·), obtaining a closed-form solution is not feasible.
To address this challenge, we introduce an iterative optimization framework that minimizes the discrepancy between the simulated signals S(θ) and the observed/received signals y. This iterative process refines the initial scene parameters θ_0, leading to a more accurate 3D representation θ* of the real-world scene from the RF signals. The process can be expressed mathematically as:
$$\theta^* = \arg\min_{\theta \in \Theta} \ell(S(\theta), y). \tag{14}$$
To solve the optimization problem, we use Stochastic Gradient Descent (SGD). Specifically, the iterative update for our scene parameters using SGD is given by:
$$\theta_{t+1} = \theta_t - \alpha_t \nabla \ell(S(\theta_t), y), \tag{15}$$
where α_t is the learning rate at iteration t. Using SGD, we iteratively refine the 3D scene representation by sampling subsets of the RF signals, computing the discrepancy gradients, and updating the parameters until convergence. Upon convergence, DiffSBR yields a 3D reconstruction θ* that serves as a digital proxy for the real-world scene, i.e., the sensing output corresponding to the observed/received RF signals.
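The update rule of Eqs. 14-15 amounts to a standard gradient-descent loop once the simulator is differentiable. The sketch below assumes two hypothetical callables, simulate_radar (the differentiable forward simulator S) and spatial_loss (the loss ℓ defined in the next subsection); both names are placeholders for illustration, not part of a released API.
```python
import torch

def reconstruct(theta_init, observed, simulate_radar, spatial_loss,
                lr=1e-2, iters=500):
    """Iteratively refine scene parameters theta so that S(theta) matches the observation."""
    theta = theta_init.clone().detach().requires_grad_(True)
    optimizer = torch.optim.SGD([theta], lr=lr)    # Eq. 15; Adam is a common drop-in choice
    for _ in range(iters):
        optimizer.zero_grad()
        simulated = simulate_radar(theta)          # differentiable forward pass, S(theta)
        loss = spatial_loss(simulated, observed)   # discrepancy l(S(theta), y), Eq. 14
        loss.backward()                            # gradients flow through the RF ray tracer
        optimizer.step()
    return theta.detach()                          # theta*: the reconstructed 3D scene
```
In practice, the learning rate, iteration count, and mini-batching over antenna pairs or chirps would be tuned per scene.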
" }, { "figure_ref": [], "heading": "Spatial Multi-Antenna Loss for Optimization", "publication_ref": [ "b19", "b37" ], "table_ref": [], "text": "The objective of radar-based 3D scene reconstruction is to identify the optimal scene parameters, θ*, which minimize the reconstruction loss, ℓ, while adhering to the constraints dictated by RF ray tracing. In contrast to standard camera imaging, where each pixel corresponds to a single RGB value, radar systems involve antenna arrays that capture a temporal sequence of data. These sequences can consist of hundreds of thousands of values from RF signals, typically in the hundreds of megahertz range. This situation poses significant challenges since conventional losses may not be well suited to the distinct properties of mmWave sensing data, such as high frequency and low sample count. For example, minor phase shifts might lead to substantial changes in the Mean Squared Error (MSE) [43], impeding the convergence of the model. Likewise, other loss functions like the Kullback-Leibler (KL) divergence [21] face difficulties with high-frequency, low-sample data. Similarly, loss functions that rely on Fast Fourier Transform (FFT) images may not effectively capture fine details.
In the context of widely used multi-antenna MIMO radar, each transmitter-receiver antenna pair results in a unique signal, allowing the antenna array to intrinsically gather spatial information [8, 39]. This aspect is crucial but often neglected in traditional temporal loss approaches applied to single antennas, resulting in a significant loss of information. To overcome this, we have developed a novel loss function that converts the native MIMO signals into a 3D spatial representation. We then employ an MSE-based criterion for optimization, leveraging the spatial information inherent in the MIMO radar data to its fullest extent:
$$\ell(y, \bar{y}) = \frac{1}{N} \sum_{i=1}^{N} \left( T(y)_i - T(\bar{y})_i \right)^2, \tag{16}$$
where T is a signal processing algorithm that maps raw MIMO signals to one 3D spatial image. " }, { "figure_ref": [], "heading": "Experimental Evaluation", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Ambient Environment: As shown in Figure 2, we conduct experiments across 6 indoor locations (e.g., classrooms, homes, halls) and 6 outdoor locations (e.g., outdoor campus areas, football fields, and parking lots). These locations represent a variety of ambient environmental structures and multipath conditions. Sensing Objects: Our experiments involve the following object categories corresponding to the target use cases in Section 1. (1) Human (for pedestrian sensing, vulnerable road user detection, posture reconstruction, etc.): We recruit 4 male and 3 female participants, with an average age of 25 and heights ranging from 164 cm to 183 cm. (2) Cars: The shape of objects under the \"vehicle\" category varies greatly, e.g., hatchback, sedan, SUV, coupe, convertible, bus, and trucks. Therefore, we use a dynamic mesh to parameterize the vehicle 3D mesh. We define a 3D density field with a resolution of 512 sampling points per dimension, resulting in a total of 134 million parameters. To ensure computational efficiency, we train an autoencoder on ShapeNet to compress the density field into a compact latent representation.
We evaluate the performance of DiffSBR in various scenarios. For the single-object 3D reconstruction, as shown in Table 2, DiffSBR performs best on cars and trash bins, with an average SSIM of 0.92, demonstrating superior shape similarity with the ground-truth 3D meshes. Additionally, the size error rates are consistently below 5% across different objects, with a depth accuracy of 85% at a 20 cm error tolerance, as demonstrated in Figure 3." }, { "figure_ref": [ "fig_3" ], "heading": "3D Reconstruction Performance", "publication_ref": [ "b11" ], "table_ref": [ "tab_0" ], "text": "When applied to multiple and complex object scenarios, as shown in Table 1, DiffSBR achieves an average SSIM of 0.88. It again performs best on strong reflectors such as cars (average SSIM 0.93) and benches (average SSIM 0.90). In comparison with the state-of-the-art data-driven method HawkEye [13], our approach outperforms HawkEye across all object categories and significantly exceeds HawkEye's average SSIM of 0.72. This indicates not only superior accuracy but also improved generalization capabilities, achieved without the need for extensive pre-training on large datasets. Besides, the depth accuracy of the DiffSBR 3D reconstruction is depicted in Figure 4 (bottom rows). 
For humans and car scenarios, the depth accuracy reaches above 85% when the error tolerance is 5 cm. With two cars, the depth accuracy remains above 80% when the error tolerance is 10cm. Overall, DiffSBR can reach a depth accuracy of 87% with an error tolerance of 20 cm across all the test scenarios. These results underscore the effectiveness of DiffSBR in achieving accurate 3D reconstructions across diverse object categories and scenarios, outperforming existing data-driven approaches and offering robustness in handling multiple and complex objects without needing extensive prior radar training data. To verify whether DiffSBR can sense and reconstruct the objects following the physical laws correctly, we evaluate the RF simulator's capability and performance in the 3D scene compared with the ground truth. We used a total of 7 representative objects. Although DiffSBR uses a highly efficient yet simplified differentiable ray tracer, it consistently achieves an SSIM of around 0.99, in comparison to the electromagnetic field simulator. This proves that DiffSBR achieves high accuracy in its forward simulation process." }, { "figure_ref": [], "heading": "Simulation Accuracy", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b8", "b9", "b11", "b36" ], "table_ref": [], "text": "We evaluate DiffSBR in representative multipath-rich practical environments, which aligns with almost all representative RF sensing work [10,11,13,17,36,38,45]. The DiffSBR RF simulator thus only simulates one or more candidate objects, while omitting multipath reflections. Nonetheless, strong multipaths can cause a mismatch between the simulated and actual radar signals. In the extreme case when the line-of-sight (LoS) is fully blocked (i.e., NLoS), the DiffSBR performance may degrade. It is still an open challenge for RF sensing under such a scenario. To overcome this limitation, we can incorporate the ambient scenes into the 3D mesh, but this escalates DiffSBR's search space, making it intractably except in an environment with limited variability (e.g., road cross)." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We proposed DiffSBR, a pioneering mmWave sensing paradigm fusing differentiable ray tracing with gradientbased optimization for robust 3D reconstruction. Central to DiffSBR is a unique differentiable RF simulator bridging the gap between sparse radar signals and detailed 3D representations. Experiments showcase DiffSBR's precision in characterizing object geometry and material properties, even for previously unseen radar targets. DiffSBR surpasses data-driven method limitations, exhibiting generalization across objects, environments, and radar hardware. DiffSBR revolutionized radio signal utilization and catalyzed advancements in computational sensing and computer vision cross-fields." } ]
Millimeter wave (mmWave) sensing is an emerging technology with applications in 3D object characterization and environment mapping. However, realizing precise 3D reconstruction from sparse mmWave signals remains challenging. Existing methods rely on data-driven learning, constrained by dataset availability and difficulty in generalization. We propose DiffSBR, a differentiable framework for mmWave-based 3D reconstruction. DiffSBR incorporates a differentiable ray tracing engine to simulate radar point clouds from virtual 3D models. A gradient-based optimizer refines the model parameters to minimize the discrepancy between simulated and real point clouds. Experiments using various radar hardware validate DiffSBR's capability for fine-grained 3D reconstruction, even for novel objects unseen by the radar previously. By integrating physics-based simulation with gradient optimization, DiffSBR transcends the limitations of data-driven approaches and pioneers a new paradigm for mmWave sensing.
Differentiable Radio Frequency Ray Tracing for Millimeter-Wave Sensing
[ { "figure_caption": "Figure 1 .1Figure1. Optimization begins with the raw radar signal; the signal is processed into point clouds for scene initialization. We then optimize the scene to generate a similar signal. During optimization, we use our differentiable radio frequency ray tracer, which allows both forward simulation and backpropagation of gradients.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. System setup and test environments for DiffSBR.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "4. ImplementationmmWave radar platforms: We evaluate DiffSBR on 3 representative FMCW mmWave radar platforms: (1) 2D ranging radar: we employ an Infineon Position2Go module [3] as a ranging radar. Position2Go operates on 24 GHz, with 1 TX and 2 RX antennas. For each TX and RX pair, the raw in-phase and quadrature signals (I/Q) are accessible from a PC host connected to the radar. (2) 3D automotive imaging radar: we employ a TI AWR1843BOOST 76-81 GHz automotive radar which has 3 TX and 4 RX antennas. It can output I/Q signals along with point cloud data, with 80 to 200 points per frame. (3) 4D sensing radar: we also test the 62-69 GHz Vayyar VtrigB [5], which features 4D radar sensing (distance, direction, relative velocity, and vertical). VtrigB has 20 TX and 20 RX antennas, capable of producing I/Q signals and point clouds, with 1000 to 2000 points per frame, enabling simultaneous perception of multiple objects.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 3. Examples of 3D reconstruction results", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "(3) Multiple other objects (for road hazard sensing, surveillance/perimeter security, etc.): We use an additional 5 object categories, including bikes, trash bins, and benches. 3D Scene Parameterization: (1) Human: We adopt skeletal mesh to parameterize the human body, which can flexibly transform with multiple degrees of freedom around the joints. The body shapes are controlled by a binary (male/ female) along with 7 other parameters [6]. Our actual test subjects can take any of the 14 most representative human poses, but the DiffSBR can accommodate arbitrary virtual poses.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "[9] to compress the density field to 16*16*16. DiffSBR's optimizer adjusts these 4096 parameters, decompresses them back to 512*512*512, and then triangulates the density field to a mesh. (3) Bike and other objects: As bicycles, benches, trash bins, and parking ticket machines do not undergo deformation themselves, the style and geometry are relatively centralized. Moreover, each object category usually follows a similar manufacturing standard[2]. Therefore, we use the static mesh directly to parameterize these objects. The only parameter that determines the static mesh is the object type, which is used as an index to select candidates from the ShapeNet [9] model library. Ground Truth: We use Intel Realsense D455 RGB and depth camera [4] to capture RGB and depth images as the ground truth for 3D reconstruction. The RGB sensor has a resolution of 640×480, and ranges from 0.4 m to 10 m. 
The depth sensing accuracy is around 5 mm.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Simulated received raw radar signal and the point cloud for example objects. Signals from four RX antenna are illustrated, each with an I (blue) and Q (yellow) channel.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "3D Reconstruction Results on Multiple Objects.", "figure_data": "Human Bench Car & Human Two Cars Human & BikeAvg.Avg. SSIM0.8884 0.90050.86550.93700.8320.8798SD0.0012 0.00370.00170.00230.00260.0023HawkEye [13] SSIM 0.7231 0.67530.74560.67890.77650.7198road object characterization, traffic/parking violation, etc.):We use 3 types of representative cars (hatchback, sedan, andSUV).", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "3D Reconstruction Shape Results.", "figure_data": "HumanBikeCarTrash BinAvg.Avg.SSIM 0.8492 0.8358 0.91830.91450.8772SD0.0019 0.0007 0.00020.02980.0052", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Xingyu Chen; Xinyu Zhang; Qiyue Xia; Xinmin Fang; Chris Xiaoxuan Lu; Zhengxiong Li
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "awr1843boost evaluation board -ti", "year": "2023-02-03" }, { "authors": "", "journal": "", "ref_id": "b1", "title": "Bike size charts for men, women, and kids", "year": "2023-02" }, { "authors": "", "journal": "", "ref_id": "b2", "title": "Introducing the intel real sense depth camera d455", "year": "2023-01-03" }, { "authors": "", "journal": "", "ref_id": "b3", "title": "Vayyar imaging -home", "year": "2023-01-29" }, { "authors": "Brett Allen; Brian Curless; Zoran Popović", "journal": "ACM transactions on graphics (TOG)", "ref_id": "b4", "title": "The space of human body shapes: reconstruction and parameterization from range scans", "year": "2003" }, { "authors": "Roger Appleby; Rupert N Anderton", "journal": "Proceedings of the IEEE", "ref_id": "b5", "title": "Millimeter-wave and submillimeter-wave imaging for security and surveillance", "year": "2007" }, { "authors": "Muge Bekar; Chris Baker; Marina Gashinova", "journal": "IEEE", "ref_id": "b6", "title": "Enhanced angular resolution in automotive radar imagery using burgaided mimo-dbs approach", "year": "2023" }, { "authors": "X Angel; Thomas Chang; Leonidas Funkhouser; Pat Guibas; Qixing Hanrahan; Zimo Huang; Silvio Li; Manolis Savarese; Shuran Savva; Hao Song; Jianxiong Su; Li Xiao; Fisher Yi; Yu", "journal": "", "ref_id": "b7", "title": "ShapeNet: An Information-Rich 3D Model Repository", "year": "2015" }, { "authors": "Baicheng Chen; Huining Li; Zhengxiong Li; Xingyu Chen; Chenhan Xu; Wenyao Xu", "journal": "", "ref_id": "b8", "title": "Thermowave: a new paradigm of wireless passive temperature monitoring via mmwave sensing", "year": "2020" }, { "authors": "Xingyu Chen; Zhengxiong Li; Baicheng Chen; Yi Zhu; Chris Xiaoxuan Lu; Zhengyu Peng; Feng Lin; Wenyao Xu; Kui Ren; Chunming Qiao", "journal": "", "ref_id": "b9", "title": "Metawave: Attacking mmwave sensing with meta-material-enhanced tags", "year": "2023" }, { "authors": "Xingyu Chen; Xinyu Zhang", "journal": "ACM", "ref_id": "b10", "title": "Rf genesis: Zero-shot generalization of mmwave sensing through simulation-based data synthesis and generative diffusion models", "year": "2023" }, { "authors": "Junfeng Guan; Sohrab Madani; Suraj Jog; Saurabh Gupta; Haitham Hassanieh", "journal": "", "ref_id": "b11", "title": "Through fog high-resolution imaging using millimeter wave radar", "year": "2020" }, { "authors": "F Roger; Jan L Harrington; Harrington", "journal": "Oxford University Press, Inc", "ref_id": "b12", "title": "Field computation by moment methods", "year": "1996" }, { "authors": "Danping He; Bo Ai; Ke Guan; Longhe Wang; Zhangdui Zhong; Thomas Kürner", "journal": "IEEE communications surveys & tutorials", "ref_id": "b13", "title": "The design and applications of high-performance ray-tracing simulation platform for 5g and beyond wireless communications: A tutorial", "year": "2018" }, { "authors": "Ralf Hiptmair", "journal": "Acta Numerica", "ref_id": "b14", "title": "Finite elements in computational electromagnetism", "year": "2002" }, { "authors": "Wenjun Jiang; Hongfei Xue; Chenglin Miao; Shiyang Wang; Sen Lin; Chong Tian; Srinivasan Murali; Haochen Hu; Zhi Sun; Lu Su", "journal": "", "ref_id": "b15", "title": "Towards 3d human pose construction using wifi", "year": "2020" }, { "authors": "Hiroharu Kato; Deniz Beker; Mihai Morariu; Takahiro Ando; Toru Matsuoka; Wadim Kehl; Adrien Gaidon", "journal": "", "ref_id": "b16", "title": "Differentiable rendering: A survey", "year": "2020" }, { "authors": "Bernhard 
Kerbl; Georgios Kopanas; Thomas Leimkühler; George Drettakis", "journal": "ACM Transactions on Graphics", "ref_id": "b17", "title": "3d gaussian splatting for real-time radiance field rendering", "year": "2023-07" }, { "authors": "Hao Kong; Xiangyu Xu; Jiadi Yu; Qilin Chen; Chenguang Ma; Yingying Chen; Yi-Chao Chen; Linghe Kong", "journal": "", "ref_id": "b18", "title": "m3track: mmwave-based multi-user 3d posture tracking", "year": "2022" }, { "authors": "Solomon Kullback; Richard A Leibler", "journal": "The annals of mathematical statistics", "ref_id": "b19", "title": "On information and sufficiency", "year": "1951" }, { "authors": "Tzu-Mao Li; Miika Aittala; Frédo Durand; Jaakko Lehtinen", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b20", "title": "Differentiable monte carlo ray tracing through edge sampling", "year": "2018" }, { "authors": "Yadong Li; Dongheng Zhang; Jinbo Chen; Jinwei Wan; Dong Zhang; Yang Hu; Qibin Sun; Yan Chen", "journal": "IEEE Transactions on Mobile Computing", "ref_id": "b21", "title": "Towards domain-independent and real-time gesture recognition using mmwave signal", "year": "2022" }, { "authors": "Zhengxiong Li; Baicheng Chen; Xingyu Chen; Huining Li; Chenhan Xu; Feng Lin; Chris Xiaoxuan Lu; Kui Ren; Wenyao Xu", "journal": "", "ref_id": "b22", "title": "Spiralspy: Exploring a stealthy and practical covert channel to attack air-gapped computing devices via mmwave sensing", "year": "2022" }, { "authors": "Jaime Lien; Nicholas Gillian; M Emre Karagozler; Patrick Amihood; Carsten Schwesig; Erik Olson; Hakim Raja; Ivan Poupyrev", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b23", "title": "Soli: Ubiquitous gesture sensing with millimeter wave radar", "year": "2016" }, { "authors": "Matthew Loper; Naureen Mahmood; Javier Romero; Gerard Pons-Moll; Michael J Black", "journal": "Seminal Graphics Papers: Pushing the Boundaries", "ref_id": "b24", "title": "Smpl: A skinned multiperson linear model", "year": "2023" }, { "authors": "Guillaume Loubet; Nicolas Holzschuch; Wenzel Jakob", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b25", "title": "Reparameterizing discontinuous integrands for differentiable rendering", "year": "2019" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Springer", "ref_id": "b26", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Communications of the ACM", "ref_id": "b27", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2021" }, { "authors": "K Rabindra; Mishra", "journal": "International Journal of RF and Microwave Computer-Aided Engineering: Co-sponsored by the Center for Advanced Manufacturing and Packaging of Microwave, Optical, and Digital Electronics (CAMPmode) at the University of Colorado at Boulder", "ref_id": "b28", "title": "An overview of neural network methods in computational electromagnetics", "year": "2002" }, { "authors": "Vijaya Alireza H Mohammadian; William F Shankar; Hall", "journal": "Computer Physics Communications", "ref_id": "b29", "title": "Computation of electromagnetic scattering and radiation using a time-domain finite-volume discretization procedure", "year": "1991" }, { "authors": "A Jeffrey; Nanzer", "journal": "Artech House", "ref_id": "b30", "title": "Microwave and 
millimeter-wave remote sensing for security applications", "year": "2012" }, { "authors": "Phuc Nguyen; Vimal Kakaraparthi; Nam Bui; Nikshep Umamahesh; Nhat Pham; Hoang Truong; Yeswanth Guddeti; Dinesh Bharadia; Richard Han; Eric Frew", "journal": "", "ref_id": "b31", "title": "Dronescale: drone load estimation via remote passive rf sensing", "year": "2020" }, { "authors": "John Nolan; Kun Qian; Xinyu Zhang", "journal": "", "ref_id": "b32", "title": "Ros: passive smart surface for roadside-to-vehicle communication", "year": "2021" }, { "authors": "Felix Petersen; Bastian Goldluecke; Christian Borgelt; Oliver Deussen", "journal": "", "ref_id": "b33", "title": "Gendr: A generalized differentiable renderer", "year": "2022" }, { "authors": "Zhaoyuan Kun Qian; Xinyu He; Zhang", "journal": "Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies", "ref_id": "b34", "title": "3d point cloud generation with millimeter-wave radar", "year": "2020" }, { "authors": "Shilin Kun Qian; Xinyu Zhu; Li Erran Zhang; Li", "journal": "", "ref_id": "b35", "title": "Robust multimodal vehicle detection in foggy weather using complementary lidar and radar signals", "year": "2021" }, { "authors": "Zi Yili Ren; Yichao Wang; Sheng Wang; Yingying Tan; Jie Chen; Yang", "journal": "", "ref_id": "b36", "title": "3d human pose estimation using wifi signals", "year": "2021" }, { "authors": "Sven Schröder; Jens Reermann; Maurice Stephan; Dieter Kraus; Anton Kummert", "journal": "AIP Publishing", "ref_id": "b37", "title": "Experimental demonstration of the angular resolution enhancement of a monostatic mimo sonar", "year": "2021" }, { "authors": "Junjie Shen; Ningfei Wang; Ziwen Wan; Yunpeng Luo; Takami Sato; Zhisheng Hu; Xinyang Zhang; Shengjian Guo; Zhenyu Zhong; Kang Li", "journal": "", "ref_id": "b38", "title": "On the semantic ai security in autonomous driving", "year": "2022" }, { "authors": "Shigeki Sugimoto; Hayato Tateda; Hidekazu Takahashi; Masatoshi Okutomi", "journal": "IEEE", "ref_id": "b39", "title": "Obstacle detection using millimeterwave radar and its visualization on image sequence", "year": "2004" }, { "authors": "Xuyu Wang; Mohini Patil; Chao Yang; Shiwen Mao; Palak Anilkumar Patel", "journal": "IEEE", "ref_id": "b40", "title": "Deep convolutional gaussian processes for mmwave outdoor localization", "year": "2021" }, { "authors": "Zhou Wang; Alan C Bovik", "journal": "IEEE signal processing magazine", "ref_id": "b41", "title": "Mean squared error: Love it or leave it? 
a new look at signal fidelity measures", "year": "2009" }, { "authors": "Zhiqing Wei; Fengkai Zhang; Shuo Chang; Yangyang Liu; Huici Wu; Zhiyong Feng", "journal": "Sensors", "ref_id": "b42", "title": "Mmwave radar and vision fusion for object detection in autonomous driving: A review", "year": "2022" }, { "authors": "Hongfei Xue; Yan Ju; Chenglin Miao; Yijiang Wang; Shiyang Wang; Aidong Zhang; Lu Su", "journal": "", "ref_id": "b43", "title": "mmmesh: Towards 3d real-time dynamic human mesh construction using millimeter-wave", "year": "2021" }, { "authors": "Huanhuan Yang; Xiangyu Cao; Fan Yang; Jun Gao; Shenheng Xu; Maokun Li; Xibi Chen; Yi Zhao; Yuejun Zheng; Sijia Li", "journal": "Scientific reports", "ref_id": "b44", "title": "A programmable metasurface with dynamic polarization, scattering and focusing control", "year": "2016" }, { "authors": "Mingmin Zhao; Yingcheng Liu; Aniruddh Raghu; Tianhong Li; Hang Zhao; Antonio Torralba; Dina Katabi", "journal": "", "ref_id": "b45", "title": "Through-wall human mesh recovery using radio signals", "year": "2019" }, { "authors": "Yuejun Zheng; Yulong Zhou; Jun Gao; Xiangyu Cao; Huanhuan Yang; Sijia Li; Liming Xu; Junxiang Lan; Liaori Jidi", "journal": "Scientific reports", "ref_id": "b46", "title": "Ultra-wideband polarization conversion metasurface and its application cases for antenna radiation enhancement and scattering suppression", "year": "2017" } ]
[ { "formula_coordinates": [ 4, 55.97, 305.14, 230.39, 59.09 ], "formula_id": "formula_0", "formula_text": "S r (d, φ o ) = P t (φ o )F (φ o )G(φ o ) 4πd 2 PL(d) + Ω p(x) P t (x, ω i )F (ω i ) 4π|x| 2 + S r (x, ω i ) cos ω i , dω i ,(1)" }, { "formula_coordinates": [ 4, 131.41, 590.68, 154.95, 15.53 ], "formula_id": "formula_1", "formula_text": "I = P f (p, Θ) dp,(2)" }, { "formula_coordinates": [ 4, 119.69, 689.31, 166.67, 26.84 ], "formula_id": "formula_2", "formula_text": "I ≈ Î = 1 N N i=1 f (pi, Θ),(3)" }, { "formula_coordinates": [ 4, 351.31, 320.12, 193.8, 26.84 ], "formula_id": "formula_3", "formula_text": "I ≈ E = 1 N N i=1 f ′ (pi, θA) × A(pi, θA),(4)" }, { "formula_coordinates": [ 4, 320.02, 388.28, 225.09, 58.08 ], "formula_id": "formula_4", "formula_text": "∂I ∂θ A ≈ ∂E ∂θ A = 1 N N i=1 f ′ (p i , θ A ) × ∂A(p i , θ A ) ∂θ A +A(p i , θ A ) × ∂f ′ (p i , θ A ) ∂θ A .(5)" }, { "formula_coordinates": [ 4, 349.32, 583.44, 195.79, 15.5 ], "formula_id": "formula_5", "formula_text": "P f (p, θ)dp = Q f (T (q, θ), θ)| det JT |dq,(6)" }, { "formula_coordinates": [ 4, 345.45, 611.45, 199.67, 23.97 ], "formula_id": "formula_6", "formula_text": "∂I ∂θ = Q ∂f ∂θ + f ∂ ∂θ (log|detJ T |) dq,(7)" }, { "formula_coordinates": [ 5, 91.52, 274.69, 191.35, 26.84 ], "formula_id": "formula_7", "formula_text": "SIF (t) = N i=0 I s i exp(2πj(µtI t i + fcI t i )), (8" }, { "formula_coordinates": [ 5, 282.88, 284.14, 3.48, 7.77 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 5, 81.75, 521.53, 204.61, 19.74 ], "formula_id": "formula_9", "formula_text": "rp = η cos δi -cos δt η cos δi + cos δt , rs = cos δi -η cos δt cos δi + η cos δt ,(9)" }, { "formula_coordinates": [ 5, 83.41, 587.53, 202.95, 25.48 ], "formula_id": "formula_10", "formula_text": "cos δi = -i • n, sin δi = 1 -cos 2 δi, sin δt = √ ϵr sin δi, cos δt = 1 -sin 2 δt.(10)" }, { "formula_coordinates": [ 5, 322.48, 179.14, 211.43, 21.56 ], "formula_id": "formula_11", "formula_text": "∂SIF (t) ∂I t i = 2πj(µt + fc)I s i exp(2πj(µtI t i + fcI t i )), (" }, { "formula_coordinates": [ 5, 360.13, 218.24, 181.25, 21.56 ], "formula_id": "formula_12", "formula_text": "∂SIF (t) ∂I s i = exp(2πj(µtI t i + fcI t i )). (12" }, { "formula_coordinates": [ 5, 541.38, 224.33, 3.73, 7.77 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 5, 358.93, 364.66, 186.18, 26.84 ], "formula_id": "formula_14", "formula_text": "∂SIF (t) ∂θ = N i=0 ∂SIF (t) ∂Ii × ∂Ii ∂θ .(13)" }, { "formula_coordinates": [ 5, 379.63, 700.42, 161.75, 15.19 ], "formula_id": "formula_15", "formula_text": "θ * = arg min θ∈Θ ℓ(S(θ), y). (14" }, { "formula_coordinates": [ 5, 541.38, 702.98, 3.73, 7.77 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 6, 102.5, 656.71, 180.13, 26.84 ], "formula_id": "formula_17", "formula_text": "ℓ(y, ȳ) = 1 N N i=1 (T (y)i -T (ȳ)i) 2 , (16" }, { "formula_coordinates": [ 6, 282.63, 666.17, 3.73, 7.77 ], "formula_id": "formula_18", "formula_text": ")" } ]
2023-11-29
[ { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Figure 1. NeISF reconstructs highly accurate shapes and materials using polarization cues. The inter-reflection between the teapot and the book is clearly observed in our specular intensity, while PANDORA [17] is heavily affected by the textures and does not correctly reconstruct the inter-reflection because it only assumes single-bounced illumination. DoLP denotes the degree of linear polarization." }, { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b70", "b17", "b2", "b10", "b93", "b81", "b73", "b80", "b89", "b0", "b81", "b4" ], "table_ref": [], "text": "Inverse rendering aims at decomposing the target scene into parameters such as geometry, material, and lighting. It is a long-standing task for computer vision and computer graphics and has many downstream tasks, such as relighting, material editing, and novel-view synthesis. The main challenge of inverse rendering is that many combinations of the scene parameters can express the same appearance, called the ambiguity problem. Solutions to the ambiguity can be roughly divided into two categories: simplifying the scene and utilizing more information. For the first group, some studies assume a Lambertian Bidirectional Reflectance Distribution Function (BRDF) [71], a single point light source [18], or a near-planer geometry [46]. For the second group, additional information such as multiple viewpoints [33], additional illuminations [11], multi-spectral images [40], depth information [36], and polarization cues [94] has been extensively explored.\nMost of the aforementioned methods use explicit representations of scene parameters. On the other hand, Neural Radiance Fields (NeRF) [58] shows the successful use of implicit representations. Although NeRF achieves remarkable performance on novel-view synthesis, it does not decompose the scene into the parameters. Thus, many approaches try to extend the NeRF representation to solve inverse rendering, and solutions to the ambiguity problem can still be categorized into the two groups stated above. For the first group, assumptions such as a smooth roughness field [82], low roughness [74], known lighting [73,81], single light bounce [7, 8], collocated flashlight [90], or a Lambertian surface [89] are proposed to stabilize the training. However, these assumptions severely limit the scope of target scenes. For the second group, various types of cues such as depth images [1], azimuth maps [13], multiple lights [42, 52] and multi-spectral information [68] are investigated. Additionally, polarization is also examined in this group. To our knowledge, PANDORA [17] first combines the implicit representations and polarization cues for the diffuse-specular reflection separation and the geometry estimation. However, due to the entangled representation of the incident light and surface reflectance, BRDF parameters are not estimated. In addition, they assume a single light bounce and unpolarized incident light. These limitations lead to our key research question: Can polarization cues disambiguate the full NeRF-based inverse rendering?\nWe propose Neural Incident Stokes Fields (NeISF), an inverse rendering method using polarization cues and implicit representations. It takes multi-view polarized images of a static object with known object masks and camera poses but with unknown geometry, material, and lighting. 
Based on the implicit representation for the multi-bounced light [82,89], the proposed incident Stokes fields effectively extend this representation to include the polarization cues. Specifically, instead of explicitly modeling every single light bounce as shown in Fig. 2 (a), we use coordinate-based multi-layer perceptrons (MLPs) to record Stokes vectors of all the second-last bounces (Fig. 2 (c)). After that, we introduce a physically-based polarimetric renderer to compute Stokes vectors of the last bounces using a polarimetric BRDF model proposed by Baek et al. [5] (Baek pBRDF). The challenging part of extending an unpolarized incident light field to a polarized one is that we must ensure that the Stokes vectors are properly rotated to share the same reference frame with the Mueller matrices. Furthermore, the diffuse and specular components have different reference frames, which makes the problem more complicated. This is because the reference frame of diffuse Mueller matrices depends on the surface normal, while the reference frame of specular Mueller matrices depends on the microfacet normal (Fig. 2 (b)). To solve this issue, we propose to implicitly record the rotation of the second-last bounce for the diffuse and specular components separately. More specifically, given the position and direction of the incident light, we use MLPs to record the already rotated Stokes vectors of diffuse and specular components independently. Our light representation is capable of handling challenging scenes including those that have inter-reflections. In addition, the polarization cues can provide a wealth of information on geometry, material, and light, making it easier to solve inverse rendering compared to unpolarized methods. To comprehensively evaluate the proposed approach, we construct two polarimetric HDR datasets: a synthetic dataset rendered by Mitsuba 3.0 [28], and a real-world dataset captured by a polarization camera. Fig. 1 shows that our model outperforms the existing methods. Our contributions are summarized as follows:\n• This method introduces a unique representation, which implicitly models multi-bounce polarized light paths with the rotation of Stokes vectors taken into account.\n• To perfectly integrate the representation into the training pipeline, we introduce a differentiable physicallybased polarimetric renderer.\n• Our method achieves state-of-the-art performance on both synthetic and real scenarios.\n• Our real and synthetic multi-view polarimetric datasets and implementation are publicly available. the gradients to physical parameters even when the light is bounced multiple times. However, it requires a huge computational cost and memory consumption when handling a complex scene. NeRF-based inverse rendering can also be classified as the optimization-based method. Compared to the explicit representation of scene parameters, the compactness and effectiveness of neural implicit representation have been verified. Details will be introduced in the next part." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b84", "b82", "b83", "b8", "b59", "b28", "b94", "b79", "b13", "b29", "b38", "b60", "b5", "b1", "b97", "b64" ], "table_ref": [], "text": "Neural Implicit Fields NeRF [58] achieved photorealistic performance for novel-view synthesis utilizing the effectiveness of implicit neural representation. However, inverse rendering is not directly supported due to the entangled representation. 
This limitation has opened up a new research field on neural implicit fields-based inverse rendering. Some works only focus on geometry estimation. Representative works such as IDR [85], NeuS [76], VolSDF [83], and BakedSDF [84] can be classified into this group. They disentangle the geometry but use an entangled representation of the lighting and material. Attempts to complete the disentanglement have been widely studied. Early works only consider the direct lighting represented by a spherical Gaussian [7, 91], an environment map [92], or splitsum approximation [9,60]. These direct lighting-based works are not capable of handling complex effects like inter-reflection. Later, several works [29,47,78,79,93,95] that also consider indirect lighting have been reported. One simple but efficient solution is Neural Radiosity [23], which records a part of light bounces using MLPs. Inspired by them, many works [24, 82, 89] also use such kind of light representation for inverse rendering. We extend their idea by proposing the neural incident Stokes fields to model the multi-bounced polarimetric light propagation. Polarization Polarization is one of the properties of electromagnetic waves that specifies the geometrical orientation of the oscillations. An important phenomenon of po-larization is that it changes after interacting with objects, providing rich information for a variety of applications including inverse rendering. Since the release of commercial polarization cameras [80], it has become easier to capture polarized images, and polarization research has become more active. Various applications such as the estimation of shape [3, 14,21,27,30,35,39,61,72, 97], material [2,4,19,20,26], pose [16,22,98], white balance [65], reflection removal [38,43,56], segmentation [32,50,57], and sensor design [37] have been explored.\nSo far, attempts to combine NeRF and polarization mainly focused on extending the intensity fields to the polarimetric (pCON [67]) fields or Spectro-polarimetric (Ne-SpoF [34]) fields for novel-view synthesis. Namely, they do not use polarization for inverse rendering. PANDORA [17] is the first work that combines polarization cues and NeRF for inverse rendering purposes. They train coordinate-based MLPs to estimate normals, diffuse radiance, and specular radiance. After that, the estimated normals, diffuse radiance, and specular radiance are combined by a simplified renderer to generate the outgoing Stokes vectors. The main limitation of PANDORA can be considered as follows. First, it does not support the inverse rendering of BRDF parameters. Because the diffuse and specular radiance entangles the incident light, BRDF, and normals. Second, they assume an unpolarized incident light. This violates the common situation in the real world where the light has already bounced and become polarized before hitting the object. In contrast, the rendering process of our method is physically based, making it possible to fully disentangle the material, geometry, and lighting. Additionally, we do not require an unpolarized incident light assumption." }, { "figure_ref": [], "heading": "Preliminary", "publication_ref": [], "table_ref": [], "text": "We briefly introduce the mathematics used to describe the polarimetric light propagation, BRDF, and rendering equation. Please refer to the supplementary material for the detailed version." 
}, { "figure_ref": [], "heading": "Stokes-Mueller multiplication", "publication_ref": [], "table_ref": [], "text": "The polarization state of the light can be represented as a Stokes vector s ∈ R 3 . It has three elements [s 0 , s 1 , s 2 ], where s 0 is the unpolarized light intensity, s 1 is the 0 • over 90 • linear polarization, and s 2 is the 45 • over 135 • linear polarization. We do not consider the fourth dimension representing circular polarization in this paper. The lightobject interaction can be expressed by the multiplication of Stokes vectors and Mueller matrices:\ns out = M • R • s in ,(1)\nwhere M ∈ R 3×3 is the Mueller matrix representing the optical property of the interaction point, s in and s out are the incident and outgoing Stokes vectors. R ∈ R 3×3 is the rotation matrix which depends on the relative angle of the reference frames of s in and M. It must also be multiplied, as the Stokes-Mueller multiplication is only valid when they share the same reference frame." }, { "figure_ref": [], "heading": "Polarimetric BRDF", "publication_ref": [], "table_ref": [], "text": "In Baek pBRDF, the diffuse and specular components are modeled separately. The diffuse component M dif describes the process of transmitting from the outside to inside, subsurface scattering, and transmitting from the inside to outside. It can be formulated as follows:\nM dif = ( ρ π cos θ i )F T o • D • F T i .(2)\nρ ∈ R 3 is the diffuse albedo, θ i,o denotes the incident / outgoing angle, D ∈ R 3×3 is a depolarizer, and F T i,o ∈ R 3×3 is the Fresnel transmission term. The specular component describes the microfacet surface reflection:\nM spec = k s DG 4 cos θ o F R ,(3)\nwhere\nk s ∈ R 3 is the specular coefficient, D is the GGX distribution function [75],\nG is the Smith function, and F R ∈ R 3×3 is the Fresnel reflection." }, { "figure_ref": [ "fig_0" ], "heading": "Polarimetric rendering equation", "publication_ref": [], "table_ref": [], "text": "According to Eq. 1, we can obtain the polarimetric version of the Rendering Equation [31]:\ns cam = R cam • Ω M • R in • s in dω i ,(4)\nwhere s cam is the Stokes vector captured by the camera, R cam is the rotation matrix from the Mueller matrix to the camera's reference frame, rotation matrix R in rotates the incident Stokes vector s in to the reference frame of Mueller matrix M, ω i ∈ R 3 is the incident direction. Furthermore, Baek pBRDF handles the rotation matrices of diffuse and specular components in a different manner. Because the reference frame of the diffuse Mueller matrix M dif depends on the surface normal, while the reference frame of the specular Mueller matrix M spec depends on the microfacet normal (halfway vector) as shown in (Fig. 2 (b)). Thus, for the diffuse component, the Eq. 4 should be rewritten as:\ns cam dif = R cam dif • Ω M dif • R in dif • s in dω i ,(5)\nand for the specular component:\ns cam spec = Ω R cam spec • M spec • R in spec • s in dω i .(6)\nNote that for the specular component, R cam spec • should be placed into the integral, as the microfacet normal changes according to the incident direction ω i ." }, { "figure_ref": [ "fig_1" ], "heading": "Our Approach", "publication_ref": [], "table_ref": [], "text": "Our method takes multi-view polarized images, masks, and camera poses as inputs and outputs diffuse albedo, roughness, and surface normal. It supports various downstream tasks including relighting, material editing, and diffuse-specular separation. 
Details will be introduced in the following subsections, and we show an overview of our method in Fig. 3." }, { "figure_ref": [], "heading": "Assumptions and scopes", "publication_ref": [], "table_ref": [], "text": "We keep the specular coefficient k s = [1, 1, 1]. In addition, we assume a constant refractive index η = 1.5, because this is close to the refractive index of common materials such as acrylic glass (1.49), polypropylene plastic (1.49), and quartz (1.458). This work only focuses on object-level inverse rendering, and scene-level inverse rendering is beyond the scope. In addition, Baek pBRDF is only applicable to opaque and dielectric materials, which means objects that include metals, translucent, or transparent parts are not our target objects." }, { "figure_ref": [], "heading": "Signed distance fields", "publication_ref": [ "b82" ], "table_ref": [], "text": "We represent the geometry using a signed distance field net f sdf . Let {x k } N k=1 be the N samples along the ray direction:\nf sdf (x k ) = d k ,(7)\nd k is the signed distance from the nearest surface. The normal of the sampled location x k can be obtained by calculating the normalized gradient of f sdf :\n∇ x k f sdf (x k )/||∇ x k f sdf (x k )|| 2 = n k . (8\n)\nAfter obtaining all the normals of sampled points, an alphablending is required to compute the surface normal of the interaction point. The weight w k of the alpha-blending can be calculated by:\nw k = T k (1 -exp (-σ k δ k )),(9)\nwhere T k = exp (-k-1 j=1 σ j δ j ), δ is the distance between two adjacent samples. For the density σ, we follow the definition of VolSDF [83]:\nσ k = αΨ β (d k ), (10\n)\nwhere Ψ is the cumulative distribution function of the Laplace distribution, α and β are two learnable parameters. Then, we compute the alpha-blending to achieve the final surface normal: n = N k=1 w k n k ." }, { "figure_ref": [], "heading": "BRDF fields", "publication_ref": [], "table_ref": [], "text": "As the specular coefficient k s and refractive index η are assumed as constants, we only need to estimate the diffuse albedo ρ and roughness r. Thus, for each sampled location x k , we estimate:\nf alb (x k ) = ρ k ,(11)\nf rough (x k ) = r k . (12\n)\nSimilar to the surface normal, the albedo and roughness for the interaction point can also be calculated via alphablending:\nρ = N k=1 w k ρ k , r = N k=1 w k r k ." }, { "figure_ref": [], "heading": "Incident Stokes fields", "publication_ref": [ "b81" ], "table_ref": [], "text": "As NeILF [82] proposed, the complicated multi-bounced light propagation can be represented as an incident light field. Specifically, given the location and direction of all second-last bounce lights, they use MLPs to record the light intensities. Seemingly, extending the incident light field to the incident Stokes vectors is straightforward, and the only thing we need to do is to change the outputs of MLPs from the 1D light intensities to the 3D Stokes vectors. However, as shown in Eq. 5 and Eq. 6, rotation matrices must also be considered because Mueller-Stokes multiplication is only valid when they share the same reference frames. In addition, the diffuse and specular components have different behavior of rotations, which makes the problem even harder. One potential solution is to explicitly calculate rotation matrices R in dif and R in spec . However, calculating the rotation matrices requires us to know the accurate reference frame of the current surface and incident light. 
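As a concrete illustration of Eqs. (7)–(12) above, the sketch below converts signed distances sampled along a ray into VolSDF-style volume-rendering weights and alpha-blends per-sample normals into a per-ray surface normal; the same blending applies to albedo and roughness. The SDF sign convention inside the Laplace CDF and the toy numbers are assumptions.

```python
# Sketch of Eqs. (7)-(12): SDF samples -> volume-rendering weights -> alpha-blending.
import numpy as np

def laplace_cdf(x, beta):
    """CDF of a zero-mean Laplace distribution with scale beta (Psi_beta)."""
    return np.where(x <= 0, 0.5 * np.exp(x / beta), 1.0 - 0.5 * np.exp(-x / beta))

def ray_weights(sdf, deltas, alpha, beta):
    """w_k = T_k (1 - exp(-sigma_k delta_k)), with T_k = exp(-sum_{j<k} sigma_j delta_j)."""
    sigma = alpha * laplace_cdf(-sdf, beta)   # Eq. (10); minus sign assumes SDF > 0 outside
    free = sigma * deltas
    T = np.exp(-np.concatenate([[0.0], np.cumsum(free)[:-1]]))
    return T * (1.0 - np.exp(-free))          # Eq. (9)

def blend(weights, per_sample):
    """Alpha-blend per-sample quantities (normals, albedo, roughness) along a ray."""
    return (weights[:, None] * per_sample).sum(axis=0)

# Toy ray with N = 4 samples crossing the surface between samples 2 and 3.
sdf    = np.array([0.30, 0.10, -0.05, -0.20])
deltas = np.array([0.10, 0.10,  0.10,  0.10])
w = ray_weights(sdf, deltas, alpha=50.0, beta=0.05)
normals = np.array([[0, 0, 1], [0, 0, 1], [0, 0.1, 0.99], [0, 0.2, 0.98]], dtype=float)
n = blend(w, normals)
n /= np.linalg.norm(n)  # renormalize the blended surface normal
print(w, n)
```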
The former can be easily calculated using the surface normal (for diffuse reflection) or half-vector (for specular reflection). However, computing the reference frame of the incident light is time consuming as it depends on the previous bounce, and explicitly simulating the previous bounce requires even more \nf i (x, ω i ) = s r spec [0] = s r dif [0],(13)\nwhere [n] denotes the n th element of the vector. x is the ray-surface interaction point calculated using ray-marching. The second one is an incident specular Stokes network:\nf spec (x, ω i ) = s r spec [1, 2],(14)\nand the third one is an incident diffuse Stokes network:\nf dif (x, ω i ) = s r dif [1].(15)\nNote that we do not estimate s r dif [2], as it will be canceled out in the polarimetric rendering. Please refer to the supplementary material for details." }, { "figure_ref": [], "heading": "Sphere sampling", "publication_ref": [ "b81", "b89" ], "table_ref": [], "text": "Following NeILF [82], we solve the integral of the Rendering Equation using a fixed Fibonacci sphere sampling. So that we can rewrite Eq. 5 as follows: .0712\n.0720 ---Table 1. Results on synthetic dataset. Metrics are computed on 10 test images. The surface normal is evaluated by mean angular error (MAE), and intensity images are evaluated with a peak signal-to-noise ratio (PSNR). Due to the inherent ambiguity of albedo and roughness, we use a scale-invariant L1 error (SI-L1) following IRON [90]. \"Mixed\" represents the combination of \"Specular\" and \"Diffuse\".\ns cam dif = 2π |S L | R cam dif • S L M dif • s r dif ,(16)\nwhere s cam dif is the outgoing Stokes vectors of the diffuse component, S L is the set of the sampled incident light over the hemisphere, R cam dif is the rotation matrix computed using the estimated surface normal, M dif is the estimated Mueller matrix of the diffuse component, and s r dif is the incident diffuse Stokes vectors. Similarly, we can also rewrite Eq. 6 for the specular component:\ns cam spec = 2π |S L | S L R cam spec • M spec • s r spec .(17)\nThe final output can be obtained by:\ns cam = s cam dif + s cam spec .(18)" }, { "figure_ref": [], "heading": "Training scheme", "publication_ref": [ "b82", "b82" ], "table_ref": [], "text": "We use a three-stage training scheme. The first stage initializes the geometry. Specifically, we train VolSDF [83] to learn a signed distance field f sdf . The second stage initializes the material and lighting. And, this stage does not update the signed distance field f sdf . The other neural fields are optimized with the L 1 loss on the estimated Stokes vectors s cam and their ground truth ŝcam . In the third stage, we jointly optimize all the neural fields. In addition to the L 1 loss, we also compute an Eikonal loss L Eik [83] to regularize the signed distance field f sdf ." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [], "table_ref": [], "text": "We introduce one synthetic dataset and one real-world dataset for the model evaluation. Although PANDORA [17] also proposes a synthetic polarimetric dataset, the scene setup is simpler than common real-world scenarios. Specifically, the object is illuminated by an unpolarized environment map such that almost all incident light is unpolarized. 
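The discretized rendering of Eqs. (16)–(18) above can be sketched as follows: a fixed Fibonacci sample set over the hemisphere indexes the incident Stokes queries, the diffuse rotation is applied once outside the sum, and the specular rotation stays inside it. The random Mueller matrices and Stokes vectors below are stand-ins for the pBRDF terms and the MLP outputs; they are assumptions for illustration only.

```python
# Sketch of the fixed Fibonacci sampling and the discretized sums of Eqs. (16)-(18).
import numpy as np

def fibonacci_hemisphere(n: int) -> np.ndarray:
    """n roughly uniform directions on the upper hemisphere (local frame, normal = +z)."""
    i = np.arange(n) + 0.5
    cos_t = 1.0 - i / n                      # cos(theta) in (0, 1)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    phi = np.pi * (1.0 + np.sqrt(5.0)) * i   # golden-angle increments
    return np.stack([np.cos(phi) * sin_t, np.sin(phi) * sin_t, cos_t], axis=-1)

def render_stokes(M_dif, M_spec, R_cam_dif, R_cam_spec, s_dif, s_spec):
    """Eqs. (16)-(18): sum pBRDF Mueller matrices times incident Stokes over the sample set."""
    n = M_dif.shape[0]
    scale = 2.0 * np.pi / n
    s_cam_dif  = scale * R_cam_dif @ np.einsum('nij,nj->i', M_dif, s_dif)     # Eq. (16)
    s_cam_spec = scale * np.einsum('nij,nj->i', R_cam_spec @ M_spec, s_spec)  # Eq. (17)
    return s_cam_dif + s_cam_spec                                             # Eq. (18)

n = 128
dirs = fibonacci_hemisphere(n)  # each direction would be a query to the incident Stokes MLPs
rng = np.random.default_rng(0)
M_dif,  M_spec = rng.normal(size=(n, 3, 3)), rng.normal(size=(n, 3, 3))
R_cam_dif  = np.eye(3)                          # one rotation per pixel (diffuse)
R_cam_spec = np.repeat(np.eye(3)[None], n, 0)   # one rotation per sample (specular)
s_dif, s_spec = rng.normal(size=(n, 3)), rng.normal(size=(n, 3))
print(render_stokes(M_dif, M_spec, R_cam_dif, R_cam_spec, s_dif, s_spec))
```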
To solve this problem, we place the object inside an altered \"Cornell Box\" to mimic real-world situations, where the light is bounced multiple times and becomes polarized before interacting with the object. For each object, we render 110 HDR polarized images using Mitsuba 3.0 [28] with Baek pBRDF. Among them, 100 images are used for training and 10 are used for testing. We also capture a realworld HDR dataset, as most existing polarimetric datasets are LDR, which may affect the training due to saturation and unknown gamma correction. We capture the polarized images using a polarization camera (FLIR BFS-U3-51S5PC-C). For each viewpoint, we capture images with different exposure times and composite them to obtain one HDR image. We selected three real-world objects, and for each object, we captured 96 views for training and 5 views for evaluation. We recommend our readers check the supplementary documents for details." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b28", "b59", "b81", "b11", "b11", "b82", "b81" ], "table_ref": [], "text": "Looking for competitors for our proposal is not easy. Most of the NeRF-based inverse rendering works [7,15,25,29,53,60,79,82,89,93] use Disney BRDF [12] model for rendering. Although they also estimate parameters such as roughness and albedo, these parameters have different physical meanings from ours, as Baek pBRDF is not based on Disney BRDF [12]. Nevertheless, the estimated surface normal as well as the reconstructed intensity images can be compared. Thus, we chose VolSDF [83] VolSDF does not support HDR images as inputs, we use our own implementation. Besides, PANDORA [17] is also considered as a baseline method. Although they do not support estimating BRDF parameters, the surface normal, diffusespecular separation, and reconstructed polarized images are\ncomparable. An important ablation study should be the performance with or without the presence of polarization cues.\nTo achieve this, we introduce an unpolarized version of Ne-ISF. Specifically, we remove f dif and f spec and only keep f i .\nIn addition, we also implement an unpolarized version of Beak pBRDF for rendering. Finally, the loss is only computed on the intensity space. The other parts are exactly the same as our model. We denote this model as Ours-no-pol, and it can also be considered as a variant of NeILF [82]. Details of this unpolarized BRDF can be found in the supplementary material." }, { "figure_ref": [ "fig_4", "fig_6", "fig_7" ], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "Synthetic Dataset We report the quantitative results of the surface normal, intensity, diffuse-specular separation, roughness, and albedo in Tab. 1. VolSDF does not support diffuse-specular separation. Although NeILF++ supports diffuse-specular separation, the diffuse and specular images differ from our physical meanings. Thus, we do not report diffuse-specular separation for these two methods. For the qualitative comparison, we show the surface normal results in Fig. 4, the diffuse-specular separation, and DoLP results in Fig. 5, and the roughness results in Fig. 6. Real Dataset We show the surface normal results in Fig. 7, the diffuse-specular separation, and DoLP results in Fig. 8, and the material editing and relighting results in Fig. 9. Detailed analysis can be found in the figure captions." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Several limitations still exist. 
First, the implicit Stokes representation is a double-edged sword. It allows us to model complicated polarimetric light transport, but the estimated lighting cannot be used in a conventional renderer. Second, the current solution only considers opaque dielectric objects, although polarization cues can also provide rich information for translucent or transparent objects. Third, the captured real-world data is noisy, making it impossible to handle high-frequency signals such as small bumps or edges." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We have proposed NeISF, an inverse rendering pipeline that combines implicit scene representations and polarization cues. It relies on the following novelties. The first is an implicit representation of the multi-bounced Stokes vectors that takes care of the reference-frame rotations. The second is a physically-based polarimetric renderer. With these two novelties, NeISF outperforms the existing inverse rendering models on both synthetic and real-world datasets. The ablation study has verified the contribution of polarization cues. However, several limitations mentioned in Sec. 6 still exist and are worth further exploration." } ]
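As a rough sketch of the objective used in the training scheme described above (an L1 loss on the rendered Stokes vectors plus an Eikonal regularizer on the signed distance field), one could write the following; the loss weight, the tiny SDF MLP, and the random tensors are illustrative assumptions.

```python
# Hedged sketch of the third-stage objective: L1 Stokes loss + Eikonal regularizer.
import torch

def stokes_l1(pred_stokes: torch.Tensor, gt_stokes: torch.Tensor) -> torch.Tensor:
    """L1 loss between rendered and captured Stokes vectors, shape (batch, 3)."""
    return (pred_stokes - gt_stokes).abs().mean()

def eikonal_loss(sdf_net: torch.nn.Module, points: torch.Tensor) -> torch.Tensor:
    """Encourage ||grad f_sdf(x)|| = 1 at sampled points, shape (batch, 3)."""
    points = points.requires_grad_(True)
    d = sdf_net(points)
    (grad,) = torch.autograd.grad(d.sum(), points, create_graph=True)
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()

sdf_net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Softplus(), torch.nn.Linear(64, 1))
pts = torch.rand(1024, 3) * 2 - 1
pred, gt = torch.rand(1024, 3), torch.rand(1024, 3)
loss = stokes_l1(pred, gt) + 0.1 * eikonal_loss(sdf_net, pts)  # 0.1 is an assumed weight
loss.backward()
```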
Multi-view inverse rendering is the problem of estimating scene parameters such as shape, material, or illumination from a sequence of images captured under different viewpoints. Many approaches, however, assume a single light bounce and thus fail to recover challenging scenarios like inter-reflections. On the other hand, simply extending those methods to consider multi-bounced light requires more assumptions to alleviate the ambiguity. To address this problem, we propose Neural Incident Stokes Fields (NeISF), a multi-view inverse rendering framework that reduces ambiguities using polarization cues. The primary motivation for using polarization cues is that polarization accumulates over multi-bounced light, providing rich information about geometry and material. Based on this knowledge, the proposed incident Stokes field efficiently models the accumulated polarization effect with the aid of an original physically-based differentiable polarimetric renderer. Experimental results show that our method outperforms the existing works in synthetic and real scenarios.
NeISF: Neural Incident Stokes Field for Geometry and Material Estimation
[ { "figure_caption": "Figure 2 .2Figure 2. Concept of our incident Stokes fields. The orange paths are explicitly computed, while the blue paths are implicitly represented. (a) In the traditional path tracer, the incident Stokes vectors are computed by the recursive multiplication of Stokes vectors, rotation Mueller matrices, and pBRDF Mueller matrices. (b) The diffuse and specular pBRDF matrices have different reference frames. Thus, the rotation matrices should be treated separately. (c) Given the positions of the interaction points and the directions of the incident light, we use MLPs to implicitly record the already-rotated incident Stokes vectors of diffuse and specular components, separately.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Overview of NeISF. For each interaction point, we use MLPs to implicitly record surface normal n (Sec. 4.2), diffuse albedo ρ, roughness r (Sec. 4.3), and already-rotated incident Stokes vectors of diffuse s r dif and specular s r spec components (Sec. 4.4). A physicallybased polarimetric renderer is adopted to render the outgoing Stokes vectors s cam (Sec. 4.5).", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "computational resources. Here we have an interesting observation: No matter what the reference frame of the incident light is, what we care about is the value of the incident Stokes vectors after the rotation. Thus, we propose a simple but efficient solution: modeling the rotation matrices implicitly. Specifically, instead of recording s in , we directly record the already-rotated Stokes vectors R in dif • s in and R in spec • s in using MLPs. For simplicity, we use s r dif and s r spec to denote the already-rotated incident Stokes vectors of diffuse and specular component separately. Because the first elements (unpolarized light intensity) of s r dif and s r spec are the same, in practice, we use three MLPs to model the incident Stokes vectors. The first one is an incident intensity network:", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .Figure 6 .56Figure 5. Diffuse-specular separation and DoLP images of synthetic dataset. We can observe the reflection of green and red walls on the teapot for our method, where PANDORA [17] fails.", "figure_data": "", "figure_id": "fig_3", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Qualitative comparison on real dataset. NeILF++ [89], PANDORA [17], and VolSDF [83] misinterpret materials as geometries.For example the flower pattern on the teapot and the text on the surface of the book. However, this is not correct because these patterns come from the albedo. On the other hand, our method can reconstruct a clean surface normal.", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Diffuse-specular separation and DoLP images of real dataset. In the specular image, we can clearly observe the reflection on the surface of the book, but PANDORA [17] fails to reconstruct such kinds of results due to the single-bounce assumption. In addition, our DoLP is visually similar to the GT.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. Relighting and material editing results. 
We edit the material of the teapot. Due to the accurate disentanglement of geometry and material, the edited image has realistic reflections.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "model often fails to give reasonable results for the real-world scene. Optimization-based methods [6,54,55,59,64,66,86], also known as analysis by synthesis, are the other direction to solve inverse rendering. The recent breakthrough of optimization-based methods is dominated by the differentiable rendering[44, 62, 87, 88]. Differentiable renderers like Mitsuba [63] are able to backpropagate", "figure_data": "Inverse Rendering We roughly divide the existing inverserendering works into two groups, which are learning-basedand optimization-based methods. Most of the learning-based inverse rendering works [10, 41, 45, 48, 49, 51, 69,70, 77, 96] are single-view approaches. They mainly relyon large-scale synthetic training datasets because acquiringthe ground truth material, geometry, and lighting parame-ters is labor-intensive and time-consuming. A well-knownproblem of using synthetic training data is the domain gap,where the trained", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" } ]
Chenhao Li; Taishi Ono; Takeshi Uemori; Hajime Mihara; Alexander Gatto; Hajime Nagahara; Yusuke Moriuchi
[ { "authors": "Benjamin Attal; Eliot Laidlaw; Aaron Gokaslan; Changil Kim; Christian Richardt; James Tompkin; Matthew O' Toole", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b0", "title": "Törf: Time-of-flight radiance fields for dynamic scene view synthesis", "year": "2021" }, { "authors": "Dejan Azinović; Olivier Maury; Christophe Hery; Matthias Nießner; Justus Thies", "journal": "", "ref_id": "b1", "title": "High-res facial appearance capture from polarized smartphone images", "year": "2023" }, { "authors": "Yunhao Ba; Alex Gilbert; Franklin Wang; Jinfa Yang; Rui Chen; Yiqin Wang; Lei Yan; Boxin Shi; Achuta Kadambi", "journal": "Springer", "ref_id": "b2", "title": "Deep shape from polarization", "year": "2020" }, { "authors": "Seung-Hwan Baek; Felix Heide", "journal": "", "ref_id": "b3", "title": "All-photon polarimetric time-of-flight imaging", "year": "2022" }, { "authors": "Seung-Hwan Baek; Xin Daniel S Jeon; Min H Tong; Kim", "journal": "ACM Trans. Graph", "ref_id": "b4", "title": "Simultaneous acquisition of polarimetric svbrdf and normals", "year": "2018" }, { "authors": "Jonathan T Barron; Jitendra Malik", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b5", "title": "Shape, illumination, and reflectance from shading", "year": "2014" }, { "authors": "Mark Boss; Raphael Braun; Varun Jampani; Jonathan T Barron; Ce Liu; Hendrik P A Lensch", "journal": "", "ref_id": "b6", "title": "Nerd: Neural reflectance decomposition from image collections", "year": "2021" }, { "authors": "Mark Boss; Andreas Engelhardt; Abhishek Kar; Yuanzhen Li; Deqing Sun; Jonathan T Barron; Hendrik P A Lensch; Varun Jampani", "journal": "", "ref_id": "b7", "title": "SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections", "year": "2022" }, { "authors": "Mark Boss; Varun Jampani; Raphael Braun; Ce Liu; Jonathan T Barron; Hendrik P A Lensch", "journal": "", "ref_id": "b8", "title": "Neuralpil: Neural pre-integrated lighting for reflectance decomposition", "year": "2021" }, { "authors": "Mark Boss; Varun Jampani; Kihwan Kim; Hendrik Lensch; Jan Kautz", "journal": "", "ref_id": "b9", "title": "Two-shot spatially-varying brdf and shape estimation", "year": "2020" }, { "authors": "Mark Boss; Varun Jampani; Kihwan Kim; Hendrik P A Lensch; Jan Kautz", "journal": "", "ref_id": "b10", "title": "Two-shot spatially-varying brdf and shape estimation", "year": "2020" }, { "authors": "Brent Burley; Walt Disney; Animation Studios", "journal": "", "ref_id": "b11", "title": "Physically-based shading at disney", "year": "2012" }, { "authors": "Hiroaki Xu Cao; Fumio Santo; Yasuyuki Okura; Matsushita", "journal": "", "ref_id": "b12", "title": "Multi-view azimuth stereo via tangent space consistency", "year": "2023" }, { "authors": "Guangcheng Chen; Li He; Yisheng Guan; Hong Zhang", "journal": "Springer", "ref_id": "b13", "title": "Perspective phase angle model for polarimetric 3d reconstruction", "year": "2022" }, { "authors": "Ziang Cheng; Junxuan Li; Hongdong Li", "journal": "", "ref_id": "b14", "title": "Wildlight: Inthe-wild inverse rendering with a flashlight", "year": "2023" }, { "authors": "Zhaopeng Cui; Marc Viktor Larsson; Pollefeys", "journal": "", "ref_id": "b15", "title": "Polarimetric relative pose estimation", "year": "2019" }, { "authors": "Akshat Dave; Yongyi Zhao; Ashok Veeraraghavan", "journal": "Springer", "ref_id": "b16", "title": "Pandora: Polarization-aided neural decomposition of radiance", "year": "2022" }, 
{ "authors": "Miika Valentin Deschaintre; Fredo Aittala; George Durand; Adrien Drettakis; Bousseau", "journal": "ACM Transactions on Graphics (ToG)", "ref_id": "b17", "title": "Single-image svbrdf capture with a rendering-aware deep network", "year": "2018" }, { "authors": "Yiming Valentin Deschaintre; Abhijeet Lin; Ghosh", "journal": "", "ref_id": "b18", "title": "Deep polarization imaging for 3d shape and svbrdf acquisition", "year": "2021" }, { "authors": "Jin Duan; Youfei Hao; Ju Liu; Cai Cheng; Qiang Fu; Huilin Jiang", "journal": "Optics Express", "ref_id": "b19", "title": "End-to-end neural network for pbrdf estimation of object to reconstruct polarimetric reflectance", "year": "2023" }, { "authors": "Yoshiki Fukao; Ryo Kawahara; Shohei Nobuhara; Ko Nishino", "journal": "", "ref_id": "b20", "title": "Polarimetric normal stereo", "year": "2021" }, { "authors": "Daoyi Gao; Yitong Li; Patrick Ruhkamp; Iuliia Skobleva; Magdalena Wysocki; Hyunjun Jung; Pengyuan Wang; Arturo Guridi; Benjamin Busam", "journal": "Springer", "ref_id": "b21", "title": "Polarimetric pose prediction", "year": "2022" }, { "authors": "Saeed Hadadan; Shuhong Chen; Matthias Zwicker", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b22", "title": "Neural radiosity", "year": "2021" }, { "authors": "Saeed Hadadan; Geng Lin; Jan Novák; Fabrice Rousselle; Matthias Zwicker", "journal": "", "ref_id": "b23", "title": "Inverse global illumination using a neural radiometric prior", "year": "2023" }, { "authors": "Jon Hasselgren; Nikolai Hofmann; Jacob Munkberg", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b24", "title": "Shape, light, and material decomposition from images using monte carlo rendering and denoising", "year": "2022" }, { "authors": "Inseung Hwang; Adolfo Daniel S Jeon; Diego Munoz; Xin Gutierrez; Min H Tong; Kim", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b25", "title": "Sparse ellipsometry: portable acquisition of polarimetric svbrdf and shape with unstructured flash photography", "year": "2022" }, { "authors": "Tomoki Ichikawa; Matthew Purri; Ryo Kawahara; Shohei Nobuhara; Kristin Dana; Ko Nishino", "journal": "", "ref_id": "b26", "title": "Shape from sky: Polarimetric normal recovery under the sky", "year": "2021" }, { "authors": "Jakob Wenzel; Sébastien Speierer; Nicolas Roussel; Delio Vicini; Dr", "journal": "Transactions on Graphics (Proceedings of SIG-GRAPH)", "ref_id": "b27", "title": "jit: A just-in-time compiler for differentiable rendering", "year": "2022-07" }, { "authors": "Haian Jin; Isabella Liu; Peijia Xu; Xiaoshuai Zhang; Songfang Han; Sai Bi; Xiaowei Zhou; Zexiang Xu; Hao Su", "journal": "", "ref_id": "b28", "title": "Tensoir: Tensorial inverse rendering", "year": "2023" }, { "authors": "Achuta Kadambi; Vage Taamazyan; Boxin Shi; Ramesh Raskar", "journal": "", "ref_id": "b29", "title": "Polarized 3d: High-quality depth sensing with polarization cues", "year": "2015" }, { "authors": "T James; Kajiya", "journal": "", "ref_id": "b30", "title": "The rendering equation", "year": "1986" }, { "authors": "Agastya Kalra; Vage Taamazyan; Krishna Supreeth; Kartik Rao; Ramesh Venkataraman; Achuta Raskar; Kadambi", "journal": "", "ref_id": "b31", "title": "Deep polarization cues for transparent object segmentation", "year": "2020" }, { "authors": "Kichang Kim; Akihiko Torii; Masatoshi Okutomi", "journal": "Springer", "ref_id": "b32", "title": "Multi-view inverse rendering under arbitrary illumination and albedo", "year": "2016" }, { "authors": 
"Youngchan Kim; Wonjoon Jin; Sunghyun Cho; Seung-Hwan Baek", "journal": "", "ref_id": "b33", "title": "Neural spectro-polarimetric fields", "year": "2023" }, { "authors": "Yuhi Kondo; Taishi Ono; Legong Sun; Yasutaka Hirasawa; Jun Murayama", "journal": "Springer", "ref_id": "b34", "title": "Accurate polarimetric brdf for real polarization scene rendering", "year": "2020" }, { "authors": "Jin Hyun; Hyunho Ku; Joo Hat; Dahyun Ho Lee; James Kang; Min H Tompkin; Kim", "journal": "IEEE", "ref_id": "b35", "title": "Differentiable appearance acquisition from a flash/no-flash rgb-d pair", "year": "2022" }, { "authors": "Teppei Kurita; Yuhi Kondo; Legong Sun; Yusuke Moriuchi", "journal": "", "ref_id": "b36", "title": "Simultaneous acquisition of high quality rgb image and polarization information using a sparse polarization sensor", "year": "2023" }, { "authors": "Chenyang Lei; Xuhua Huang; Mengdi Zhang; Qiong Yan; Wenxiu Sun; Qifeng Chen", "journal": "", "ref_id": "b37", "title": "Polarized reflection removal with perfect alignment in the wild", "year": "2020" }, { "authors": "Chenyang Lei; Chenyang Qi; Jiaxin Xie; Na Fan; Qifeng Vladlen Koltun; Chen", "journal": "", "ref_id": "b38", "title": "Shape from polarization for complex scenes in the wild", "year": "2022" }, { "authors": "Chunyu Li; Yusuke Monno; Masatoshi Okutomi", "journal": "IEEE", "ref_id": "b39", "title": "Spectral mvir: Joint reconstruction of 3d shape and spectral reflectance", "year": "2021" }, { "authors": "Chenhao Li; Trung ; Thanh Ngo; Hajime Nagahara", "journal": "", "ref_id": "b40", "title": "Inverse rendering of translucent objects using physical and neural renderers", "year": "2023" }, { "authors": "Junxuan Li; Hongdong Li", "journal": "", "ref_id": "b41", "title": "Neural reflectance for shape recovery with shadow handling", "year": "2022" }, { "authors": "Rui Li; Simeng Qiu; Guangming Zang; Wolfgang Heidrich", "journal": "Springer", "ref_id": "b42", "title": "Reflection separation via multi-bounce polarization state tracing", "year": "2020" }, { "authors": "Tzu-Mao Li; Miika Aittala; Frédo Durand; Jaakko Lehtinen", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b43", "title": "Differentiable monte carlo ray tracing through edge sampling", "year": "2018" }, { "authors": "Zhengqin Li; Mohammad Shafiei; Ravi Ramamoorthi; Kalyan Sunkavalli; Manmohan Chandraker", "journal": "", "ref_id": "b44", "title": "Inverse rendering for complex indoor scenes: Shape, spatially-varying lighting and svbrdf from a single image", "year": "2020" }, { "authors": "Zhengqin Li; Kalyan Sunkavalli; Manmohan Chandraker", "journal": "", "ref_id": "b45", "title": "Materials for masses: Svbrdf acquisition with a single mobile phone image", "year": "2018" }, { "authors": "Zhen Li; Lingli Wang; Mofang Cheng; Cihui Pan; Jiaqi Yang", "journal": "", "ref_id": "b46", "title": "Multi-view inverse rendering for large-scale realworld indoor scenes", "year": "2023" }, { "authors": "Zhen Li; Lingli Wang; Xiang Huang; Cihui Pan; Jiaqi Yang", "journal": "", "ref_id": "b47", "title": "Phyir: Physics-based inverse rendering for panoramic indoor images", "year": "2022" }, { "authors": "Zhengqin Li; Zexiang Xu; Ravi Ramamoorthi; Kalyan Sunkavalli; Manmohan Chandraker", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b48", "title": "Learning to reconstruct shape and spatially-varying reflectance from a single image", "year": "2018" }, { "authors": "Yupeng Liang; Ryosuke Wakaki; Shohei Nobuhara; Ko Nishino", "journal": "", "ref_id": "b49", 
"title": "Multimodal material segmentation", "year": "2022" }, { "authors": "Daniel Lichy; Jiaye Wu; Soumyadip Sengupta; David W Jacobs", "journal": "", "ref_id": "b50", "title": "Shape and material capture at home", "year": "2021" }, { "authors": "Jingwang Ling; Zhibo Wang; Feng Xu", "journal": "", "ref_id": "b51", "title": "Shadowneus: Neural sdf reconstruction by shadow ray supervision", "year": "2023" }, { "authors": "Yuan Liu; Peng Wang; Cheng Lin; Xiaoxiao Long; Jiepeng Wang; Lingjie Liu; Taku Komura; Wenping Wang", "journal": "", "ref_id": "b52", "title": "Nero: Neural geometry and brdf reconstruction of reflective objects from multiview images", "year": "2023" }, { "authors": "Stephen Lombardi; Ko Nishino", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b53", "title": "Reflectance and illumination recovery in the wild", "year": "2015" }, { "authors": "Stephen Lombardi; Ko Nishino", "journal": "IEEE", "ref_id": "b54", "title": "Radiometric scene decomposition: Scene reflectance, illumination, and geometry from rgb-d images", "year": "2016" }, { "authors": "Youwei Lyu; Zhaopeng Cui; Si Li; Marc Pollefeys; Boxin Shi", "journal": "Advances in neural information processing systems", "ref_id": "b55", "title": "Reflection separation using a pair of unpolarized and polarized images", "year": "2019" }, { "authors": "Haiyang Mei; Bo Dong; Wen Dong; Jiaxi Yang; Seung-Hwan Baek; Felix Heide; Pieter Peers; Xiaopeng Wei; Xin Yang", "journal": "", "ref_id": "b56", "title": "Glass segmentation using intensity and spectral polarization cues", "year": "2022" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Springer", "ref_id": "b57", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Yasuhiro Mukaigawa; Yasushi Yagi; Ramesh Raskar", "journal": "IEEE", "ref_id": "b58", "title": "Analysis of light transport in scattering media", "year": "2010" }, { "authors": "Jacob Munkberg; Jon Hasselgren; Tianchang Shen; Jun Gao; Wenzheng Chen; Alex Evans; Thomas Müller; Sanja Fidler", "journal": "", "ref_id": "b59", "title": "Extracting triangular 3d models, materials, and lighting from images", "year": "2022" }, { "authors": "Trung Ngo; Thanh ; Hajime Nagahara; Rin-Ichiro Taniguchi", "journal": "", "ref_id": "b60", "title": "Shape and light directions from shading and polarization", "year": "2015" }, { "authors": "Baptiste Nicolet; Fabrice Rousselle; Jan Novak; Alexander Keller; Jakob Wenzel; Thomas Müller", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b61", "title": "Recursive control variates for inverse rendering", "year": "2023" }, { "authors": "Merlin Nimier-David; Delio Vicini; Tizian Zeltner; Wenzel Jakob", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b62", "title": "Mitsuba 2: A retargetable forward and inverse renderer", "year": "2019" }, { "authors": "Ko Nishino", "journal": "IEEE", "ref_id": "b63", "title": "Directional statistics brdf model", "year": "2009" }, { "authors": "Taishi Ono; Yuhi Kondo; Legong Sun; Teppei Kurita; Yusuke Moriuchi", "journal": "", "ref_id": "b64", "title": "Degree-of-linear-polarization-based color constancy", "year": "2022" }, { "authors": "Geoffrey Oxholm; Ko Nishino", "journal": "", "ref_id": "b65", "title": "Multiview shape and reflectance from natural illumination", "year": "2014" }, { "authors": "Henry Peters; Yunhao Ba; Achuta Kadambi", "journal": "", 
"ref_id": "b66", "title": "pcon: Polarimetric coordinate networks for neural scene representations", "year": "2023" }, { "authors": "Matteo Poggi; Zama Pierluigi; Fabio Ramirez; Samuele Tosi; Luigi Di Salti; Stefano Stefano; Mattoccia", "journal": "", "ref_id": "b67", "title": "Crossspectral neural radiance fields", "year": "2022" }, { "authors": "Shen Sang; Manmohan Chandraker", "journal": "Springer", "ref_id": "b68", "title": "Single-shot neural relighting and svbrdf estimation", "year": "2020" }, { "authors": "Soumyadip Sengupta; Jinwei Gu; Kihwan Kim; Guilin Liu; David W Jacobs; Jan Kautz", "journal": "", "ref_id": "b69", "title": "Neural inverse rendering of an indoor scene from a single image", "year": "2019" }, { "authors": "Soumyadip Sengupta; Angjoo Kanazawa; Carlos D Castillo; David W Jacobs", "journal": "", "ref_id": "b70", "title": "Sfsnet: Learning shape, reflectance and illuminance of facesin the wild", "year": "2018" }, { "authors": "Mingqi Shao; Chongkun Xia; Zhendong Yang; Junnan Huang; Xueqian Wang", "journal": "", "ref_id": "b71", "title": "Transparent shape from a single view polarization image", "year": "2022" }, { "authors": "Boyang Pratul P Srinivasan; Xiuming Deng; Matthew Zhang; Ben Tancik; Jonathan T Mildenhall; Barron", "journal": "", "ref_id": "b72", "title": "Nerv: Neural reflectance and visibility fields for relighting and view synthesis", "year": "2021" }, { "authors": "Kushagra Tiwary; Akshat Dave; Nikhil Behari; Tzofi Klinghoffer; Ashok Veeraraghavan; Ramesh Raskar", "journal": "", "ref_id": "b73", "title": "Orca: Glossy objects as radiance-field cameras", "year": "2023" }, { "authors": "Bruce Walter; Hongsong Stephen R Marschner; Kenneth E Li; Torrance", "journal": "", "ref_id": "b74", "title": "Microfacet models for refraction through rough surfaces", "year": "2007" }, { "authors": "Peng Wang; Lingjie Liu; Yuan Liu; Christian Theobalt; Taku Komura; Wenping Wang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b75", "title": "Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction", "year": "2021" }, { "authors": "Zian Wang; Jonah Philion; Sanja Fidler; Jan Kautz", "journal": "", "ref_id": "b76", "title": "Learning indoor inverse rendering with 3d spatially-varying lighting", "year": "2021-10" }, { "authors": "Zian Wang; Tianchang Shen; Jun Gao; Shengyu Huang; Jacob Munkberg; Jon Hasselgren; Zan Gojcic; Wenzheng Chen; Sanja Fidler", "journal": "", "ref_id": "b77", "title": "Neural fields meet explicit geometric representations for inverse rendering of urban scenes", "year": "2023" }, { "authors": "Haoqian Wu; Zhipeng Hu; Lincheng Li; Yongqiang Zhang; Changjie Fan; Xin Yu", "journal": "", "ref_id": "b78", "title": "Nefii: Inverse rendering for reflectance decomposition with near-field indirect illumination", "year": "2023" }, { "authors": "Tomohiro Yamazaki; Yasushi Maruyama; Yusuke Uesaka; Motoaki Nakamura; Yoshihisa Matoba; Takashi Terada; Kenta Komori; Yoshiyuki Ohba; Shinichi Arakawa; Yasutaka Hirasawa", "journal": "IEEE", "ref_id": "b79", "title": "Four-directional pixel-wise polarization cmos image sensor using air-gap wire grid on 2.5-µm back-illuminated pixels", "year": "2016" }, { "authors": "Wenqi Yang; Guanying Chen; Chaofeng Chen; Zhenfang Chen; K Kwan-Yee; Wong", "journal": "", "ref_id": "b80", "title": "S 3 -nerf: Neural reflectance field from shading and shadow under a single viewpoint", "year": "2022" }, { "authors": "Yao Yao; Jingyang Zhang; Jingbo Liu; Yihang Qu; Tian Fang; 
David Mckinnon; Yanghai Tsin; Long Quan", "journal": "Springer", "ref_id": "b81", "title": "Neilf: Neural incident light field for physically-based material estimation", "year": "2022" }, { "authors": "Lior Yariv; Jiatao Gu; Yoni Kasten; Yaron Lipman", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b82", "title": "Volume rendering of neural implicit surfaces", "year": "2008" }, { "authors": "Lior Yariv; Peter Hedman; Christian Reiser; Dor Verbin; P Pratul; Richard Srinivasan; Jonathan T Szeliski; Ben Barron; Mildenhall", "journal": "", "ref_id": "b83", "title": "Bakedsdf: Meshing neural sdfs for realtime view synthesis", "year": "2023" }, { "authors": "Lior Yariv; Yoni Kasten; Dror Moran; Meirav Galun; Matan Atzmon; Ronen Basri; Yaron Lipman", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b84", "title": "Multiview neural surface reconstruction by disentangling geometry and appearance", "year": "2020" }, { "authors": "Kuk-Jin Yoon; Emmanuel Prados; Peter Sturm", "journal": "International Journal of Computer Vision", "ref_id": "b85", "title": "Joint estimation of shape and reflectance using multiple images with known illumination conditions", "year": "2010" }, { "authors": "Cheng Zhang; Bailey Miller; Kai Yan; Ioannis Gkioulekas; Shuang Zhao", "journal": "ACM Trans. Graph", "ref_id": "b86", "title": "Path-space differentiable rendering", "year": "2020" }, { "authors": "Cheng Zhang; Lifan Wu; Changxi Zheng; Ioannis Gkioulekas; Ravi Ramamoorthi; Shuang Zhao", "journal": "ACM Trans. Graph", "ref_id": "b87", "title": "A differential theory of radiative transfer", "year": "2019" }, { "authors": "Jingyang Zhang; Yao Yao; Shiwei Li; Jingbo Liu; Tian Fang; David Mckinnon; Yanghai Tsin; Long Quan", "journal": "", "ref_id": "b88", "title": "Neilf++: Inter-reflectable light fields for geometry and material estimation", "year": "2023" }, { "authors": "Kai Zhang; Fujun Luan; Zhengqi Li; Noah Snavely", "journal": "", "ref_id": "b89", "title": "Iron: Inverse rendering by optimizing neural sdfs and materials from photometric images", "year": "2022" }, { "authors": "Kai Zhang; Fujun Luan; Qianqian Wang; Kavita Bala; Noah Snavely", "journal": "", "ref_id": "b90", "title": "Physg: Inverse rendering with spherical gaussians for physics-based material editing and relighting", "year": "2021" }, { "authors": "Xiuming Zhang; Boyang Pratul P Srinivasan; Paul Deng; Debevec; Jonathan T William T Freeman; Barron", "journal": "ACM Transactions on Graphics (ToG)", "ref_id": "b91", "title": "Nerfactor: Neural factorization of shape and reflectance under an unknown illumination", "year": "2021" }, { "authors": "Yuanqing Zhang; Jiaming Sun; Xingyi He; Huan Fu; Rongfei Jia; Xiaowei Zhou", "journal": "", "ref_id": "b92", "title": "Modeling indirect illumination for inverse rendering", "year": "2022" }, { "authors": "Jinyu Zhao; Yusuke Monno; Masatoshi Okutomi", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b93", "title": "Polarimetric multi-view inverse rendering", "year": "2022" }, { "authors": "Jingsen Zhu; Yuchi Huo; Qi Ye; Fujun Luan; Jifan Li; Dianbing Xi; Lisha Wang; Rui Tang; Wei Hua; Hujun Bao", "journal": "", "ref_id": "b94", "title": "I2-sdf: Intrinsic indoor scene reconstruction and editing via raytracing in neural sdfs", "year": "2023" }, { "authors": "Rui Zhu; Zhengqin Li; Janarbek Matai; Fatih Porikli; Manmohan Chandraker", "journal": "", "ref_id": "b95", "title": "Irisformer: Dense vision transformers for 
single-image inverse rendering in indoor scenes", "year": "2022" }, { "authors": "Shihao Zou; Xinxin Zuo; Yiming Qian; Sen Wang; Chi Xu; Minglun Gong; Li Cheng", "journal": "Springer", "ref_id": "b96", "title": "3d human shape reconstruction from a polarization image", "year": "2020" }, { "authors": "Shihao Zou; Xinxin Zuo; Sen Wang; Yiming Qian; Chuan Guo; Li Cheng", "journal": "IEEE Transactions on Multimedia", "ref_id": "b97", "title": "Human pose and shape estimation from single polarization images", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 389.85, 668.11, 155.27, 10.8 ], "formula_id": "formula_0", "formula_text": "s out = M • R • s in ,(1)" }, { "formula_coordinates": [ 4, 105.13, 224.12, 181.23, 22.34 ], "formula_id": "formula_1", "formula_text": "M dif = ( ρ π cos θ i )F T o • D • F T i .(2)" }, { "formula_coordinates": [ 4, 119.83, 299.82, 166.53, 23.22 ], "formula_id": "formula_2", "formula_text": "M spec = k s DG 4 cos θ o F R ,(3)" }, { "formula_coordinates": [ 4, 50.11, 328.18, 236.25, 22.49 ], "formula_id": "formula_3", "formula_text": "k s ∈ R 3 is the specular coefficient, D is the GGX distribution function [75]," }, { "formula_coordinates": [ 4, 97.26, 426.46, 189.1, 19.08 ], "formula_id": "formula_4", "formula_text": "s cam = R cam • Ω M • R in • s in dω i ,(4)" }, { "formula_coordinates": [ 4, 91.98, 610.67, 194.39, 19.08 ], "formula_id": "formula_5", "formula_text": "s cam dif = R cam dif • Ω M dif • R in dif • s in dω i ,(5)" }, { "formula_coordinates": [ 4, 86.56, 655.77, 199.81, 19.08 ], "formula_id": "formula_6", "formula_text": "s cam spec = Ω R cam spec • M spec • R in spec • s in dω i .(6)" }, { "formula_coordinates": [ 4, 397.62, 384.42, 147.49, 11.88 ], "formula_id": "formula_7", "formula_text": "f sdf (x k ) = d k ,(7)" }, { "formula_coordinates": [ 4, 352.56, 451.05, 188.68, 12.25 ], "formula_id": "formula_8", "formula_text": "∇ x k f sdf (x k )/||∇ x k f sdf (x k )|| 2 = n k . (8" }, { "formula_coordinates": [ 4, 541.24, 453.44, 3.87, 8.64 ], "formula_id": "formula_9", "formula_text": ")" }, { "formula_coordinates": [ 4, 367.79, 529.63, 177.33, 11.03 ], "formula_id": "formula_10", "formula_text": "w k = T k (1 -exp (-σ k δ k )),(9)" }, { "formula_coordinates": [ 4, 394.8, 597.64, 146.16, 11.72 ], "formula_id": "formula_11", "formula_text": "σ k = αΨ β (d k ), (10" }, { "formula_coordinates": [ 4, 540.96, 600.03, 4.15, 8.64 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 5, 138.42, 344.93, 147.95, 11.88 ], "formula_id": "formula_13", "formula_text": "f alb (x k ) = ρ k ,(11)" }, { "formula_coordinates": [ 5, 135.2, 373.52, 147.01, 11.88 ], "formula_id": "formula_14", "formula_text": "f rough (x k ) = r k . (12" }, { "formula_coordinates": [ 5, 282.21, 375.91, 4.15, 8.64 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 5, 90.84, 413.17, 144.51, 14.11 ], "formula_id": "formula_16", "formula_text": "ρ = N k=1 w k ρ k , r = N k=1 w k r k ." }, { "formula_coordinates": [ 5, 368.27, 478.18, 176.84, 12.62 ], "formula_id": "formula_17", "formula_text": "f i (x, ω i ) = s r spec [0] = s r dif [0],(13)" }, { "formula_coordinates": [ 5, 377.06, 545.85, 168.05, 12.62 ], "formula_id": "formula_18", "formula_text": "f spec (x, ω i ) = s r spec [1, 2],(14)" }, { "formula_coordinates": [ 5, 386.41, 591.71, 158.7, 12.62 ], "formula_id": "formula_19", "formula_text": "f dif (x, ω i ) = s r dif [1].(15)" }, { "formula_coordinates": [ 6, 100.38, 312.31, 185.99, 27.42 ], "formula_id": "formula_20", "formula_text": "s cam dif = 2π |S L | R cam dif • S L M dif • s r dif ,(16)" }, { "formula_coordinates": [ 6, 94.13, 433.42, 192.23, 27.42 ], "formula_id": "formula_21", "formula_text": "s cam spec = 2π |S L | S L R cam spec • M spec • s r spec .(17)" }, { "formula_coordinates": [ 6, 128.78, 486.17, 157.58, 12.62 ], "formula_id": "formula_22", "formula_text": "s cam = s cam dif + s cam spec .(18)" } ]
2023-12-15
[ { "figure_ref": [ "fig_10", "fig_10" ], "heading": "Introduction", "publication_ref": [ "b15", "b48", "b47", "b41", "b47" ], "table_ref": [], "text": "With the advancement for fine-tuning with instructionfollowing data, large language models have showcased exceptional generalization capabilities across various downstream tasks. Recently, models such as Flamingo [1], BLIP-2 [15], LLaVA [16], and MiniGPT-4 [49] have extended these capabilities into multimodal domains, achieving the in-Figure 1. VQA in text-rich images with text-grounding. Given an image and a query, the model generates responses in an autoregressive way, and concurrently outputs the basis of reasoning, i.e., the bounding box that delineates the location of the answer. tegrated visual-language understanding. However, due to the absence of instruction-following data pertinent to text-rich scenarios, these models are unable to identify and comprehend the text within images, thereby exhibiting significant limitations in the processing of text-rich images.\nTo address this issue, LLaVAR [48] prompts Optical Character Recognition (OCR) results and image caption to text-only GPT-4, thereby creating a rich instructionfollowing dataset enriched with textual content. Following this, mPLUG-DocOwl [42] harvests diverse forms of document data to augment the model's perception to textual features. Concurrently, UniDoc [10] performs instruction tuning with OCR-based tasks, equipping the model with comprehensive abilities in text detection, recognition, and spotting, thereby deepening document understanding. Although these methods have achieved impressive performance, the potential of text grounding as an instrument for document understanding is still less explored.\nUniDoc [10] has demonstrated that the use of spotting instruction templates can enhance the performance of detection and recognition tasks by incorporating the explicit location information. Furthermore, empirical evidence from Shikra [4] indicates that the inclusion of center point loca-tions for objects can mitigate hallucinations and improve the precision of its responses. Similarly, we argue that textgrounding plays an important role in text-rich VQA tasks. As illustrated in Fig. 1, we aim to develop a model that can accurately respond to a question associated with an image and simultaneously identify and clarify the specific text region within the image that corresponds to the rationale behind the answer. Generally, text-grounding ability has several benefits: First, it provides a basis for reasoning. By incorporating text-grounding into VQA tasks, models accurately focus on image regions that are related to the question, thus leading to better performance. Second, it strengthens the interpretability of models, making the generated answers more convincing. Third, in text-rich scenarios where textual information are interleaved and complicated, the introduction of text-grounding elevates the interactive experience for users. We contend that text-grounding can further excavate the model's potential and release the extensive world knowledge contained within the model.\nTo endow the TGDoc model with text-grounding capability, we accumulate a large amount of instruction-following data enriched with text location details. During the pretraining stage, we curate 99K PowerPoint presentations from the internet for image-text alignment, cultivating the model's proficiency in text detection and recognition across diverse scenarios. 
In the fine-tuning stage, we prompt text-only GPT-4 with OCR results and image captions to produce 12K high-quality, multi-turn conversations. Additionally, we incorporate all data used by LLaVAR [48] during both two stages. Experimental results indicate that using instruction tuning with the collected instruction-following data, our method exhibits improved performance on understanding text-rich scenes and has text-grounding capability.\nOur contributions can be summarized as follows: • We conduct an in-depth exploration of text-grounding in MLLMs without introducing extra detection modules, and substantiate its critical role in processing and understanding text-rich images. To the best of our knowledge, this is the first investigation into enhancing document understanding by using text-grounding technique. • We develop an instructing-following dataset comprising 99K PowerPoint slides and 12K high-quality conversations, with each annotated with location of text. • Extensive experiments show that our method achieves the state-of-the-art results on several text-rich VQA benchmarks, validating the effectiveness of our method." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "In this section, we begin with a brief overview of recent developments in multimodal large language models. We then introduce their applications in document understanding and conclude by discussing the efforts to integrate multimodal large language models with grounding capabilities." }, { "figure_ref": [], "heading": "Multimodal Large Language Models", "publication_ref": [ "b33", "b34", "b37", "b33", "b31", "b4", "b48", "b15", "b43", "b40", "b40", "b41", "b42", "b47" ], "table_ref": [], "text": "Large language models through instruction tuning have demonstrated impressive zero-shot capabilities for new tasks [6,26,34,35,38]. In particular, LLaMA [34], a prominent open-source model, has drawn attention in research initiatives like Alpaca [32] and Vicuna [5], which utilize the generated instruction-following data to further excavate the potential of the model. However, it is noteworthy that these studies exclusively accept the text as input.\nRecently, researchers have extended the instruction tuning to the multimodal domain, particularly with a focus on images. Specifically, Flamingo [1] and BLIP-2 [15] have pioneered visual and language integration by constructing diverse image-text alignment modules, laying a foundation for future research. Similar to BLIP-2 [15], MiniGPT-4 [49] utilizes a visual encoder that integrates ViT-G/14 [9] with Q-former and a linear projection layer to link the visual features with the large language model. To mitigate pre-training challenges like word repetition and irrelevant content, the authors leverage ChatGPT for data refinement. LLaVA [16] employs a simple linear layer as a bridge to project image features into word embedding space. Concurrently, mPLUG-Owl [44] utilizes a visual abstractor coupled with cross-attention to align visual representations with the large language model. Recently, GPT-4V [41] has shown unprecedented proficiency in handling intricate multi-modal inputs. As a robust multi-modal system, GPT-4V possesses a profound ability to interpret visual information, significantly elevating human-machine interactions.\nHowever, beyond GPT-4V [41], the above efforts face difficulties in processing images within text-rich contexts. 
This challenge primarily arises from an insufficiency of relevant data, such as OCR-based or other text-related instructionfollowing data. To address this issue, many related works have subsequently emerged [10,42,43,48]." }, { "figure_ref": [], "heading": "Document Understanding with MLLMs", "publication_ref": [ "b13", "b32", "b35", "b45", "b41", "b47", "b47", "b24", "b21", "b23", "b30", "b15", "b41", "b43", "b42", "b19" ], "table_ref": [], "text": "Recently, with the impressive zero-shot performance of multimodal large language models in visual-language tasks [8,14,33], the field of document understanding has shifted from supervised methods [36,39,40,46] to generative approaches [42,48], achieving several notable results.\nLLaVAR [48] leverages GPT-4 [25] with OCR tools to produce abundant instruction-following data for text-rich images. Specifically, during the pre-training phase, they utilize the noisy instruction-following data to align image and text, enabling the model with OCR capability. During fine-tuning, they employ a text-only GPT-4 to generate highquality multi-turn conversations. Through instruction tuning, the model exhibits a marked performance on diverse textbased VQA datasets [3,22,24,31], surpassing LLaVA [16]. mPLUG-DocOwl [42], an evolution of mPLUG-Owl [44], is optimized for text-rich scenarios. By creating diverse docu-ment instructional data, it excels in several downstream tasks within an OCR-free environment. UniDoc [10] mitigates potential data distribution inconsistencies between pre-training and fine-tuning by employing PowerPoint presentations to generate a large corpus of OCR-related instruction-following data. By doing so, UniDoc synthesizes multi-tasks into a unified framework, achieving leading results on several benchmarks. Moreover, notable studies like UReader [43] and KOSMOS-2.5 [20] have undertaken diverse explorations in document understanding." }, { "figure_ref": [], "heading": "Grounding Ability in MLLMs", "publication_ref": [ "b44" ], "table_ref": [], "text": "The exploration of grounding capabilities within MLLMs has increasingly attracted many researchers' interest. Shikra [4] introduces a new referential dialogue task and reorganizes existing datasets with bounding boxes to incorporate object positional details into the instruction-following data. After fine-tuning, the model exhibits impressive grounding ability without needing extra detection modules. Moreover, the authors observe that the positional information in VQA tasks effectively reduces visual hallucinations. KOSMOS-2 [27] has assembled a set of grounded image-text pairs. After training, the model can perceive objects within images. Ferret [45] offers a unique method for object grounding in scenes, capable of recognizing any shape and granular target objects in images by using an innovative hybrid regional representation strategy. The trained model is proficient in recognizing points, bounding boxes, and free-form shapes regional representations.\nIn text-rich scenarios, the grounding capabilities are still less explored. In this study, we focus on exploring the grounding of text within multimodal large language models. Our empirical results indicate that incorporating textgrounding ability can boost the understanding and interpretability of the model in text-rich scenarios." 
}, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we initially outline the architecture of our TGDoc model and then provide a detailed description of how text-grounding is represented in our dataset. Subsequently, we describe a comprehensive process involved in creating our grounded instruction-following dataset, encompassing both the pre-training and fine-tuning stages." }, { "figure_ref": [ "fig_0" ], "heading": "Model Architecture", "publication_ref": [ "b15", "b27" ], "table_ref": [], "text": "As shown in Fig. 2, following LLaVA [16], the overall architecture of our TGDoc model consists of three components: a vision encoder, a linear projection layer, and a large language model. Specifically, we employ the pre-trained CLIP-ViT-L/14 [28] as vision encoder, configured to process images at two resolutions: 224 × 224 and 336 × 336. The alignment module contains a single projection layer and is designed to transform the image features into the word embedding" }, { "figure_ref": [], "heading": "LLM Vision Encoder Projection", "publication_ref": [], "table_ref": [], "text": "The main slogan in the image is \"LOVE YOUR NEIGHBOR\"[0.114, 0.153, 0.9, 0.616]." }, { "figure_ref": [], "heading": "Tokenization", "publication_ref": [ "b4", "b33", "b15", "b41", "b47" ], "table_ref": [], "text": "Can you tell me the main slogan in the image? space compatible with the language decoder. For the large language model, we select the Vicuna-7B [5], an instructiontuned variant based on LLaMA [34] to enhance our language understanding capabilities. More specifically, given an image I H×W ×C , we initially process it using a vision encoder to extract the image features F 256×1024 . In this paper, we opt for grid features before the last transformer layer to serve as our image features. These features, as opposed to those from the last layer, exhibit an enhanced ability to discern intricate details in the image [16]. This attribute is especially beneficial for the recognition of text in text-rich environments. Subsequently, these image features F 256×1024 are transformed through a linear projection layer into the word embedding space, resulting in transformed features F 256×4096 trans to align with the language decoder. Moreover, user queries are tokenized within the same word embedding space and seamlessly concatenated with the visual embeddings to form uniform input vectors for the large language model. Consistent with the previous methods [4,10,42,48], we continue to train the model with the next-token prediction task, i.e., generating the next token based on the sequence of preceding tokens." }, { "figure_ref": [ "fig_0" ], "heading": "Grounded Input Representations", "publication_ref": [], "table_ref": [], "text": "Following the expression of shikra [4], we adopt the natural language formatted notation [x min , y min , x max , y max ] to represent the bounding boxes coordinates. In this context, [x min , y min ] signifies the top-left corner of the text's minimum bounding rectangle, while [x max , y max ] corresponds to the bottom-right corner. We normalize Captions generated by BLIP-2 \"babies are from airports\", \"babies come from airports\", \"the cover of babies come from airports\". Table 1. The construction of the instruction-following dataset during the fine-tuning stage. Given an image, we employ BLIP-2 [15] to produce three captions. In parallel, we extract text and bounding boxes from the image using two OCR engines. 
These captions and OCR results are then combined to form prompts, guiding GPT-4 in generating multi-turn conversations. To maintain data quality, these conversations are manually reviewed and refined, removing any content unrelated to the image. the bounding boxes to [0, 1] relative to the image size and maintain a three-decimal precision for each coordinate. The bounding box notion can be integrated seamlessly into both the prompts and responses, encapsulated as \"<text>[x min , y min , x max , y max ]\". For instance, as illustrated in Fig. 2, the structured input and output are presented as follows: \"USER: <image>Image Embed-dings</image>Can you tell me the main slogan in the image? ASSISTANT: The main slogan in the image is \"LOVE YOUR NEIGHBOR\"[0.114, 0.153, 0.9, 0.616].\". We employ the model to predict the coordinates in a manner analogous to natural language prediction, without introducing additional positional tokens or detection modules." }, { "figure_ref": [ "fig_2" ], "heading": "Instruction-following Data for Pre-training", "publication_ref": [], "table_ref": [], "text": "We indicate that the data collection criteria stipulate not only textual richness but also diversity in content. Inspired by UniDoc [10], we select PowerPoint slides as our instructionfollowing training data, due to their multiple advantages. Firstly, the slides are rich in structured text, featuring diverse fonts and artistic texts. Unlike text from natural scenes, the content in PowerPoint slides is clearer and more organized, which markedly improves its compatibility with Optical Character Recognition (OCR) technique. This clarity reduces the need for manual corrections, thus simplifying the data preprocessing phase. Secondly, PowerPoint files encompass various non-textual elements like graphs and flowcharts, which enrich the dataset with a diversity of image types. Thirdly, the slides often contain interleaved visual and text information, such as photographs of scenes and photos of products, embedding the text within a broader narrative context, which is conducive to the model's understanding of complex image-text scenarios.\nIn the construction of PowerPoint datasets, we consciously avoid automated generation techniques to prevent the risk of data homogeneity. Instead, we chose to source data from the website, SlideShare1 , a public platform for sharing presentations and multimedia materials. Our collection spans various domains such as business, education, and athletics. To refine the dataset, we apply the MD5 hashing algorithm [29] to eliminate any duplicates. We then utilize PaddleOCR 2 , an open-source tool, to extract both text and spatial bounding box data from each slide. To ensure high data quality, we impose a threshold requiring that each slide must feature at least one block of text accounting for a minimum of 5% of the slide's total image area. This rigorous process yields data consisting of 99K image-text pairs, and we present several examples of slides in Fig. 3.\nWith respect to the accumulated PowerPoint slides data, we develop three distinct types of tasks for each image: recognition, detection, and spotting. The instructions for these tasks are: \"Identifying all text along with its corresponding bounding box in the image\", \"Locating the box containing the word <text>\", and \"Please recognize and provide the text contained within the specified bounding box [x min , y min , x max , y max ]\", respectively. 
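Both the grounded representation above and these task instructions rely on the same coordinate convention. As a concrete illustration, the following minimal Python sketch shows one way the normalization to [0, 1] with three-decimal precision and the "<text>[x_min, y_min, x_max, y_max]" serialization could be implemented; the helper names are ours and the released code may differ in details.

```python
def normalize_box(box_px, img_w, img_h):
    # Convert an absolute pixel box to image-relative coordinates in [0, 1],
    # keeping three-decimal precision as used in the grounded annotations.
    x_min, y_min, x_max, y_max = box_px
    return [round(x_min / img_w, 3), round(y_min / img_h, 3),
            round(x_max / img_w, 3), round(y_max / img_h, 3)]


def ground_text(text, box_px, img_w, img_h):
    # Serialize a text span and its box as "<text>[x_min, y_min, x_max, y_max]".
    x1, y1, x2, y2 = normalize_box(box_px, img_w, img_h)
    return f'{text}[{x1}, {y1}, {x2}, {y2}]'


# Example with a hypothetical 1000 x 1000 image:
print(ground_text('"LOVE YOUR NEIGHBOR"', (114, 153, 900, 616), 1000, 1000))
# "LOVE YOUR NEIGHBOR"[0.114, 0.153, 0.9, 0.616]
```

Because the serialized box is plain text, the same string can be embedded in either the instruction or the response, so no extra positional tokens or detection heads are required.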
To broaden the scope of instructions, we utilize GPT-4 for query augmentation. This approach enhances the diversity of instructional inputs, thereby bolstering the model's ability to generalize across a range of scenarios." }, { "figure_ref": [ "fig_3" ], "heading": "Instruction-following Data for Fine-tuning", "publication_ref": [ "b15", "b47", "b33", "b29", "b29", "b47", "b15", "b47" ], "table_ref": [], "text": "To improve the model's proficiency in specific tasks and deepen its understanding and execution of standard Visual Question Answering (VQA) tasks, we carefully design a total of 12K multi-turn conversations. Similarly to the pretraining stage, this dataset integrates textual content with associated bounding box. The data comprises two compo- Table 2. Analysis of our data with LLaVA [16] and LLaVAR [48].\n\"pre\" and \"fine\" are the data utilized in the pre-training and finetuning stage, respectively. \"Que len\" and \"Ans len\" represent the average number of tokens after the LLaMA [34] tokenization.\nnents: book covers derived from public digital libraries and text-rich images from LAION-400M [30].\nBook covers. We manually download 11K book covers from the openly accessible Project Gutenberg digital library3 . For each cover, we carefully extract pivotal metadata, such as the title and author. Subsequently, we utilize PaddleOCR 2 to analyze each image, from which we reconstruct the text to attain the book's specifics along with their bounding boxes. We utilize GPT-4 to design conversations revolving around the book's metadata, simultaneously requiring the model to explain its rationale behind the responses. For example, a query could be \"Who is the author of this volume? Please justify your response with the bounding box [x min , y min , x max , y max ].\" The model's response would include the author's name along with the bounding box coordinates, like: \"The work is authored by <text>[x min , y min , x max , y max ].\" Text-rich scene images. We curate a corpus of 1K highresolution (minimum 1024 × 1024 pixels), text-rich images from LAION-400M dataset [30]. Inspired by LLaVAR [48], we employ two OCR tools, PaddleOCR 2 and EasyOCR4 to process each image. In addition, we utilize BLIP-2 [15] to generate three descriptions for every image. Subsequently, as shown in Tab. 1, these elements, two OCR results along with the generated captions, are prompted to instruct GPT-4 in generating multi-turn conversations that concentrate on the textual content of image. However, we observe some inconsistencies in the generated data, such as the unnecessary phrases like \"based on the paddleocr\". We manually review the conversation of each image to guarantee the quality, resulting in our final instruction fine-tuning dataset. Finally, we exhibit the examples of our fine-tuning data in Fig. 4. Besides, we also provide an analysis of our data with that of LLaVA [16] and LLaVAR [48] in Tab. 2." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b47", "b27", "b41", "b47", "b4", "b33" ], "table_ref": [], "text": "The implementation of TGDoc was executed on the Linux platform with eight A100 GPUs. We incorporated the datasets from LLaVAR [48] along with our own data. For vision encoder, we selected CLIP-ViT-L/14 [28] and conducted experiments with image sizes of 224 × 224 and 336×336, respectively. 
It has been observed that text-related tasks generally benefit from higher resolution [10, 42,48]. We opted for Vicuna [5], which is optimized for multi-tasks and an evolution of LLaMA [34] as our large language model. For pre-training, only the linear projection layer was trained. We applied the learning rate of 2e-3 and a batch size of 128. While for fine-tuning, we adjusted both the linear projection layer and the large language model, reducing the learning rate to 2e-5 with a batch size of 32. The AdamW optimizer [19] was utilized for parameter updates, with each stage undergoing one epoch of training. Moreover, We employed a cosine annealing scheduler [18] and set the maximum sequence length limited to 2048." }, { "figure_ref": [], "heading": "Datasets and Evaluation Metrics", "publication_ref": [ "b23", "b30", "b21", "b22", "b20", "b11", "b10" ], "table_ref": [], "text": "To validate the effectiveness of text-grounding in enhancing document understanding, we evaluate our method on six textbased Visual Question Answering (VQA) datasets, including STVQA [3], OCRVQA [24], TextVQA [31], DocVQA [22], InfographicVQA [23], and ChartQA [21]. In addition, we also employ three key information extraction (KIE) datasets, including FUNSD [12], SROIE [11], and POIE [13]. For each question, we append the instruction \"Support your reasoning with the coordinates [x min , y min , x max , y max ]\" at the end of the question to prompt the model to output the bounding box where the answer is located. Following the evaluation protocol proposed by Liu et al. [17], we use accuracy metric to determine if the model-generated content correctly contains the ground truth answer." }, { "figure_ref": [ "fig_5" ], "heading": "Comparison with Other MLLMs", "publication_ref": [ "b27", "b22", "b27" ], "table_ref": [], "text": "We compare our approach with recent multimodal large language models on Visual Question Answering (VQA) and Key Information Extraction (KIE) benchmarks. Tab. 3 summarizes the quantitative comparison results. We conduct experiments using the CLIP model [28] [23]. We analyze that this benchmark comprises various complex infographics that require elementary reasoning and basic arithmetic skills, as answers cannot be readily extracted from the images. Under these conditions, the utility of bounding boxes is restricted, and their requirement may potentially compound its reasoning workload. We are considering a strategy that integrates chain-of-thought reasoning with bounding boxes to enhance performance in these challenging conditions for future research.\nWe also present the qualitative results of our approach in Fig. 5, complementing the previously reported quantitative results. The model not only generates answers but also visually highlights the regions that substantiate its responses. This implies that the model needs to identify the specific regions pertinent to the query, mirroring the human cognitive process of seeking relevant information to answer questions. However, it should be noted that our text-grounding is not always precise. We speculate that this limitation may be attributed to the inherent characteristics of the frozen CLIP model [28], which prioritizes global semantics over meticulous attention to local textual details." 
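To make the evaluation protocol above easier to reproduce, the sketch below illustrates the two pieces it combines: appending the grounding instruction to every question, and the containment-based accuracy check following Liu et al. [17]. It is a simplified illustration with function names of our own choosing; the exact text normalization may differ from the official evaluation scripts.

```python
GROUND_SUFFIX = " Support your reasoning with the coordinates [x_min, y_min, x_max, y_max]"


def build_query(question: str) -> str:
    # Prompt the model to also output the bounding box of its answer.
    return question.rstrip() + GROUND_SUFFIX


def is_correct(prediction: str, ground_truths) -> bool:
    # A prediction counts as correct if it contains any ground-truth answer string.
    pred = prediction.lower()
    return any(gt.lower() in pred for gt in ground_truths)


def accuracy(predictions, answers) -> float:
    hits = sum(is_correct(p, gts) for p, gts in zip(predictions, answers))
    return hits / max(len(predictions), 1)
```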
}, { "figure_ref": [], "heading": "What is the written on the plane?", "publication_ref": [], "table_ref": [], "text": "What is the name of the company on the sign?\nThe written on the plane is \"BRITISH AIRWAYS\"[0.25, 0.26, 0.72, 0.56].\nThe name of the company on the sign is \"Google\"[0.13, 0.33, 0.67, 0.63].\nWhat comes after \"The World of\"?\nAfter \"The World of\"[0.13, 0.09, 0.83, 0.153], there is \"Jughead\"[0.13, 0.153, 0.96, 0.393].\nProvide a detailed introduction of the image." }, { "figure_ref": [], "heading": "Generate the detailed description of the image.", "publication_ref": [], "table_ref": [], "text": "What is this movie? Can you introduce is to me?\nThe image features a stop sign in the foreground, with a blue sky and a cloudy day in the background. The stop sign is positioned on the side of the road, and there is a building in the background.\nThe scene is captured in black and white, giving it a classic and timeless feel. The text \"STOP\" written in capital letters in red or white letters, depending on the country, serves as a clear and concise description of the primary purpose of the stop sign. The bounding box [0.50, 0.30, 0.75, 0.73] indicates that the text is located within the image.\nThe image is likely promoting an event or awareness campaign related to the oceans and sustainable development.\nThe presence of fish and the blue background suggests marine environment, emphasizing the importance of preserving it for the wellbeing of both the oceans and the animals that depend on them. The text in the image, \"OCEANS DAY on June 8th THE OCEANS AND SUSTAINABLE DEVELOPMENT\"[0.072, 0.15, 0.89, 0.53], reinforces this message by providing a clear and concise description of the event or campaign.\nThe movie is titled \"Titanic\"[0.03, 0.50, 0.94, 0.72] and is a romantic disaster film directed by James Cameron. The film stars Leonardo DiCaprio and Kate Winslet, who portray the characters Jack Dawson and Rose DeWitt Bukater. The story takes place on the Titanic, a luxury ocean liner that sinks during its maiden voyage, resulting in a tragic event that leaves the two characters stranded on the ship. The movie is known for its powerful romance, memorable scenes, and the iconic \"I'm the king of the world!\" line from DiCaprio.\nWhat is the name on the sign?\nWhat is the first word on the sign?\nWhat kind of shack is this? \nIs" }, { "figure_ref": [ "fig_6" ], "heading": "Ablation Studies", "publication_ref": [ "b47", "b47" ], "table_ref": [], "text": "In this section, we conduct ablation studies on our curated pre-training and fine-tuning datasets, encompassing four distinct experiments. The first experiment (labeled as \"Base\") excludes the data we collected, i.e., the same setting as LLaVAR [48]. The second (labeled as \"Base+pre\") focuses on the impact of our pre-training data. The third (labeled as \"Base+fine\") investigates the influence of our fine-tuning data. To rigorously assess the impact of text-grounding on the model's comprehension ability, we undertake the fourth experiment (labeled as \"Ours+w/o box\"), which removes all bounding boxes in data. All experiments are uniformly performed with an input image size of 224 × 224.\nWe present the results in Tab. 4. From the results, we can obtain two principal conclusions. First, although the volume of fine-tuning data is smaller than that of pre-training data, the performance based on fine-tuning data surpasses that based on pre-training. 
We infer that incorporating specific grounded instruction tuning data during the fine-tuning stage enables the model to more accurately interpret user instructions. This leads to enhanced performance in various tasks such as Visual Question Answering (VQA). Additionally, this approach effectively facilitates the model's rapid acquisition of text-grounding skills, allowing it to focus on image Table 4. Ablation studies on our collected data. The best and the second results (accuracy %) are highlighted in bold and underlined, respectively. P denotes the data gathered for the pre-training stage, while F denotes the data for the fine-tuning stage. regions directly pertinent to the answers. Second, compared to the results without bounding boxes, the introduction of text-grounding further improves the performance. As shown in Fig. 6, compared to LLaVAR [48], our method delivers correct answers with enhanced accuracy." }, { "figure_ref": [ "fig_7", "fig_5" ], "heading": "Discussion", "publication_ref": [ "b27", "b27", "b27", "b36" ], "table_ref": [], "text": "The experimental results substantiate that the text-grounding capability enables the model to concentrate more on regions within images relevant to the answers, thereby enhancing the model's comprehension in text-rich scenarios. Furthermore, throughout the experiment, we also encounter a series of challenges, which will be discussed in the following. Accuracy of bounding boxes. In Fig. 7, we present some examples of inaccurate bounding boxes generated by our methods. Compared to Fig. 5, these boxes either near the text or cover a fragment of it. We speculate that this may stem from the capabilities of the visual encoder CLIP [28], which is based on ViT [7] and trained primarily on images of natural objects, emphasizing the global features of images rather than optimizing for fine-grained local details like textual boundaries, which leads to challenges in text detection. However, we argue that the bounding boxes, despite not always being exact, are vital for localization in VQA tasks. They can further narrow down the model's retrieval scope, Typically, text-rich images are of high resolution, and their compression to meet the CLIP's [28] input sizes often leads to a loss of textual detail. For the POIE dataset [13], increasing the input resolution of CLIP [28] from 224 to 336 pixels results in a 13% accuracy improvement. We point out that this is because most images in the POIE dataset [13] are low-resolution images, and a larger resolution preserves text details that might be lost at a lower dimension of 224. Moreover, the integration of text-grounding has further amplified the model's efficacy in processing the POIE dataset [13].\nQuestions requiring multi-step reasoning. Our method is capable of providing positional information relevant to the answers within its responses. Nevertheless, as discussed in Sec. 4.3, for queries that require multi-step reasoning (where answers cannot be directly extracted from the image), the efficacy of text-grounding is constrained. We speculate that this limitation arises due to the model focusing on irrelevant areas within incorrect bounding boxes, which not only leads to ineffective reasoning but also disrupts the entire reasoning process. To address this issue, we consider adopting the chain-of-thought method [37], incorporating bounding boxes at each step of the inference process for future work." 
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Our research investigates text-grounding within document understanding, demonstrating its significant role in improving the model comprehension of text-rich scenes. We create 99K PowerPoint slides for pre-training, focusing on detection, recognition, and spotting tasks for image-text alignment.\nFor fine-tuning, we compile a dataset of 12K high-quality conversations with bounding box annotations to specify text locations. Experimental results with the collected grounded instruction-following dataset reveal that text-grounding enhances the model's interpretability and performance by iden-tifying answer-related areas. However, the model faces challenges in processing complex reasoning questions, which will be explored in our future research." }, { "figure_ref": [], "heading": "Towards Improving Document Understanding: An Exploration on Text-Grounding via MLLMs", "publication_ref": [], "table_ref": [], "text": "Yonghui Wang 1 Wengang Zhou " }, { "figure_ref": [], "heading": "B. Conversation Format of TGDoc", "publication_ref": [], "table_ref": [], "text": "System message. \"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.\" Conversation template of model. We provided one training template of the model as follows: <System Message>USER: <image><Image Embed-ding></image>What is the book's title based on the image? Please provide the supporting text and its bounding box. ASSISTANT: <Model's Output>" }, { "figure_ref": [ "fig_10" ], "heading": "C. The Processing of Data for Fine-tuning", "publication_ref": [], "table_ref": [], "text": "For the fine-tuning with book covers, we download both the images along with the associated metadata. However, the metadata lacks bounding box information. To address this issue, we utilize PaddleOCR 1 to extract text and corresponding bounding boxes from each cover. Then, we reorganize the extracted text to constitute the metadata to obtain the final bounding boxes. The entire process is shown in Fig. 1." }, { "figure_ref": [ "fig_0", "fig_2" ], "heading": "D. More Visual Results", "publication_ref": [ "b4" ], "table_ref": [], "text": "We present more visual results on STVQA [1] dataset, TextVQA [5] dataset, and natural scenes in Fig. 2 and Fig. 3. The results shows that the model is capable of determining the approximate locations of answers relative to the questions, thereby enabling precise extraction of text from images for model's response. Moreover, we have demonstrated TG-Doc's proficiency in comprehending a variety of multimodal scenarios, further substantiating its robustness and adaptability in handling complex contexts. TGDoc effectively combines image content and textual data, improving its overall comprehension and accuracy across various scenarios." }, { "figure_ref": [], "heading": "What does the sign say?", "publication_ref": [], "table_ref": [], "text": "What is the first word on the sign?\nThe sign says \"PARE\"[0.13, 0.41, 0.83, 0.63].\nThe first word on the sign is \"Isla\"[0.20, 0.41, 0.41, 0.56]." }, { "figure_ref": [], "heading": "What type of beverage does the bottle advertise?", "publication_ref": [], "table_ref": [], "text": "Who is sponsored for this game?\nWhat name is under the clock?\nThe bottle advertises a cola beverage. The text \"COLA\"[0. hamed Elhoseiny. 
MiniGPT-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023. 1\nWhat is the name on top of label?\nThe name on top of the label is \"Big House Red\"[0.24, 0.19, 0.68, 0.39].\nWhat is the title of this website?\nThe title of the website is \"The Daily Create\"[0.13, 0.23, 0.70, 0.37].\nDescribe this image in detail.\nPlease give a description of this image.\nWhat is the name of this restaurant?Could you please introduce it in detail? The image captures a large crowd of people gathered in a stadium, watching a sports event. The audience is sitting on chairs, with some of them holding up signs and cheering for their favorite team. The chairs are arranged in multiple rows, with some chairs placed closer to the foreground and others further back. The crowd is diverse in age and appearance, with people of varying heights and positions. Some of the individuals are closer to the front, while others are situated in the middle or back rows. The atmosphere appears to be lively and energetic, as the fans come together to support their team and enjoy the event.\nThe image captures a busy city street at night, with a multitude of cars driving through the city. The traffic appears to be backed up, as the cars are lined up in a long line, creating a sense of congestion. The street is illuminated by the bright lights of the cars, which are visible in the foreground and background of the scene. The nighttime setting and the heavy traffic contribute to the overall atmosphere of the scene, showcasing the hustle and bustle of urban life. " } ]
In the field of document understanding, significant advances have been made in the fine-tuning of Multimodal Large Language Models (MLLMs) with instructionfollowing data. Nevertheless, the potential of text-grounding capability within text-rich scenarios remains underexplored. In this paper, we present a text-grounding document understanding model, termed TGDoc, which addresses this deficiency by enhancing MLLMs with the ability to discern the spatial positioning of text within images. Empirical evidence suggests that text-grounding improves the model's interpretation of textual content, thereby elevating its proficiency in comprehending text-rich images. Specifically, we compile a dataset containing 99K PowerPoint presentations sourced from the internet. We formulate instruction tuning tasks including text detection, recognition, and spotting to facilitate the cohesive alignment between the visual encoder and large language model. Moreover, we curate a collection of text-rich images and prompt the textonly GPT-4 to generate 12K high-quality conversations, featuring textual locations within text-rich scenarios. By integrating text location data into the instructions, TG-Doc is adept at discerning text locations during the visual question process. Extensive experiments demonstrate that our method achieves state-of-the-art performance across multiple text-rich benchmarks, validating the effectiveness of our method. The source code is publicly available at https://github.com/harrytea/TGDoc.
Towards Improving Document Understanding: An Exploration on Text-Grounding via MLLMs
[ { "figure_caption": "Figure 2 .2Figure 2. Overall architecture of TGDoc. It consists of a vision encoder, initialized with CLIP [28] weights, a linear projection layer, and the Vicuna [5] large language model. Given an image and a query, the model will generate the answer while providing its reasoning textual regions.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Final Conversation Q1: Identify the main title displayed in the image. A1: The main title in the image is \"BABIES COME FROM AIRPORTS\"[0.084, 0.049, 0.934, 0.298]. Q2: Are there any distinct sections denoted in the background? If so, please specify and support your response with the text and its bounding box. A2: Yes, it seems that the babies are from airports and the airport stop says \"Arrivals\"[0.12, 0.31, 0.34, 0.398]. Q3: Can you name the creators of the content depicted in the image? Provide the relevant text and its bounding box for justification. A3: The creators are \"Written by Erin Dealey\"[0.133, 0.902, 0.408, 0.938] and \"Illustrated by Luciana Navarro Powell\"[0.438, 0.905, 0.893, 0.938].", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Examples of PowerPoint presentations collected by us.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Examples of fine-tuning dataset collected by us. The first row consists of book covers curated from the internet, whereas the second and third rows display text-rich images sourced from the LAION-400M dataset [30].", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "the candy shack open? This sign is asking you to turn off what vehicle function? Does the sign thank you? The name on the sign is \"Un Like\"[0.13, 0.33, 0.83, 0.63]. The first word on the sign is \"Un\"[0.15, 0.34, 0.33, 0.60]. This is a candy shack[0.13, 0.19, 0.83, 0.34]. Yes, the candy shack is open. The sign in the image says \"Open\"[0.20, 0.41, 0.76, 0.80]. The sign is asking you to turn off your headlights[0.10, 0.10, 0.60, 0.36]. Yes, the sign thanks you. The coordinates for the text \"Thank You\" are [0.15, 0.33, 0.60, 0.51].", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Visualization results of TGDoc. To conserve space, the bounding boxes generated by the model are visualized and integrated into the model's input. The results indicate that the model can accurately focus on regions pertinent to the answers, which notably enhances the model's interpretability.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Examples of incorporating text-grounding capability.Compared to LLaVAR[48], our method delivers correct answers with enhanced accuracy.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Some examples of imprecise bounding boxes produced by our method.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "{wyh1998, haof, kyzhou2000}@mail.ustc.edu.cn, {zhwg, lihq}@ustc.edu.cnA. Instruction TemplatesPre-training. 
For the collected 99K PowerPoint slides, we focus on three primary tasks: detection, recognition, and spotting. For each task, we develop several instruction templates. To enhance the model's capacity for interpreting diverse user instructions and to avoid overfitting to particular templates, we employ GPT-4 to further expand these templates. We present examples of the instructions in Tab. 1. Fine-tuning. We employ the data generated by GPT-4 for fine-tuning our model. Slightly different from the previous methods[2][3][4][6][7][8], our data focus on textual information present within images. Moreover, when the response includes textual content, we instruct GPT-4 to create questions that encourage the model to provide reasoning related to the answer. This helps ensure the model to generate responses with textual bounding boxes. Examples of such GPT-4 generated questions are shown in Tab. 2.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "PaddleOCR \"The Old World\"[0.145, 0.022, 0.845, 0.101] \"in the New\"[0.22, 0.127, 0.76, 0.206] \"by\"[0.415, 0.213, 0.565, 0.315] \"Edward Alsworth Ross\"[0.075, 0.307, 0.91, 0.378] Metadata book's title: \"The Old World in the New\" book's author: \"Edward Alsworth Ross\" Metadata with bounding boxes \"The Old World in the New\"[0.145, 0.022, 0.845, 0.206] \"Edward Alsworth Ross\"[0.075, 0.307, 0.91, 0.378] reorganize the text to obtain bounding box", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 .1Figure1. Acquisition of bounding boxes for the metadata of book cover. We employ an OCR tool to extract texts and position information from the image. Subsequently, the texts are reorganized to obtain the final bounding boxes corresponding to the metadata.", "figure_data": "", "figure_id": "fig_10", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "PaddleOCR Results \"BABIES\"[0.084,0.067,0.496,0.190], \"COME\"[0.504,0.049,0.711,0.130], \"FROM\"[0.150,0.193,0.367,0.270], \"AIRPORTS\"[0.350,0.138,0.934,0.298], \"Arrivals=\"[0.128,0.318,0.381,0.400], \"Written by Erin Dealey lllustrated by Luciana Navarro Powel\"[0.136,0.909,0.887,0.930].", "figure_data": "EasyOCR Results \"BABI-S CONE\"[0.063,0.032,0.733,0.208], \"FROM\"[0.144,0.186,0.371,0.279], \"AIRPORTS\"[0.353,0.124,0.940,0.304], \"Arrivals\"[0.120,0.310,0.340,0.398], \"Writien\"[0.133,0.907,0.225,0.933], \"by \"[0.219,0.902,0.268,0.939], \"Erin Dealey\"[0.261,0.902,0.408,0.938], \"Illustrated by Luciana Navarro Powell\"[0.438,0.905,0.893,0.938].", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Quantitative comparison with previous multimodal large language models on Visual Question Answering (VQA) and Key Information Extraction (KIE) datasets. TGDoc-224 and TGDoc-336 represent the CLIP[28] inputs at resolutions of 224 × 224 and 336 × 336. 
The best and the second results (accuracy %) are highlighted in bold and underlined, respectively.", "figure_data": "VQAKIEMethodSTVQA OCRVQA TextVQA DocVQA InfoVQA ChartQA FUNSD SROIE POIEAvg.BLIP-2 OPT 6.7b [15]13.3610.5821.180.828.827.440.000.00 0.026.22BLIP-2 FlanT5 XXL [15] 21.7030.7432.184.8610.177.201.190.20 2.52 12.31OpenFlamingo [2]19.3227.8229.085.0514.999.120.850.12 2.12 12.05LLaVA [16]22.0811.3628.864.4913.787.281.020.12 2.09 10.12MiniGPT-4 [49] mPLUG-Owl [44] LLaVAR [48]14.02 29.26 30.3611.52 28.62 29.3818.72 40.28 39.402.97 6.88 6.7313.32 16.46 12.254.32 9.52 8.001.19 1.02 1.020.04 1.31 0.64 3.26 15.10 7.49 1.36 6.48 15.00UniDoc [10]30.7834.5040.726.4713.7510.481.191.40 3.92 15.91TGDoc-224 TGDoc-33631.40 36.2833.50 37.2141.86 46.187.25 9.0011.53 12.7511.74 12.721.70 1.361.59 9.08 16.63 3.00 22.16 20.07", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "1 * Hao Feng 1 Keyi Zhou 1 Houqiang Li 1 *", "figure_data": "", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" } ]
Yonghui Wang; Wengang Zhou; Hao Feng; Keyi Zhou; Houqiang Li
[ { "authors": "Jean-Baptiste Alayrac; Jeff Donahue; Pauline Luc; Antoine Miech; Iain Barr; Yana Hasson; Karel Lenc; Arthur Mensch; Katherine Millican; Malcolm Reynolds", "journal": "Proceedings of the Advances in Neural Information Processing Systems", "ref_id": "b0", "title": "Flamingo: a visual language model for few-shot learning", "year": "2022" }, { "authors": "Anas Awadalla; Irena Gao; Josh Gardner; Jack Hessel; Yusuf Hanafy; Wanrong Zhu; Yonatan Kalyani Marathe; Samir Bitton; Shiori Gadre; Sagawa", "journal": "", "ref_id": "b1", "title": "OpenFlamingo: An opensource framework for training large autoregressive visionlanguage models", "year": "2023" }, { "authors": "Ruben Ali Furkan Biten; Andres Tito; Lluis Mafla; Gomez; Minesh Marc ¸al Rusinol; C V Mathew; Ernest Jawahar; Dimosthenis Valveny; Karatzas", "journal": "IEEE", "ref_id": "b2", "title": "ICDAR 2019 competition on scene text visual question answering", "year": "2019" }, { "authors": "Keqin Chen; Zhao Zhang; Weili Zeng; Richong Zhang; Feng Zhu; Rui Zhao", "journal": "", "ref_id": "b3", "title": "Shikra: Unleashing multimodal llm's referential dialogue magic", "year": "2023" }, { "authors": "Wei-Lin Chiang; Zhuohan Li; Zi Lin; Ying Sheng; Zhanghao Wu; Hao Zhang; Lianmin Zheng; Siyuan Zhuang; Yonghao Zhuang; Joseph E Gonzalez", "journal": "", "ref_id": "b4", "title": "Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality", "year": "2023" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Yunxuan Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma", "journal": "", "ref_id": "b5", "title": "Scaling instructionfinetuned language models", "year": "2022" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b6", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Danny Driess; Fei Xia; S M Mehdi; Corey Sajjadi; Aakanksha Lynch; Brian Chowdhery; Ayzaan Ichter; Jonathan Wahid; Quan Tompson; Tianhe Vuong; Yu", "journal": "", "ref_id": "b7", "title": "PaLM-E: An embodied multimodal language model", "year": "2023" }, { "authors": "Yuxin Fang; Wen Wang; Binhui Xie; Quan Sun; Ledell Wu; Xinggang Wang; Tiejun Huang; Xinlong Wang; Yue Cao", "journal": "", "ref_id": "b8", "title": "EVA: Exploring the limits of masked visual representation learning at scale", "year": "2023" }, { "authors": "Zijian Hao Feng; Jingqun Wang; Jinghui Tang; Wengang Lu; Houqiang Zhou; Can Li; Huang", "journal": "", "ref_id": "b9", "title": "UniDoc: A universal large multimodal model for simultaneous text detection, recognition, spotting and understanding", "year": "2006" }, { "authors": "Zheng Huang; Kai Chen; Jianhua He; Xiang Bai; Dimosthenis Karatzas; Shijian Lu; Jawahar", "journal": "IEEE", "ref_id": "b10", "title": "ICDAR2019 competition on scanned receipt OCR and information extraction", "year": "2019" }, { "authors": "Guillaume Jaume; Hazim Kemal Ekenel; Jean-Philippe Thiran", "journal": "IEEE", "ref_id": "b11", "title": "FUNSD: A dataset for form understanding in noisy scanned documents", "year": "2019" }, { "authors": "Jianfeng Kuang; Wei Hua; Dingkang Liang; Mingkun Yang; Deqiang Jiang; Bo Ren; Xiang Bai", "journal": "Springer", "ref_id": "b12", "title": "Visual information extraction in the wild: practical dataset and end-to-end solution", "year": "2023" }, { 
"authors": "Junnan Li; Dongxu Li; Caiming Xiong; Steven Hoi", "journal": "PMLR", "ref_id": "b13", "title": "BLIP: Bootstrapping language-image pre-training for unified visionlanguage understanding and generation", "year": "2022" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b14", "title": "BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2006" }, { "authors": "Haotian Liu; Chunyuan Li; Qingyang Wu; Yong Jae Lee", "journal": "", "ref_id": "b15", "title": "Visual instruction tuning", "year": "2006" }, { "authors": "Yuliang Liu; Zhang Li; Hongliang Li; Wenwen Yu; Mingxin Huang; Dezhi Peng; Mingyu Liu; Mingrui Chen; Chunyuan Li; Lianwen Jin", "journal": "", "ref_id": "b16", "title": "On the hidden mystery of OCR in large multimodal models", "year": "2023" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b17", "title": "SGDR: Stochastic gradient descent with warm restarts", "year": "2016" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b18", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Tengchao Lv; Yupan Huang; Jingye Chen; Lei Cui; Shuming Ma; Yaoyao Chang; Shaohan Huang; Wenhui Wang; Li Dong; Weiyao Luo", "journal": "", "ref_id": "b19", "title": "KOSMOS-2.5: A multimodal literate model", "year": "" }, { "authors": "Ahmed Masry; Xuan Do; Jia Long; Shafiq Qing Tan; Enamul Joty; Hoque", "journal": "", "ref_id": "b20", "title": "ChartQA: A benchmark for question answering about charts with visual and logical reasoning", "year": "2022" }, { "authors": "Minesh Mathew; Dimosthenis Karatzas; Jawahar", "journal": "", "ref_id": "b21", "title": "DocVQA: A dataset for VQA on document images", "year": "2021" }, { "authors": "Minesh Mathew; Viraj Bagal; Rubèn Tito; Dimosthenis Karatzas; Ernest Valveny; Jawahar", "journal": "", "ref_id": "b22", "title": "InfographicVQA", "year": "2022" }, { "authors": "Anand Mishra; Shashank Shekhar; Ajeet Kumar Singh; Anirban Chakraborty", "journal": "IEEE", "ref_id": "b23", "title": "OCR-VQA: Visual question answering by reading text in images", "year": "2019" }, { "authors": " Openai", "journal": "", "ref_id": "b24", "title": "", "year": "2023" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "Proceedings of the Advances in Neural Information Processing Systems", "ref_id": "b25", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Zhiliang Peng; Wenhui Wang; Li Dong; Yaru Hao; Shaohan Huang; Shuming Ma; Furu Wei", "journal": "", "ref_id": "b26", "title": "KOSMOS-2: Grounding multimodal large language models to the world", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b27", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Ronald Rivest", "journal": "", "ref_id": "b28", "title": "The md5 message-digest algorithm", "year": "1992" }, { "authors": "Christoph Schuhmann; Richard Vencu; Romain Beaumont; Robert Kaczmarczyk; Clayton Mullis; Aarush Katta; Theo Coombes; Jenia Jitsev; Aran Komatsuzaki", "journal": "", "ref_id": "b29", "title": "LAION-400M: Open dataset of 
clip-filtered 400 million image-text pairs", "year": "2021" }, { "authors": "Amanpreet Singh; Vivek Natarajan; Meet Shah; Yu Jiang; Xinlei Chen; Dhruv Batra; Devi Parikh; Marcus Rohrbach", "journal": "", "ref_id": "b30", "title": "Towards VQA models that can read", "year": "2019" }, { "authors": "Rohan Taori; Ishaan Gulrajani; Tianyi Zhang; Yann Dubois; Xuechen Li; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "", "ref_id": "b31", "title": "Stanford alpaca: An instruction-following llama model", "year": "2023" }, { "authors": "Anthony Meng; Huat Tiong; Junnan Li; Boyang Li; Silvio Savarese; Steven Ch Hoi", "journal": "", "ref_id": "b32", "title": "Plug-and-Play VQA: Zeroshot VQA by conjoining large pretrained models with zero training", "year": "2022" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b33", "title": "LLaMA: Open and efficient foundation language models", "year": "2023" }, { "authors": "Yizhong Wang; Yeganeh Kordi; Swaroop Mishra; Alisa Liu; Noah A Smith; Daniel Khashabi; Hannaneh Hajishirzi", "journal": "", "ref_id": "b34", "title": "Self-Instruct: Aligning language model with self generated instructions", "year": "2022" }, { "authors": "Zilong Wang; Yiheng Xu; Lei Cui; Jingbo Shang; Furu Wei", "journal": "", "ref_id": "b35", "title": "LayoutReader: Pre-training of text and layout for reading order detection", "year": "2021" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Fei Xia; Ed Chi; V Quoc; Denny Le; Zhou", "journal": "Proceedings of the Advances in Neural Information Processing Systems", "ref_id": "b36", "title": "Chain-ofthought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Canwen Xu; Daya Guo; Nan Duan; Julian Mcauley", "journal": "", "ref_id": "b37", "title": "Baize: An open-source chat model with parameter-efficient tuning on self-chat data", "year": "2023" }, { "authors": "Yiheng Xu; Minghao Li; Lei Cui; Shaohan Huang; Furu Wei; Ming Zhou", "journal": "", "ref_id": "b38", "title": "LayoutLM: Pre-training of text and layout for document image understanding", "year": "2020" }, { "authors": "Yang Xu; Yiheng Xu; Tengchao Lv; Lei Cui; Furu Wei; Guoxin Wang; Yijuan Lu; Dinei Florencio; Cha Zhang; Wanxiang Che", "journal": "", "ref_id": "b39", "title": "LayoutLMv2: Multi-modal pre-training for visually-rich document understanding", "year": "2020" }, { "authors": "Zhengyuan Yang; Linjie Li; Kevin Lin; Jianfeng Wang; Chung-Ching Lin; Zicheng Liu; Lijuan Wang", "journal": "", "ref_id": "b40", "title": "The dawn of lmms: Preliminary explorations with gpt-4v (ision)", "year": "2023" }, { "authors": "Jiabo Ye; Anwen Hu; Haiyang Xu; Qinghao Ye; Ming Yan; Yuhao Dan; Chenlin Zhao; Guohai Xu; Chenliang Li; Junfeng Tian", "journal": "", "ref_id": "b41", "title": "mPLUG-DocOwl: Modularized multimodal large language model for document understanding", "year": "2023" }, { "authors": "Jiabo Ye; Anwen Hu; Haiyang Xu; Qinghao Ye; Ming Yan; Guohai Xu; Chenliang Li; Junfeng Tian; Qi Qian; Ji Zhang", "journal": "", "ref_id": "b42", "title": "UReader: Universal OCR-free visually-situated language understanding with multimodal large language model", "year": "2023" }, { "authors": "Qinghao Ye; Haiyang Xu; Guohai Xu; Jiabo Ye; Ming Yan; Yiyang Zhou; Junyang Wang; Anwen Hu; Pengcheng Shi; Yaya Shi", "journal": "", "ref_id": "b43", "title": "mPLUG-Owl: 
Modularization empowers large language models with multimodality", "year": "2023" }, { "authors": "Haoxuan You; Haotian Zhang; Zhe Gan; Xianzhi Du; Bowen Zhang; Zirui Wang; Liangliang Cao; Shih-Fu Chang; Yinfei Yang", "journal": "", "ref_id": "b44", "title": "Ferret: Refer and ground anything anywhere at any granularity", "year": "" }, { "authors": "Yuechen Yu; Yulin Li; Chengquan Zhang; Xiaoqiang Zhang; Zengyuan Guo; Xiameng Qin; Kun Yao; Junyu Han; Errui Ding; Jingdong Wang", "journal": "", "ref_id": "b45", "title": "StrucTexTv2: Masked visualtextual prediction for document image pre-training", "year": "2023" }, { "authors": "Li Zhang; Yang Biao; Liu Qiang; Ma Zhiyin; Zhang Shuo; Yang Jingxu; Liu Yuliang; Bai Xiang", "journal": "", "ref_id": "b46", "title": "Monkey: Image resolution and text label are important things for large multimodal models", "year": "" }, { "authors": "Yanzhe Zhang; Ruiyi Zhang; Jiuxiang Gu; Yufan Zhou; Nedim Lipka; Diyi Yang; Tong Sun", "journal": "", "ref_id": "b47", "title": "LLaVAR: Enhanced visual instruction tuning for text-rich image understanding", "year": "2008" }, { "authors": "Deyao Zhu; Jun Chen; Xiaoqian Shen; Xiang Li; Mohamed Elhoseiny", "journal": "", "ref_id": "b48", "title": "MiniGPT-4: Enhancing vision-language understanding with advanced large language models", "year": "2023" }, { "authors": "Ruben Ali Furkan Biten; Andres Tito; Lluis Mafla; Gomez; Minesh Marc ¸al Rusinol; C V Mathew; Ernest Jawahar; Dimosthenis Valveny; Karatzas", "journal": "IEEE", "ref_id": "b49", "title": "ICDAR 2019 competition on scene text visual question answering", "year": "2019" }, { "authors": "Zijian Hao Feng; Jingqun Wang; Jinghui Tang; Lu", "journal": "", "ref_id": "b50", "title": "Wengang Task Instruction template detection Can you furnish the bounding box coordinates", "year": "" }, { "authors": "Houqiang Zhou; Can Li; Huang", "journal": "", "ref_id": "b51", "title": "UniDoc: A universal large multimodal model for simultaneous text detection, recognition, spotting and understanding", "year": "2023" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b52", "title": "BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "Haotian Liu; Chunyuan Li; Qingyang Wu; Yong Jae Lee", "journal": "", "ref_id": "b53", "title": "Visual instruction tuning", "year": "2023" }, { "authors": "Amanpreet Singh; Vivek Natarajan; Meet Shah; Yu Jiang; Xinlei Chen; Dhruv Batra; Devi Parikh; Marcus Rohrbach", "journal": "", "ref_id": "b54", "title": "Towards VQA models that can read", "year": "2019" }, { "authors": "Qinghao Ye; Haiyang Xu; Guohai Xu; Jiabo Ye; Ming Yan; Yiyang Zhou; Junyang Wang; Anwen Hu; Pengcheng Shi; Yaya Shi", "journal": "", "ref_id": "b55", "title": "mPLUG-Owl: Modularization empowers large language models with multimodality", "year": "2023" }, { "authors": "Yanzhe Zhang; Ruiyi Zhang; Jiuxiang Gu; Yufan Zhou; Nedim Lipka; Diyi Yang; Tong Sun", "journal": "", "ref_id": "b56", "title": "LLaVAR: Enhanced visual instruction tuning for text-rich image understanding", "year": "2023" }, { "authors": "Deyao Zhu; Jun Chen; Xiaoqian Shen; Xiang Li; Mo-", "journal": "", "ref_id": "b57", "title": "", "year": "" } ]
[ { "formula_coordinates": [ 7, 237.06, 246.7, 5.09, 6.05 ], "formula_id": "formula_0", "formula_text": "Is" } ]
10.1145/3419439
2023-11-22
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b10", "b17", "b1", "b41", "b14", "b7" ], "table_ref": [], "text": "Object detection is a critical technology for intelligent traffic systems [11,18]. In urban scenarios, accurately locating objects is essential for improving the security of autonomous driving systems. With the success of deep learning [2], typical object detection tasks have made significant breakthroughs in the computer vision community. This development has also effectively supported object detection in urban scenarios. Fig. 2. Visualization of the data distribution in two tasks using t-SNE [42]. The feature is extracted using the ResNet [15] pre-trained on ImageNet [8]. Note that different colors denote different domains. To generate the figure, we randomly select 100 samples from each domain.\ninformation to combat overfitting caused by varying color characteristics across different scenarios.\nOur feature-level augmentation, Dual-Style Memory (DSM), leverages style information from the entire training set to increase diversity by switching styles of objects and backgrounds. We conduct numerous experiments to demonstrate the effectiveness of our approach and confirm the effectiveness of each module. Additionally, we analyze the efficacy of DoubleAUG using existing domain generalization theory. Our contributions include the development of DoubleAUG, an effective method for single-domain object detection, and the confirmation of the effectiveness of CP and DSM. The details are as follows:\n• We present a simple but powerful double data augmentation method called DoubleAUG. This approach can generate a variety of color perturbations and utilize style information from the entire training set, thereby improving the robustness and efficacy of the model in detecting objects in unseen domains. Moreover, our method is plug-and-play and can be integrated into existing methods to further improve model performance. • To implement DoubleAUG, we introduce two components: Color Perturbation, which disturbs the RGB channels to enhance the color information in the image-level space, and Dual-Style Memory, which mines diverse style information in the feature space. These two components work together to provide a comprehensive data augmentation strategy for improving the robustness of object detection models in unseen domains. • We conduct extensive evaluations of our approach on multiple standard benchmark datasets and demonstrate that our approach outperforms the state-of-the-art in terms of accuracy. Additionally, we perform ablation studies and further analysis to validate the effectiveness of our method. Furthermore, we also analyze the efficacy of the proposed method using the existing domain generalization theory.\nThe structure of this paper is as follows. In Section 2, we provide a review of related work. Section 3 details our proposed method, including Color Perturbation and Dual-Style Memory. Section 4 analyzes the efficacy of the proposed method using existing domain generalization theory. Experimental results and analysis are presented in Section 5, followed by the conclusion in Section 6." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "In this section, we review some related work on object detection, domain adaptive object detection, and domain generalization. Detailed introductions will be given in the following parts." 
}, { "figure_ref": [], "heading": "Object Detection", "publication_ref": [ "b11", "b37", "b10", "b13", "b37", "b35", "b11", "b37", "b26" ], "table_ref": [], "text": "Object detection is a fundamental task in computer vision, which has been extensively studied for several years. Following the lead of RCNN [12,38], numerous object detection frameworks [11,14,38] based on convolutional networks have been developed in recent years, which have significantly pushed forward the state-of-the-art performance. Object detection models can be broadly classified into two types: one-stage and two-stage detection. One-stage object detection refers to a class of object detection methods that skip the region proposal stage of two-stage models and directly run detection over a dense sampling of locations. YOLO [36] outputs sparse detection results with high computation speed. Two-stage detectors generate region proposals for detection, for instance, Faster-RCNN [12,38], which introduces the Region Proposal Network (RPN) for proposal generation. FPN [27] employs multiple layers for the detection of different scales." }, { "figure_ref": [], "heading": "Domain Adaptive Object Detection", "publication_ref": [ "b4", "b15", "b16", "b38", "b54", "b55", "b51", "b3", "b50", "b25" ], "table_ref": [], "text": "To address the domain shift issue in object detection, several unsupervised domain adaptive methods have been proposed, such as [5,16,17,39,55,56]. These methods aim to align the featurelevel distributions between the source and target domains. For instance, Wu et al. [52] propose a teacher-student framework to extract knowledge from labeled source domains and guide the student network to learn detectors in the unlabeled target domain. Some methods [4,51] attempt to extract instance-invariant features to improve the generalization ability. Additionally, Li et al. [26] introduce a framework that employs a Graph-embedded Semantic Completion module to complete mismatched semantics and model class-conditional distributions with graphs. Although these methods have demonstrated their effectiveness, they typically require access to both the source and target domain data during training, limiting their applicability to domain generalization tasks." }, { "figure_ref": [], "heading": "Domain Generalization", "publication_ref": [ "b28", "b12", "b40", "b57", "b59", "b62", "b23", "b42", "b45", "b65", "b66", "b2", "b24", "b43", "b60", "b29", "b30", "b58", "b61", "b60" ], "table_ref": [], "text": "Domain generalization aims at extracting knowledge from one or multiple source domains so as to generalize well to unseen target domains [29]. Some existing DG methods [13,41,58,60,63] are proposed to minimize the difference among source domains for learning domain-invariant representations. Another popular way to address DG problems is domain augmentation [24,43,46,66,67], which create samples from fictitious domains. Besides, other DG methods are proposed, such as learning strategies [3,25,44,61] and so on [30,31,59,62]. For example, in the training stage, Zhang et al. [61] develop a multi-view regularized meta-learning algorithm that employs multiple optimization trajectories to produce a suitable optimization direction for model updating. Although current DG methods achieve the promising results, most of them use multi-domain data to train the model, which is unrealistic for some real-world applications." 
}, { "figure_ref": [], "heading": "Single Domain Generalization", "publication_ref": [ "b42", "b42", "b34", "b47", "b8", "b46", "b49", "b46", "b49" ], "table_ref": [], "text": "Recently, a new task called single domain generalization has been proposed [43] , which aims to generalize a model trained on one source domain to any unseen domains. Most existing methods solve this task by employing data augmentation and feature normalization. For example, Volpi et al. [43] and Qiao et al. [35] explore the use of adversarial mechanisms to solve this task, which helps promote large domain transportation in the input space. Wang et al. [48] aim to improve generalization by alternating diverse sample generation and discriminative style-invariant representation learning. Fan et al. [9] propose a generic normalization approach, adaptive standardization and rescaling normalization, to improve generalization. However, these methods cannot be directly applied to single domain generalization object detection (Single-DGOD). Therefore, some methods [47,50] have attempted to solve this problem from different angles. Wang et al. [47] attempt to find the most different object proposals in adjacent frames in a video and then cycle back to itself for self-supervision. Wu et al. [50] present the cyclic-disentangled self-distillation method by disentangling domain-invariant representations from domain-specific representations without the supervision of domain-related annotations. In this paper, we propose a novel approach that aims to augment data at both the feature-level and image-level by exchanging the object and background information of different images in style and distorting the RGB channels of images to solve the Single-DGOD problem, respectively." }, { "figure_ref": [ "fig_1" ], "heading": "THE PROPOSED METHOD", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a novel approach, called DoubleAUG, to enhance the generalization capacity of the model to the unseen domain, as illustrated in Fig. 3. Specifically, we introduce color perturbations to conduct image augmentation, which effectively alleviates overfitting caused by color variations in the unseen domain. Additionally, we develop a style memory to explore and mine diverse style information from the entire training data, which further improves the model's generalization ability. The detailed description of the proposed method will be presented in the following section." }, { "figure_ref": [ "fig_0", "fig_3", "fig_3", "fig_3" ], "heading": "Color Perturbation", "publication_ref": [ "b33", "b21" ], "table_ref": [], "text": "In the single-domain urban-object detection task, the lack of diversity in the training set can lead to overfitting of the model to the training data. This is particularly true since the images in the dataset are collected at different times and in different weather conditions, resulting in significant color variations as shown in Figs. 1 and2. To address this issue and enhance the color information in the training set, we propose Color Perturbation (CP), which involves randomly exchanging the RGB channels. Specifically, [𝑅, 𝐺, 𝐵] can be converted to to [𝑅, 𝐵, 𝐺], [𝐵, 𝑅, 𝐺], [𝐺, 𝑅, 𝐵], [𝐵, 𝐺, 𝑅] and [𝐺, 𝐵, 𝑅]. For example, regarding the Color Perturbation (CP) scheme, we employ a random shuffling process on the RGB channels of each image. 
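As a minimal illustration of this shuffling operation (the formal definition follows below), the CP augmentation can be sketched as follows; the function name is ours, and details such as how often the original channel order is kept follow the training recipe described in the text.

```python
import random
import numpy as np


def color_perturbation(img: np.ndarray) -> np.ndarray:
    # img: H x W x 3 array in RGB order. Randomly permute the color channels,
    # e.g. order [2, 0, 1] maps (R, G, B) -> (B, R, G).
    order = [0, 1, 2]
    random.shuffle(order)
    return img[:, :, order]
```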
To elaborate, let us consider an RGB image represented as $I \in \mathbb{R}^{h \times w \times 3}$, where $h$ and $w$ denote the height and width, and 3 signifies the three color channels, namely Red (R), Green (G), and Blue (B). When applying the CP scheme to an image $I$ to produce an augmented image $\hat{I}$, we perform a random shuffle operation on the index vector $r = [0, 1, 2]$ to obtain $\hat{r}$, which can take one of the following permutations:
$$\hat{r} \in \big\{[0, 1, 2],\ [0, 2, 1],\ [1, 0, 2],\ [1, 2, 0],\ [2, 0, 1],\ [2, 1, 0]\big\}, \quad (1)$$
and $\hat{I}$ is obtained by reordering the channels of $I$ according to $\hat{r}$. Fig. 4 shows the data distribution of the original and generated images using t-SNE. The color perturbation introduces diverse information and effectively mitigates overfitting. In the training stage, we randomly combine a color order and the raw image for model training.
Remark. Currently, image-level data augmentation is widely used in the computer vision community, with ColorJitter [34] being a popular method. This method randomly adjusts the brightness, contrast, and saturation of an image, and has demonstrated its effectiveness in domain generalization classification tasks. However, it is important to note that images captured in urban environments often contain small objects, which differ from the objects used in image classification tasks. For example, in Fig. 4, some cars are extremely small. If ColorJitter is used for data augmentation, it may result in the loss of important information about these small objects. We will perform a comparative experiment in the experimental section to further investigate this issue.
While images generated by the CP scheme might not frequently occur in real-world scenes, they differ from the original image in terms of feature distribution while maintaining semantic consistency. Refer to Figure 4 for an illustration: the color of these images varies, yet the objects remain consistent with the original image. Thus, training the model with these images encourages a color-invariant model. In the context of single-domain generalization tasks, images from nighttime scenes differ from those captured during the daytime in terms of color information. In summary, incorporating diverse color information proves beneficial in enabling the model to capture more color-invariant details, a key factor for our task.
Additionally, the proposed CP module can be combined with other image-level augmentations, such as copy-paste, mosaic, and mixup in YOLOv5 [22], to further enhance the model's generalization capability, which will be validated in the experiment." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Dual-Style Memory", "publication_ref": [ "b20", "b14" ], "table_ref": [], "text": "In this part, we first review Adaptive Instance Normalization (AdaIN) [21], which transfers the style information of one image to another. In particular, this work points out that the statistics of feature maps represent the image style information. We define two groups of feature maps as $f, \hat{f} \in \mathbb{R}^{C \times H \times W}$ for two images, where $C$ is the number of channels, and $H$ and $W$ are the height and the width of the feature maps. Assuming the goal is to transfer the style information from $\hat{f}$ to $f$, we can implement it by:
$$\mathrm{AdaIN}(f) = \hat{\sigma}\,\frac{f - \mu}{\sigma} + \hat{\mu}, \quad (2)$$
where $\mu, \sigma, \hat{\mu}, \hat{\sigma} \in \mathbb{R}^{C}$ (i.e., $\mu = [\mu_1, \cdots, \mu_C]$, $\sigma = [\sigma_1, \cdots, \sigma_C]$, $\hat{\mu} = [\hat{\mu}_1, \cdots, \hat{\mu}_C]$, and $\hat{\sigma} = [\hat{\sigma}_1, \cdots, \hat{\sigma}_C]$), and $\mu$ and $\sigma$ represent the channel-wise mean and standard deviation (i.e., statistics) of $f$.
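To make Eq. (2) concrete, the following PyTorch-style sketch computes channel-wise statistics (defined formally in Eqs. (3) and (4) below) and applies the AdaIN transfer; it is a minimal illustration written for this exposition rather than the released implementation. Flattening the spatial dimensions also makes the same routine applicable to the background and object patches used by the dual-style memory.

```python
import torch


def channel_stats(feat: torch.Tensor, eps: float = 1e-6):
    # feat: (C, N) feature map with the spatial positions flattened into N.
    mu = feat.mean(dim=1, keepdim=True)
    sigma = (feat.var(dim=1, unbiased=False, keepdim=True) + eps).sqrt()
    return mu, sigma


def adain(content: torch.Tensor, style_mu: torch.Tensor, style_sigma: torch.Tensor) -> torch.Tensor:
    # Normalize the content feature with its own statistics, then re-scale and
    # shift with the style statistics, as in Eq. (2).
    mu, sigma = channel_stats(content)
    return style_sigma * (content - mu) / sigma + style_mu
```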
The statistics of the 𝑖-th channel of 𝑓 are computed as:
𝜇 𝑖 = (1/(𝐻𝑊 )) ∑_{ℎ=1}^{𝐻} ∑_{𝑤=1}^{𝑊} 𝑓 [𝑖, ℎ, 𝑤], (3)
𝜎 𝑖 = √( (1/(𝐻𝑊 )) ∑_{ℎ=1}^{𝐻} ∑_{𝑤=1}^{𝑊} (𝑓 [𝑖, ℎ, 𝑤] -𝜇 𝑖 )² + 𝜖 ), (4)
where 𝜖 is a small constant for numerical stability. Similarly, we can also obtain the statistics ( μ, σ) of f , which represent the image style information as mentioned before.
To enhance the generalization ability of the model, we generate augmented features to increase the diversity of the abstract style in the feature space. In the single-domain urban object detection task, where the style information is limited, we aim to extract the style information from the entire training data. Additionally, there may be a style discrepancy between the local objects and the background, as shown in Fig. 1. For example, in Fig. 1(a), the background is brighter than some cars in the dark region of the image with the red box. Moreover, there exists a style difference between different images based on the local view.
Therefore, in this paper, we propose dual-style memory (DSM) to reach this goal, which saves the style information in a dual-memory. To be specific, we first generate two style memories to save the object style information and the background style information, respectively. Here, we use M obj and M back to denote the memories used for saving the style information of the object and the background. We assume that M obj and M back have saved some style information. For an input image with 𝑁 𝑜 object(s), its intermediate feature maps from a convolutional layer (e.g., the feature after each block in ResNet [15]) are defined as 𝑓 . We split 𝑓 into different patches according to the ground truth, e.g., 𝑓 𝑏 ∈ R 𝐶 ×𝐴 𝑏 and 𝑓 𝑜1 ∈ R 𝐶 ×𝐴 𝑜1 , • • • , 𝑓 𝑜𝑁 𝑜 ∈ R 𝐶 ×𝐴 𝑜𝑁𝑜 are used to denote the feature maps of the background and the object set, where 𝐶 is the number of channels and 𝐴 𝑏 and 𝐴 𝑜𝑖 are the areas of the background and the 𝑖-th object in the spatial dimension. For the first object, the style information can be represented as 𝜇 𝑜1 = [𝜇 𝑜1 1 , • • • , 𝜇 𝑜1 𝐶 ] and 𝜎 𝑜1 = [𝜎 𝑜1 1 , • • • , 𝜎 𝑜1 𝐶 ]. Thus, we can extract its style information of the 𝑖-th channel as:
𝜇 𝑜1 𝑖 = (1/𝐴 𝑜1 ) ∑_{𝑎=1}^{𝐴 𝑜1 } 𝑓 𝑜1 [𝑖, 𝑎], (5)
𝜎 𝑜1 𝑖 = √( (1/𝐴 𝑜1 ) ∑_{𝑎=1}^{𝐴 𝑜1 } (𝑓 𝑜1 [𝑖, 𝑎] -𝜇 𝑜1 𝑖 )² + 𝜖 ). (6)
Similarly, we can compute the style information of the background and all objects, and then save this information into the corresponding style memory. For example, (𝜇 𝑜1 , 𝜎 𝑜1 ) is saved into M obj . Based on M obj and M back , we can mine the diverse style information as:
f 𝑜1 = M back [𝑟 ] 𝜎 • (𝑓 𝑜1 -𝜇 𝑜1 )/𝜎 𝑜1 + M back [𝑟 ] 𝜇 , (7)
where M back [𝑟 ] 𝜎 and M back [𝑟 ] 𝜇 denote the standard deviation and mean of the 𝑟 -th style entry, which is randomly selected from M back . Meanwhile, we use the same scheme to enhance the style's diversity on all objects and the background. The detailed forward process of the proposed DSM is shown in Alg. 1." }, { "figure_ref": [ "fig_5" ], "heading": "Algorithm 1", "publication_ref": [ "b20" ], "table_ref": [], "text": "The forward process of dual-style memory (DSM). Compute the style information for each patch in F as Eqs. 5 and 6. // |M| is the number of elements in the queue, and 𝑁 𝑚 is the maximum length of the queue.
6: if |M back | >= 𝑁 𝑚 or |M obj | >= 𝑁 𝑚 then
7: Remove the earliest stored style information from the style memory.
8: end if
9: Save the corresponding style information to the style memory (M back or M obj ).
10: Randomly select the style information from the crossed style memory.
11: Conduct the AdaIN as Eq. 7 to normalize all patches in F.
12: end for
13: Splice all patches according to their original positions.
Remark. For the dual-style memory module, we use a fixed-length queue to implement it, which does not require a large amount of memory. As the training set is shuffled at each epoch, the available dual-style memory for a specific sample varies at each epoch, allowing us to extract more diverse information from other samples. Besides, we also conduct an experiment that uses a single shared memory to save all styles and accesses the style information from it (see Table 7); this comparison further highlights the importance of keeping separate object and background memories in our method.
The Dual-Style Memory (DSM) is motivated by two observations: feature statistics inherently encapsulate style information [21], and from the style perspective there is a distinct contrast between background and foreground, as illustrated in Fig. 5. Specifically, given that feature statistics effectively represent an image's style, we compute the statistics, namely the mean (𝜇) and variance (𝜎), of images from the first layer of ResNet-101, which is pre-trained on ImageNet. It is evident from the figure that there is a noticeable distinction in style information between the foreground and background. Furthermore, as indicated by the visual statistics, the foreground and background elements from different images also exhibit variations. To address this, we have introduced a dual-style memory that facilitates the generation of diverse samples for model training. This memory repository is designed to store the style information corresponding to both foreground and background. During the augmentation process, we randomly and interchangeably draw style information from this dual-memory. This means we can utilize the foreground style for the background or the background style for the foreground. This strategy generates diverse style information for each object and the background." }, { "figure_ref": [], "heading": "EXPLANATION OF DOUBLEAUG VIA EXISTING THEORY", "publication_ref": [ "b0" ], "table_ref": [], "text": "In this section, we utilize the domain generalization error bound [1] to further demonstrate the effectiveness of our method. In the following part, we first review the domain generalization error bound and then analyze our method based on it. Theorem 1 [1] bounds the risk of a hypothesis ℎ on the target domain as
𝜖 𝑡 (ℎ) ≤ ∑_{𝑖=1}^{𝑀} 𝜋 * 𝑖 𝜖 𝑖 (ℎ) + (𝛾 + 𝜌)/2 + 𝜆 H (P 𝑡 𝑋 , P * 𝑋 ), (8)
where 𝛾 := min 𝜋 𝑑 H (P 𝑡 𝑋 , ∑_{𝑖=1}^{𝑀} 𝜋 𝑖 P 𝑖 𝑋 ) (with minimizer 𝜋 * ) is the distance of P 𝑡 𝑋 from the convex hull Λ of the 𝑀 training domains, P * 𝑋 := ∑_{𝑖=1}^{𝑀} 𝜋 * 𝑖 P 𝑖 𝑋 is the best approximator within Λ, 𝜌 := sup P ′ 𝑋 ,P ′′ 𝑋 ∈Λ 𝑑 H (P ′ 𝑋 , P ′′ 𝑋 ) is the diameter of Λ, and 𝜆 H (P 𝑡 𝑋 , P * 𝑋 ) is the ideal joint risk across the target domain and the training domain (𝑃 * 𝑋 ) with the most similar distribution to the target domain.
In Theorem 1, the first item aims to minimize the empirical error on the training set, which can be achieved by the general loss function for object detection. The last item can be treated as a constant. Therefore, we primarily focus on analyzing the second item, which involves 𝛾 and 𝜌.
Firstly, 𝛾 represents the discrepancy between the combination of all training domains and the target domain. In the single-domain generalization object detection setting, there is a risk that if the testing domain is far from the training domain in terms of distribution, the model's generalization will be poor for all testing samples. However, our method generates diverse style information based on multiple different distributions, which can be viewed as different domains. Therefore, introducing diverse style information based on CP and DSM can be beneficial in reducing overfitting to the raw single training set and effectively mitigating the aforementioned risk.
Secondly, 𝜌 indicates the maximum distance between different domains.
In our method, we extract diverse style information from the training data itself using DSM, while the color perturbation only involves switching the RGB channels. This shows that generating diverse style information in our method does not bring a large domain gap between training samples. Furthermore, we apply DSM to the shallow layer of the neural network (where it mainly focuses on style information), which also helps to prevent the generation of a large 𝜌 . In summary, our method has an advantage in reducing the generalization error bound from both the 𝛾 and 𝜌 perspectives in Eq. 8." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "This part describes the experimental setup and evaluation of the proposed method. Section 5.1 introduces the datasets and settings used in the experiments. Section 5.2 compares the proposed method with state-of-the-art generalizable object detection methods. Ablation studies are conducted in Section 5.3 to validate the effectiveness of various components in the proposed framework. Lastly, Section 5.4 further analyzes the properties of the proposed method." }, { "figure_ref": [], "heading": "Datasets and Experimental Settings", "publication_ref": [ "b48", "b6", "b4", "b15", "b39", "b18" ], "table_ref": [], "text": "5.1.1 Datasets. Diverse-Weather [49] is a dataset that includes five scenes with different weather conditions, including daytime-sunny, night-sunny, dusk-rainy, night-rainy, and daytime-foggy. The training set consists of 19,395 images from the daytime-sunny scene, while the testing sets include 8,313 images from the daytime-sunny scene, and 26,158, 3,501, 2,494, and 3,775 images from the night-sunny, dusk-rainy, night-rainy, and daytime-foggy scenes, respectively.\nSIM10k2Cityscapes. is a dataset that combines SIM10k and Cityscapes datasets. SIM10k [23] consists of 10,000 images rendered by the Grand Theft Auto (GTAV) gaming engine, with bounding boxes of 58,701 cars provided in the 10,000 training images. In our experiments, we randomly selected 9,000 images for training and 1,000 for testing. Cityscapes & Foggy Cityscapes & Rain Cityscapes. Cityscapes [7] is a traffic scene dataset for driving scenarios. The images are captured by a car-mounted video camera. It has 2,975 images in the training set, and 500 images in the validation set. We follow [5,16] to use the validation set as the target domain to test our method. Foggy Cityscapes [40] is a fog-rendered Cityscapes dataset, it has 8,877 images in the training set, and 1,473 images in the validation set, the same as Cityscapes we use the validation set as the target domain. Rain Cityscapes [19] is a rain-rendered Cityscapes dataset, it has 9,432 training images and 1,188 testing images, the same as Cityscapes and Foggy Cityscapes, we only use the validation set as the target domain. There are 8 categories with instance labels in all Cityscapes & Foggy Cityscapes & Rain Cityscapes, but the only car is used in this experiment since the only one is annotated in SIM 10k. Note that the Cityscapes & Foggy Cityscapes & Rain Cityscapes dataset is not dedicated to the detection, thus we take the tightest rectangle of its instance masks as ground-truth bounding boxes." }, { "figure_ref": [], "heading": "Implementation Details.", "publication_ref": [ "b33", "b53", "b36", "b21", "b36", "b14", "b27", "b53", "b21", "b27", "b21" ], "table_ref": [], "text": "We conduct all experiments using PyTorch 1.8.2 [34] with Detectron2 [54] library. 
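Since no reference code for the feature-level augmentation is included here, the following is a minimal PyTorch-style sketch of the dual-style memory swap described in Eqs. (5)-(7) and Alg. 1, applied to a single (C, A) feature patch; the class name, the queue handling, and the tensor shapes are our own illustrative assumptions, not the authors' actual implementation.

```python
from collections import deque
import random
import torch

class DualStyleMemory:
    """Fixed-length queues of (mean, std) style statistics for object and
    background patches, with crossed sampling as in Eq. (7)."""

    def __init__(self, max_len: int = 100, eps: float = 1e-6):
        self.obj = deque(maxlen=max_len)   # styles of foreground patches
        self.back = deque(maxlen=max_len)  # styles of background patches
        self.eps = eps

    def _stats(self, patch: torch.Tensor):
        # patch: (C, A) -> per-channel mean/std over its spatial positions
        mu = patch.mean(dim=1, keepdim=True)
        sigma = (patch.var(dim=1, keepdim=True, unbiased=False) + self.eps).sqrt()
        return mu, sigma

    def augment(self, patch: torch.Tensor, is_object: bool) -> torch.Tensor:
        """Save the patch style to its own memory, then re-stylize the patch
        with a randomly drawn entry from the *other* memory (crossed selection)."""
        mu, sigma = self._stats(patch)
        own, other = (self.obj, self.back) if is_object else (self.back, self.obj)
        own.append((mu.detach(), sigma.detach()))  # styles stored without gradients
        if len(other) == 0:                        # nothing to swap with yet
            return patch
        mu_new, sigma_new = random.choice(list(other))
        return sigma_new * (patch - mu) / sigma + mu_new
```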
For all experiments, we both use Faster R-CNN [37] and YOLOv5 [22] as our base detectors. In particular, YOLOv5s (14.12MB) is a smaller model than Faster R-CNN (232.2MB). We use mAP (%) as the main evaluation metric when IOU = 0.5. For Faster R-CNN [37], ResNet-101 [15] is taken as the backbone, and we use the weights pre-trained on COCO [28] in initialization (provided by Detectron2 [54]), all models are trained on 2 GPUs using SGD with a mini-batch size of 4, the momentum is 0.9, the max iterations is 100,000, and the learning rate is 0.001, we also apply warmup by 5,000 iterations. For YOLOv5 [22] we choose YOLOv5s as our baseline, and we use the weights pre-trained on COCO [28] in initialization (provided by YOLOv5 [22]), all models are trained on 2 GPUs using SGD with a mini-batch size of 44, the momentum is 0.843, the max epoch is 200, and the learning rate is 0.0032. Note that we obtain the final model from the last epoch for all experiments. Unless otherwise specified, Faster R-CNN is used as the default baseline in all experiments." }, { "figure_ref": [], "heading": "Comparison with State-of-the-art Methods", "publication_ref": [], "table_ref": [], "text": "Table 1. Experimental results (%) on the Diverse-Weather dataset. All methods are trained on Daytime-Sunny, and tested on Night-Sunny, Dusk-Rainy, Night-Rainy, and Daytime-Foggy. Note that All these methods with the available code provided by its authors are run on the Diverse-Weather dataset." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b32", "b32", "b31", "b19", "b5", "b46", "b48", "b48", "b48", "b48", "b48" ], "table_ref": [], "text": "Night-Sunny Dusk-rainy Night-Rainy Daytime-Foggy mAP SW [33] 39 In this part, we perform the experiment to compare our method with some SOTA methods, including SW [33], IBN-Net [32], IterNorm [20] ISW [6], CycConf [47] and CDSD [49]. Particularly, CycConf and CDSD are designed for the generalizable objection detection task. CycConf improves the generalization on the out-of-distribution dataset via a novel self-training method. CDSD is a recent method for the single-domain generalized object detection in the traffic scene, which aims to extract the domain-invariant feature by the cyclic-disentangled self-distillation. For all methods, we run the experiment based the available code provided by authors. Table 1 is the result of the Diverse-Weather dataset. In this experiment, we use the same dataset and dataset setting as CDSD. As observed in this table, our method outperforms all methods based on Faster R-CNN. Furthermore, when applying our method to CycConf and CDSD, it can also further enhance the generalizable capability. Note that, the result of CDSD differs from the result in [49], because the dataset provided by [49] is not divided into the training set and testing set. Hence, although we split the dataset according to the number of samples the same as [49], the training samples and testing samples are not completely consistent with [49].\nMoreover, we also conduct the comparison in the case from SIM10k to Cityscapes, as reported in Table 2. Since SIM10k is from the game, which is the virtual dataset, it has a large difference when compared to Cityscapes collected from the real-world city scenario. Similar to the above analysis in Table 1, \"Faster R-CNN\" and \"Faster R-CNN+Ours\" exists an obvious difference in all domains." 
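The mAP numbers reported in Tables 1 and 2 rely on the standard IoU >= 0.5 matching criterion mentioned in the implementation details; as a generic reference (not code from the paper), the per-pair IoU can be computed as follows.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2).
    A detection counts as correct for mAP@0.5 when IoU >= 0.5 with a
    ground-truth box of the same class."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)
```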
}, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [ "tab_4", "tab_5" ], "text": "In this part, we perform experiments to sufficiently validate the effectiveness of each module in the proposed DoubleAUG on the Diverse-Weather and SIM10k2Cityscapes datasets. The experimental results are listed in Tables 3 and 4. As seen in these two tables, both the proposed color perturbation (CP) and dual-style memory (DSM) can improve the model's generalization on the two datasets. For example, on Diverse-Weather, the CP and DSM outperform the baseline by +1.18% (35.82 vs. 34.64) and +2.12% (36.76 vs. 34.64), respectively, which confirms the efficacy of these proposed modules. Furthermore, better performance can be obtained when combining the CP and DSM together. In addition, we also observe that the improvement brought by our method is particularly significant on SIM10k2Cityscapes, which is because of the large domain gap between the virtual data (SIM10K) and the real-world data (Cityscapes). Therefore, our method can achieve a great performance improvement when the unseen domain is obviously different from the training set.
Table 2. Experimental results (%) of domain generalization from SIM10K to Cityscapes. Raw, Rain and Foggy are the different domains of Cityscapes. Note that we run all methods on SIM10k2Cityscapes based on the available code provided by their authors." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b32", "b22" ], "table_ref": [], "text": "Raw Rain Foggy mAP" }, { "figure_ref": [ "fig_6" ], "heading": "Further Analysis", "publication_ref": [], "table_ref": [ "tab_6", "tab_7" ], "text": "Comparison between the proposed CP and ColorJitter. ColorJitter is a type of image data augmentation that randomly changes the brightness, contrast and saturation of an image, and it has been widely used in computer vision. In this experiment, we compare the proposed color perturbation with it on Diverse-Weather and SIM10k2Cityscapes. The experimental results are shown in Tables 5 and 6. As observed in these tables, the proposed color perturbation achieves better performance than ColorJitter, e.g., the performance is increased by +0.48% (37.45 vs. 36.97) and +1.21% (56.38 vs. 55.17) on Diverse-Weather and SIM10k2Cityscapes, respectively. The main reason is that the small objects in urban-scene images can become blurry when using ColorJitter, as illustrated in Fig. 6.
Evaluation on style memory used in different layers. In this experiment, we report the experimental results when using the proposed dual-style memory (DSM) in different layers, as given in Fig. 7. ResNet consists of four blocks, so we can apply the DSM after each block. Overall, we find that using the DSM after the first block produces the best result. The information from the shallow layers of the neural network captures color, texture, and so on, which can be viewed as style information, while the information from the deep layers carries semantic information. Hence, using the proposed DSM to enrich the style in the shallow layer is reasonable.
Further evaluation on the DSM. We further evaluate the necessity of these components in the DSM, as reported in Table 7.
In this experiment, we either select the style information for the object (background) from the object (background) memory (i.e., \"no-exchange\" in Table 7), or select the style information for the object (background) from the background (object) memory (i.e., \"exchange\" in Table 7). As seen in Table 7, the crossed selection is better than the corresponding selection. In addition, we also perform the experiment using one style memory for saving both object and background styles. As seen in Table 7, using two independent memories for saving object and background styles, respectively, is more effective.
Fig. 7. Experimental results of the style memory used in different layers on Diverse-Weather. It is worth noting that \"L1\" denotes using the DSM after the first block, and \"L1&2\" indicates using the DSM after the first and second blocks simultaneously.
Table 7. Further evaluation for the DSM on Diverse-Weather. In the top, \"no-exchange\" is selecting style information for the object (background) from the object (background) memory, and \"exchange\" is selecting the style information for the object (background) from the background (object) memory. In the bottom, \"one memory\" is using one style memory for saving both object and background styles, and \"divided memory\" is using two independent memories for saving object and background styles, respectively.
Experimental results of the DSM with different memory sizes. We conduct the experiment to observe the influence of different memory sizes in the proposed dual-style memory. As seen in Fig. 8, we obtain the best result when the memory size is set to 100. We use this setting in all experiments." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b67", "b9" ], "table_ref": [ "tab_10", "tab_11", "tab_12", "tab_13", "tab_14", "tab_13", "tab_14" ], "text": "Comparison between the proposed DSM and MixStyle. MixStyle [68] is an augmentation method that mixes the style information of the images within a batch. Since our DSM does not introduce extra information (i.e., it only mines the style information from the training set), it is fair to compare them. The experimental results are listed in Table 8. We can observe that the DSM outperforms MixStyle by 0.92 (36.76 vs. 35.84) on the Diverse-Weather dataset.
Evaluation of the stability of the proposed method. We conduct five experiments with different random seeds to show the stability of the proposed method, as reported in Table 9. As seen, the STD of the baseline is 0.25, while that of our method is 0.08. This result shows that our method is stable.
Experimental results on the source domain. We show the results on the source domain in Table 10. We find that our method decreases the performance on the source domain when compared with the baseline, which can be explained by the fact that our method effectively reduces the overfitting risk in the training stage. Hence, our DoubleAUG has the ability to generalize well to unseen domains.
Experimental results of different modules based on YOLOv5s. We conduct the experiment based on YOLOv5s, which is a smaller model than the Faster R-CNN based on ResNet-101. Besides, unlike the two-stage Faster R-CNN, it is a one-stage object detection method. We report the experimental results in Tables 11 and 12. It is worth noting that we first perform the experiment based on the clean YOLOv5s (i.e., removing the augmentation schemes including copy-paste, mosaic and mixup).
As displayed in these two tables, each module in our method is effective; especially on SIM10k2Cityscapes, our method achieves a significant improvement. In addition, we also conduct the experiment based on the whole YOLOv5s (i.e., using all raw augmentation schemes in YOLOv5s). As seen in Tables 11 and 12, our method can also achieve an obvious improvement.
Comparison of our method with the two-stage scheme. To further demonstrate the effectiveness of our approach, we first leverage the recent method proposed in [10], which was published in CVPR 2023, to enhance image quality, and subsequently conduct the detection process on the enhanced images. The results of these experiments are presented in Tabs. 13 and 14. As observed in these tables, our method clearly outperforms the two-stage scheme. " }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a simple yet effective approach, DoubleAUG, to address the single-domain generalization problem in object detection tasks in urban scenes. Our approach comprises two modules: image-level color perturbation (CP) and feature-level dual-style memory (DSM). The CP module randomly shuffles the RGB channels to generate diverse color information, while the DSM module utilizes object and background style memories to save and extract diverse style information across the entire dataset. We conduct experiments on multiple tasks and settings to demonstrate the effectiveness of our proposed method. Additionally, we employ existing domain generalization theory to analyze the properties of our approach.\nAs noted in our experiment, our method effectively mitigates overfitting to the source domain, as demonstrated in Tab. 10. Consequently, when our model is employed in scenarios resembling the training domain, its performance may exhibit a decrease compared to the baseline. This situation is particularly challenging in real-world applications, as distinguishing the domain of origin for a given image is often not possible. In our future work, we intend to enhance the model's performance on the source domain while simultaneously preserving its generalization capabilities to other unseen domains." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "The work is supported by NSFC Program (Grants No. 62206052, 62125602, 62076063) and Jiangsu Natural Science Foundation Project (Grant No. BK20210224)." } ]
Object detection in urban scenarios is crucial for autonomous driving in intelligent traffic systems. However, unlike conventional object detection tasks, urban-scene images vary greatly in style. For example, images taken on sunny days differ significantly from those taken on rainy days. Therefore, models trained on sunny day images may not generalize well to rainy day images. In this paper, we aim to solve the single-domain generalizable object detection task in urban scenarios, meaning that a model trained on images from one weather condition should be able to perform well on images from any other weather conditions. To address this challenge, we propose a novel Double AUGmentation (DoubleAUG) method that includes image-and feature-level augmentation schemes. In the image-level augmentation, we consider the variation in color information across different weather conditions and propose a Color Perturbation (CP) method that randomly exchanges the RGB channels to generate various images. In the feature-level augmentation, we propose to utilize a Dual-Style Memory (DSM) to explore the diverse style information on the entire dataset, further enhancing the model's generalization capability. Extensive experiments demonstrate that our proposed method outperforms state-of-the-art methods. Furthermore, ablation studies confirm the effectiveness of each module in our proposed method. Moreover, our method is plug-and-play and can be integrated into existing methods to further improve model performance.
DoubleAUG: Single-domain Generalized Object Detector in Urban via Color Perturbation and Dual-style Memory
[ { "figure_caption": "Fig. 1 .1Fig. 1. Images from two tasks. In both figures, all images are from different domains. The image with the red box represents the image from the training set, while the other images belong to the testing domain. As illustrated, there is an evident difference in style between the training and testing samples.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. An illustration of the proposed DoubleAUG, which consists of the image-level Color Perturbation (CP) and the feature-level Dual-Style Memory (DSM). It is worth noting that our method is plug-and-play and can be inserted into existing methods.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "[2,1, 0]. This is mathematically defined as: Î [:, :, 0] = 𝐼 [:, :, r [0]]; Î [:, :, 1] = 𝐼 [:, :, r [1]]; Î [:, :, 2] = 𝐼 [:, :, r [2]].", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Results with color perturbation. Note that the red box indicates the original image.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig.5. An illustration of the statistics of different objects and backgrounds. These statistics (i.e., mean (𝜇) and variance (𝜎)) with 1024-dimension are captured from the first layer of the ResNet-101 pre-trained on ImageNet. In this figure, \"back\" denotes the background, and \"objX\" is the object (i.e., foreground). The first column is the image, and the second and third columns denote the mean and variance, respectively.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. The visual comparison between color jitter and our color perturbation.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "𝑀 𝑑 H (P 𝑡 𝑋 , 𝑀 𝑖=1 𝜋 𝑖 P 𝑖 𝑋 ) with minimizer 𝜋 * be the distance of 𝑃 𝑡 𝑋 from the convex hull Λ, and 𝑃 * 𝑋 := 𝑀 𝑖=1 𝜋 * 𝑖 𝑃 𝑖 𝑋 be the best approximator within Λ. 
Let 𝜌 := sup P ′", "figure_data": "𝑋 ,P𝑀 ∑︁𝜋 * 𝑖 𝜖 𝑖 (ℎ) +𝛾 + 𝜌 2+ 𝜆 H (P 𝑡 𝑋 , P * 𝑋 )),𝑖=1", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Evaluation of different moudles in our method on Diverse-Weather.", "figure_data": "MethodDaytime-Foggy Dusk-rainy Night-Rainy Night-Sunny mAPBaseline38.1438.0116.2346.1934.64Baseline+CP38.9537.6819.3847.2635.82Baseline+DSM39.3040.7120.8346.2136.76Baseline+CP+DSM39.0141.4822.1747.1337.45", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Evaluation of different moudles in our method on SIM10k2Cityscapes.", "figure_data": "MethodRaw Rain Foggy mAPBaseline51.21 36.73 35.16 41.03Baseline + CP57.42 46.77 44.23 49.48Baseline + DSM59.60 55.48 48.98 54.69Baseline + CP +DSM 61.66 56.75 50.74 56.38", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison between the proposed Color Perturbation (CP) and ColorJitter (CJ) on Diverse-Weather.", "figure_data": "MethodDaytime-Foggy Dusk-rainy Night-Rainy Night-Sunny mAPBaseline+DSM+CJ39.8240.7720.5646.7436.97Baseline+DSM+CP (Ours)39.0141.4822.1747.1337.45", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparison between the proposed Color Perturbation (CP) and ColorJitter (CJ) on SIM10k2Cityscapes. Ours) 61.66 56.75 50.74 56.38", "figure_data": "MethodRaw Rain Foggy mAPBaseline+DSM+CJ60.73 53.27 51.52 55.17Baseline+DSM+CP (Color JitterColor Perturbation", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Fig. 8. Experimental results of the DSM with different memory sizes on Diverse-Weather. Comparison between the proposed DSM and MixStyle on Diverse-Weather.", "figure_data": "45.645.445.746.245.845.939.7 39.240.2 38.941.0 39.240.7 39.341.6 38.940.1 38.735.936.436.736.836.636.219.221.021.120.820.120.01020501003001000Dusk-rainyDaytime-FoggyNight-rainyNight-SunnymAPMethodDaytime-Foggy Dusk-rainy Night-Rainy Night-Sunny mAPBaseline38.1438.0116.2346.1934.64+MixStyle38.9939.5017.6147.2635.84+DSM39.3040.7120.8346.2136.76", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Evaluation of the stability of the proposed method on Diverse-Weather. In this table, \"AVG\" means the averaged result five times, and \"STD\" is the corresponding standard deviation.", "figure_data": "Method Seed Daytime-Foggy Dusk-rainy Night-Rainy Night-Sunny mAP138.1438.0116.2346.1934.64237.2738.7916.2846.7634.78338.0438.8017.2846.7535.22437.0438.7315.9846.6534.60Baseline537.4538.4316.9446.7034.88AVG37.5938.5516.5446.6134.82STD0.480.340.540.240.25139.0141.4822.1747.1337.45239.4640.8321.5547.3937.31339.4541.0821.5747.5837.42439.3541.6821.7947.3437.54Ours539.2641.8521.1447.6137.47AVG39.3141.3821.6447.4137.44STD0.180.420.380.200.08", "figure_id": "tab_11", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Experimental results on the source domain in the Diverse-Weather and SIM10k2Cityscapes tasks.", "figure_data": "MethodDaytime-sunny SIM10KBaseline64.1889.15Baseline + DoubleAUG (Ours)61.0587.32", "figure_id": "tab_12", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Experimental results of different modules based on YOLOv5s on Diverse-Weather. 
\"w/o AUG\" means that we remove these augmentation schemes including copy-past, mosaic and mixup.", "figure_data": "MethodDaytime-Foggy Dusk-rainy Night-Rainy Night-Sunny mAPYOLOv5s w/o AUG25.533.510.438.126.9YOLOv5s w/o AUG+ CP28.234.512.138.628.4YOLOv5s w/o AUG+ CP+DSM31.733.416.238.930.1YOLOv5s28.436.814.539.529.8YOLOv5s + CP30.737.115.939.130.7YOLOv5s + CP+DSM32.137.517.639.231.6", "figure_id": "tab_13", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Experimental results of different modules based on YOLOv5s on SIM10k2Cityscapes. 'w/o AUG\" means that we remove these augmentation schemes including copy-past, mosaic and mixup.", "figure_data": "MethodRaw Rain Foggy mAPYOLOv5s w/o AUG24.7 11.813.916.8YOLOv5s w/o AUG+ CP28.6 18.218.521.8YOLOv5s w/o AUG+ CP+DSM 34.7 21.220.925.6YOLOv5s28.5 25.525.126.4YOLOv5s + CP31.5 26.927.428.6YOLOv5s + CP+DSM37.8 26.628.130.8", "figure_id": "tab_14", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Comparison our method with the two-stage scheme on Diverse-Weather.", "figure_data": "MethodNight-Sunny Dusk-rainy Night-Rainy Daytime-Foggy mAPBaseline38.1438.0116.2346.1934.64Two-stage37.8937.5118.4548.9335.70Ours39.0141.4822.1747.1337.45", "figure_id": "tab_15", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "Comparison our method with the two-stage scheme on SIM10k2Cityscapes. Method Raw Rain Foggy mAP Basline 51.20 36.74 35.16 41.03 Two-stage 52.30 37.45 34.17 41.31 Ours 61.66 56.75 50.74 56.38", "figure_data": "", "figure_id": "tab_16", "figure_label": "14", "figure_type": "table" } ]
Lei Qi; Peng Dong; Tan Xiong; Hui Xue; Xin Geng
[ { "authors": "I Albuqerqe; J Monteiro; M Darvishi; T H Falk; I Mitliagkas", "journal": "", "ref_id": "b0", "title": "Generalizing to unseen domains via distribution matching", "year": "2019" }, { "authors": "R Aversa; P Coronica; C De Nobili; S Cozzini", "journal": "Data Intelligence (DI)", "ref_id": "b1", "title": "Deep learning, feature learning, and clustering analysis for sem image classification", "year": "2020" }, { "authors": "C Chen; J Li; X Han; X Liu; Y Yu", "journal": "", "ref_id": "b2", "title": "Compound domain generalization via meta-knowledge encoding", "year": "2022" }, { "authors": "C Chen; Z Zheng; X Ding; Y Huang; Q Dou", "journal": "", "ref_id": "b3", "title": "Harmonizing transferability and discriminability for adapting object detectors", "year": "2020" }, { "authors": "Y Chen; W Li; C Sakaridis; D Dai; L V Gool", "journal": "", "ref_id": "b4", "title": "Domain adaptive faster R-CNN for object detection in the wild", "year": "2018" }, { "authors": "S Choi; S Jung; H Yun; J T Kim; S Kim; J Choo", "journal": "", "ref_id": "b5", "title": "Robustnet: Improving domain generalization in urban-scene segmentation via instance selective whitening", "year": "2021" }, { "authors": "M Cordts; M Omran; S Ramos; T Rehfeld; M Enzweiler; R Benenson; U Franke; S Roth; B Schiele", "journal": "", "ref_id": "b6", "title": "The cityscapes dataset for semantic urban scene understanding", "year": "2016" }, { "authors": "J Deng; W Dong; R Socher; L.-J Li; K Li; L Fei-Fei", "journal": "", "ref_id": "b7", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "X Fan; Q Wang; J Ke; F Yang; B Gong; M Zhou", "journal": "", "ref_id": "b8", "title": "Adversarially adaptive normalization for single domain generalization", "year": "2021" }, { "authors": "B Fei; Z Lyu; L Pan; J Zhang; W Yang; T Luo; B Zhang; B Dai", "journal": "", "ref_id": "b9", "title": "Generative diffusion prior for unified image restoration and enhancement", "year": "2023" }, { "authors": "R B Girshick; R-Cnn Fast", "journal": "", "ref_id": "b10", "title": "IEEE/CVF International Conference on Computer Vision (ICCV)", "year": "2015" }, { "authors": "R B Girshick; J Donahue; T Darrell; J Malik", "journal": "", "ref_id": "b11", "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "year": "2014" }, { "authors": "S Harary; E Schwartz; A Arbelle; P W J Staar; S A Hussein; E Amrani; R Herzig; A Alfassy; R Giryes; H Kuehne; D Katabi; K Saenko; R Feris; L Karlinsky", "journal": "", "ref_id": "b12", "title": "Unsupervised domain generalization by learning a bridge across domains", "year": "2022" }, { "authors": "K He; G Gkioxari; P Dollár; R B Girshick; R-Cnn Mask", "journal": "", "ref_id": "b13", "title": "IEEE/CVF International Conference on Computer Vision (ICCV)", "year": "2017" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b14", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "C Hsu; Y Tsai; Y Lin; M Yang", "journal": "", "ref_id": "b15", "title": "Every pixel matters: Center-aware feature alignment for domain adaptive object detector", "year": "" }, { "authors": "H Hsu; W Hung; H Tseng; C Yao; Y Tsai; M Singh; M Yang", "journal": "", "ref_id": "b16", "title": "Progressive domain adaptation for object detection", "year": "2019" }, { "authors": "Q Hu; S Paisitkriangkrai; C Shen; A Van Den Hengel; F Porikli", "journal": "IEEE Transactions on Intelligent Transportation Systems (TITS)", 
"ref_id": "b17", "title": "Fast detection of multiple objects in traffic scenes with a common detection framework", "year": "2016" }, { "authors": "X Hu; C Fu; L Zhu; P Heng", "journal": "", "ref_id": "b18", "title": "Depth-attentional features for single-image rain removal", "year": "2019" }, { "authors": "L Huang; Y Zhou; F Zhu; L Liu; L Shao", "journal": "", "ref_id": "b19", "title": "Iterative normalization: Beyond standardization towards efficient whitening", "year": "2019" }, { "authors": "X Huang; S J Belongie", "journal": "", "ref_id": "b20", "title": "Arbitrary style transfer in real-time with adaptive instance normalization", "year": "2017" }, { "authors": "G Jocher; Yolov", "journal": "", "ref_id": "b21", "title": "", "year": "" }, { "authors": "M Johnson-Roberson; C Barto; R Mehta; S N Sridhar; K Rosaen; R Vasudevan", "journal": "", "ref_id": "b22", "title": "Driving in the matrix: Can virtual worlds replace human-generated annotations for real world tasks", "year": "2017" }, { "authors": "J Kang; S Lee; N Kim; S Kwak", "journal": "", "ref_id": "b23", "title": "Style neophile: Constantly seeking novel styles for domain generalization", "year": "2022" }, { "authors": "D Li; Y Yang; Y Song; T M Hospedales", "journal": "", "ref_id": "b24", "title": "Learning to generalize: Meta-learning for domain generalization", "year": "2018" }, { "authors": "W Li; X Liu; Y Yuan", "journal": "", "ref_id": "b25", "title": "SIGMA: semantic-complete graph matching for domain adaptive object detection", "year": "2022" }, { "authors": "T Lin; P Dollár; R B Girshick; K He; B Hariharan; S J Belongie", "journal": "", "ref_id": "b26", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "T Lin; M Maire; S J Belongie; J Hays; P Perona; D Ramanan; P Dollár; C L Zitnick", "journal": "", "ref_id": "b27", "title": "Microsoft COCO: common objects in context", "year": "2014" }, { "authors": "Y Liu; Z Xiong; Y Li; Y Lu; X Tian; Z.-J Zha", "journal": "ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM)", "ref_id": "b28", "title": "Category-stitch learning for union domain generalization", "year": "2023" }, { "authors": "R Meng; X Li; W Chen; S Yang; J Song; X Wang; L Zhang; M Song; D Xie; S Pu", "journal": "", "ref_id": "b29", "title": "Attention diversification for domain generalization", "year": "2022" }, { "authors": "S Min; N Park; S Kim; S Park; J Kim", "journal": "", "ref_id": "b30", "title": "Grounding visual representations with texts for domain generalization", "year": "2022" }, { "authors": "X Pan; P Luo; J Shi; X Tang", "journal": "", "ref_id": "b31", "title": "Two at once: Enhancing learning and generalization capacities via ibn-net", "year": "2018" }, { "authors": "X Pan; X Zhan; J Shi; X Tang; P Luo", "journal": "", "ref_id": "b32", "title": "Switchable whitening for deep representation learning", "year": "2019" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga; A Desmaison; A Köpf; E Z Yang; Z Devito; M Raison; A Tejani; S Chilamkurthy; B Steiner; L Fang; J Bai; S Chintala; Pytorch", "journal": "", "ref_id": "b33", "title": "An imperative style, high-performance deep learning library", "year": "" }, { "authors": "F Qiao; L Zhao; X Peng", "journal": "", "ref_id": "b34", "title": "Learning to learn single domain generalization", "year": "2020" }, { "authors": "J Redmon; A Farhadi", "journal": "", "ref_id": "b35", "title": "Yolov3: An incremental improvement", "year": 
"2018" }, { "authors": "S Ren; K He; R B Girshick; J Sun", "journal": "", "ref_id": "b36", "title": "Faster R-CNN: towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "S Ren; K He; R B Girshick; J Sun", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)", "ref_id": "b37", "title": "Faster R-CNN: towards real-time object detection with region proposal networks", "year": "2017" }, { "authors": "K Saito; Y Ushiku; T Harada; K Saenko", "journal": "", "ref_id": "b38", "title": "Strong-weak distribution alignment for adaptive object detection", "year": "2019" }, { "authors": "C Sakaridis; D Dai; L V Gool", "journal": "International Journal of Computer Vision (IJCV)", "ref_id": "b39", "title": "Semantic foggy scene understanding with synthetic data", "year": "2018" }, { "authors": "D Teney; E Abbasnejad; S Lucey; Van Den; A Hengel", "journal": "", "ref_id": "b40", "title": "Evading the simplicity bias: a diverse set of models discovers solutions with superior OOD generalization", "year": "2022" }, { "authors": "L Van Der Maaten; G Hinton", "journal": "Journal of machine learning research (JMLR)", "ref_id": "b41", "title": "Visualizing data using t-sne", "year": "2008" }, { "authors": "R Volpi; H Namkoong; O Sener; J C Duchi; V Murino; S Savarese", "journal": "", "ref_id": "b42", "title": "Generalizing to unseen domains via adversarial data augmentation", "year": "2018" }, { "authors": "C Wan; X Shen; Y Zhang; Z Yin; X Tian; F Gao; J Huang; X Hua", "journal": "", "ref_id": "b43", "title": "Meta convolutional neural networks for single domain generalization", "year": "2022" }, { "authors": "J Wang; C Lan; C Liu; Y Ouyang; T Qin", "journal": "IJCAI", "ref_id": "b44", "title": "Generalizing to unseen domains: A survey on domain generalization", "year": "2021" }, { "authors": "S Wang; L Yu; C Li; C Fu; P Heng", "journal": "", "ref_id": "b45", "title": "Learning from extrinsic and intrinsic supervisions for domain generalization", "year": "2020" }, { "authors": "X Wang; T E Huang; B Liu; F Yu; X Wang; J E Gonzalez; T Darrell", "journal": "", "ref_id": "b46", "title": "Robust object detection via instance-level temporal cycle confusion", "year": "2021" }, { "authors": "Z Wang; Y Luo; R Qiu; Z Huang; M Baktashmotlagh", "journal": "", "ref_id": "b47", "title": "Learning to diversify for single domain generalization", "year": "2021" }, { "authors": "A Wu; C Deng", "journal": "", "ref_id": "b48", "title": "Single-domain generalized object detection in urban scene via cyclic-disentangled self-distillation", "year": "2022" }, { "authors": "A Wu; C Deng", "journal": "", "ref_id": "b49", "title": "Single-domain generalized object detection in urban scene via cyclic-disentangled self-distillation", "year": "2022" }, { "authors": "A Wu; Y Han; L Zhu; Y Yang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)", "ref_id": "b50", "title": "Instance-invariant domain adaptive object detection via progressive disentanglement", "year": "2022" }, { "authors": "J Wu; J Chen; M He; Y Wang; B Li; B Ma; W Gan; W Wu; Y Wang; D Huang", "journal": "", "ref_id": "b51", "title": "Target-relevant knowledge preservation for multi-source domain adaptive object detection", "year": "2022" }, { "authors": "L Wu; H Ling; Y Shi; B Zhang", "journal": "ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM)", "ref_id": "b52", "title": "Instance correlation graph for unsupervised domain adaptation", 
"year": "2022" }, { "authors": "Y Wu; A Kirillov; F Massa; W.-Y Lo; R Girshick; Detectron", "journal": "", "ref_id": "b53", "title": "", "year": "2019" }, { "authors": "C Xu; X Zhao; X Jin; X Wei", "journal": "", "ref_id": "b54", "title": "Exploring categorical regularization for domain adaptive object detection", "year": "2020" }, { "authors": "M Xu; H Wang; B Ni; Q Tian; W Zhang", "journal": "", "ref_id": "b55", "title": "Cross-domain detection via graph-induced prototype alignment", "year": "2020" }, { "authors": "Y Xu; K Sheng; W Dong; B Wu; C Xu; B Hu", "journal": "ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM)", "ref_id": "b56", "title": "Towards corruption-agnostic robust domain adaptation", "year": "2022" }, { "authors": "Y Yang; H Wang; D Katabi", "journal": "", "ref_id": "b57", "title": "On multi-domain long-tailed recognition, imbalanced domain generalization and beyond", "year": "2022" }, { "authors": "X Yao; Y Bai; X Zhang; Y Zhang; Q Sun; R Chen; R Li; B Yu", "journal": "", "ref_id": "b58", "title": "PCL: proxy-based contrastive learning for domain generalization", "year": "2022" }, { "authors": "H Zhang; Y Zhang; W Liu; A Weller; B Schölkopf; E P Xing", "journal": "", "ref_id": "b59", "title": "Towards principled disentanglement for domain generalization", "year": "2022" }, { "authors": "J Zhang; L Qi; Y Shi; Y Gao; Mvdg", "journal": "", "ref_id": "b60", "title": "A unified multi-view framework for domain generalization", "year": "2022" }, { "authors": "X Zhang; L Zhou; R Xu; P Cui; Z Shen; H Liu", "journal": "", "ref_id": "b61", "title": "Towards unsupervised domain generalization", "year": "2022" }, { "authors": "Y Zhang; M Li; R Li; K Jia; L Zhang", "journal": "", "ref_id": "b62", "title": "Exact feature distribution matching for arbitrary style transfer and domain generalization", "year": "2022" }, { "authors": "Y Zhang; J Wu; Q Zhang; X Hu", "journal": "Data Intelligence (DI)", "ref_id": "b63", "title": "Multi-view feature learning for the over-penalty in adversarial domain adaptation", "year": "2023" }, { "authors": "K Zhou; Y Yang; T Hospedales; T Xiang", "journal": "", "ref_id": "b64", "title": "Deep domain-adversarial image generation for domain generalisation", "year": "2020" }, { "authors": "K Zhou; Y Yang; T M Hospedales; T Xiang", "journal": "", "ref_id": "b65", "title": "Deep domain-adversarial image generation for domain generalisation", "year": "2020" }, { "authors": "K Zhou; Y Yang; T M Hospedales; T Xiang", "journal": "", "ref_id": "b66", "title": "Learning to generate novel domains for domain generalization", "year": "2020" }, { "authors": "K Zhou; Y Yang; Y Qiao; T Xiang", "journal": "", "ref_id": "b67", "title": "Domain generalization with mixstyle", "year": "2021" } ]
[ { "formula_coordinates": [ 6, 197.61, 136, 187.97, 8.81 ], "formula_id": "formula_0", "formula_text": "= [0, 1, 2], [0, 2, 1], [1, 0, 2], [1, 2, 0], [2, 0, 1], or" }, { "formula_coordinates": [ 6, 188.78, 600.05, 251.99, 20.83 ], "formula_id": "formula_1", "formula_text": "AdaIN(𝑓 ) = σ 𝑓 -𝜇 𝜎 + μ,(2)" }, { "formula_coordinates": [ 6, 72.1, 635.6, 371.75, 9.53 ], "formula_id": "formula_2", "formula_text": "𝜇, 𝜎, μ, σ ∈ R 𝐶 (i.e., 𝜇 = [𝜇 1 , • • • , 𝜇 𝐶 ], 𝜎 = [𝜎 1 , • • • , 𝜎 𝐶 ], μ = [ μ1 , • • • , μ𝐶 ], and σ = [ σ1 , • • • , σ𝐶 ])" }, { "formula_coordinates": [ 7, 188.63, 295.71, 252.14, 27.29 ], "formula_id": "formula_3", "formula_text": "𝜇 𝑖 = 1 𝐻𝑊 𝐻 ∑︁ ℎ=1 𝑊 ∑︁ 𝑤=1 𝑓 [𝑖, ℎ, 𝑤],(3)" }, { "formula_coordinates": [ 7, 161.7, 334.63, 279.07, 27.29 ], "formula_id": "formula_4", "formula_text": "𝜎 𝑖 = 1 𝐻𝑊 𝐻 ∑︁ ℎ=1 𝑊 ∑︁ 𝑤=1 (𝑓 [𝑖, ℎ, 𝑤] -𝜇 𝑖 ) 2 + 𝜖,(4)" }, { "formula_coordinates": [ 7, 228.78, 597.31, 183.59, 10.55 ], "formula_id": "formula_5", "formula_text": "𝜇 𝑜1 = [𝜇 𝑜1 1 , • • • , 𝜇 𝑜1 𝐶 ] and 𝜎 𝑜1 = [𝜎 𝑜1 1 , • • • , 𝜎 𝑜1 𝐶 ]" }, { "formula_coordinates": [ 7, 196.92, 632.53, 243.85, 26.88 ], "formula_id": "formula_6", "formula_text": "𝜇 𝑜1 𝑖 = 1 𝐴 𝑜1 𝐴 𝑜1 ∑︁ 𝑎=1 𝑓 𝑜1 [𝑖, 𝑎],(5)" }, { "formula_coordinates": [ 8, 169.73, 91.27, 271.04, 26.64 ], "formula_id": "formula_7", "formula_text": "𝜎 𝑖 = 1 𝐴 𝑜1 𝐴 ∑︁ 𝑎=1 (𝑓 𝑜1 [𝑖, 𝑎] -𝜇 𝑜1 𝑖 ) 2 + 𝜖.(6)" }, { "formula_coordinates": [ 8, 162.45, 168.18, 278.32, 20.83 ], "formula_id": "formula_8", "formula_text": "f 𝑜1 = M back [𝑟 ] 𝜎 𝑓 𝑜1 -𝜇 𝑜1 𝜎 𝑜1 + M back [𝑟 ] 𝜇 ,(7)" }, { "formula_coordinates": [ 8, 52.2, 334.06, 186.45, 19.01 ], "formula_id": "formula_9", "formula_text": "if |M back | >= 𝑁 𝑚 or |M obj | >= 𝑁 𝑚 then 7:" } ]
2024-02-20
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3" ], "table_ref": [], "text": "The exploration of animal detection, tracking, and behavior analysis plays a crucial role in various fields like biology, ecology, farming, and entertainment. However, in the computer vision community, the focus has predominantly been on human modeling and behavior analysis.\nAcquiring 3D data for training models poses significant challenges. Models such as PIFu [1] and SMPL [2] heavily rely on extensive databases containing thousands of 3D scans, encompassing diverse human shapes and poses. Humans, being cooperative subjects, make this process more feasible. Unfortunately, capturing multiple wild animals for controlled scanning in a lab setting is impractical, and the logistics involved in taking scanning equipment into the wilderness are complex.\nTo tackle this challenge, we propose a two-stage model that combines Differentiable Rendering and Implicit representation. This innovative approach leverages synthetic animal 3D models to train the model and enhances its capabilities by generating 3D models from single-view images of real animals.\nIn Stage 1, our process utilizes a pixel-aligned implicit function to predict the continuous inside/outside probability field of a synthetic bird based on the provided image. Using advanced differentiable rendering techniques, we create a render for the 3D implicit representation of the synthetic bird generated by a pixel-aligned feature decoder. This rendered output is then transformed into 2D images, facilitating multi-view self-supervised learning.\nMoving to Stage 2, the model incorporates a pixel-aligned feature encoderdecoder, pre-trained on synthetic birds. To enhance the model's adaptability to real-world scenarios, we employ transfer learning by integrating real bird images along with their silhouettes. This two-stage strategy ensures a robust and versatile approach to animal 3D model generation, where synthetic data aids in the learning process and real-world images contribute to the model's practical utility and generalization capability.\nOur experiment primarily focuses on the 3D digitization of birds. For Stage 1, we collected 20 synthetic 3D bird models representing various bird types, such as owls, blue jays, toucans, parrots, ducks, and pigeons, to train our model. In Stage 2, we used 5964 previously unseen real bird images, along with their silhouettes, for additional model training.\nThe results of our study demonstrate that our differentiable rendering and implicit function-based approach outperforms state-of-the-art methods [3,4] in both quantitative and qualitative aspects of bird 3D digitization. Furthermore, we extended our method to other animals, including horses, cows, bears, and dogs, with qualitative results showcased in this paper.\nThe main contribution of this work is the combination of two-stage supervised and self-supervised training to address the challenge of obtaining animal cooperation for 3D scanning. In the first stage, we leverage synthetic animal models for supervised learning. This allows the model to learn from a diverse set of virtual animal instances. In the second stage, we use 2D multi-view consistency as a self-supervised training method. This further enhances the model's ability to reconstruct accurate and realistic 3D shape and texture from largely available single-view images of real animals." 
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "3D Shape Representation", "publication_ref": [ "b4", "b5", "b6", "b2", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b2", "b7", "b8", "b9", "b10", "b11", "b12" ], "table_ref": [], "text": "Researchers have explored various representations for 3D processing tasks, including point clouds [5], implicit surfaces [6,7], triangular meshes [3,[8][9][10][11][12][13], and voxel grids [14][15][16][17][18][19][20][21]. While both voxels and triangular meshes are suitable for deep learning architectures (e.g., VON [22,23], PointNet [24,25]), they face challenges such as memory inefficiency or limited differentiable rendering capabilities. Therefore, in this study, we adopt point clouds [3,[8][9][10][11][12][13] as the preferred 3D shape representation for reconstruction tasks." }, { "figure_ref": [], "heading": "Single-view 3D Reconstruction", "publication_ref": [ "b10", "b11", "b12", "b27", "b28", "b29", "b7", "b8", "b30", "b2", "b32", "b33", "b34" ], "table_ref": [], "text": "The objective of single-view 3D reconstruction [1, 5, 14-20, 26, 27] is to generate a three-dimensional shape from a single input image. This challenging task has been approached by various methods with different levels of supervision. Some approaches [11][12][13]28] rely on paired image and ground truth 3D mesh data, which requires extensive manual annotation efforts [29] or is limited to synthetic data [30]. More recent approaches [8-10, 31, 32] mitigate the need for 3D supervision by leveraging differentiable renderers [8,9,31] and adopting the \"analysis-by-synthesis\" approach, either using multiple views or known ground truth camera poses. In order to alleviate supervision constraints, Kanazawa et al. [3] explored 3D reconstruction using a collection of images depicting different instances. However, their method still relies on annotated 2D keypoints to accurately infer camera pose. This work is also significant as it introduces the concept of a learnable category-level 3D template shape, although it requires initialization from a keypoint-dependent 3D convex hull. Similar problem settings have been investigated in other methods [33][34][35], but they are limited to rigid or structured objects like cars or faces. In contrast, our approach encompasses both rigid and non-rigid objects (e.g., birds, horses, penguins, motorbikes, and cars). We propose a method that jointly estimates the 3D shape and texture from a single-view image, utilizing a synthetic animals' 3D template and a collection of real animal images with silhouettes as supervision. Essentially, we eliminate the need for real animals' 3D template priors, annotated key points, or multi-view real animal images." }, { "figure_ref": [], "heading": "Approach", "publication_ref": [ "b0", "b0" ], "table_ref": [], "text": "In order to reconstruct the 3D mesh of an object instance from an image, a network needs to be able to predict the shape, texture, and camera pose simultaneously. To accomplish this, we use the existing network introduced in [1] (PIFu) as the foundation for our reconstruction network. Our goal is to accurately reconstruct the animal's 3D geometry and texture from single or multi-view images, while preserving the details shown in Figure 1.\nFig. 
1: Overview of our pipeline for digitalizing models using differentiable rendering and implicit functions. In Stage 1, we use an implicit function to predict the continuous probability field [1] for the synthetic bird's inside/outside regions based on the given bird image. Then, using differentiable rendering, we generate a render of the 3D implicit representation of the synthetic bird produced by a pixel-aligned feature decoder, rendering it into 2D images for multi-view selfsupervised learning." }, { "figure_ref": [], "heading": "2D multi-view consistency", "publication_ref": [], "table_ref": [], "text": "We aim for the 3D implicit representation to exhibit uniformity from different viewing angles. To achieve this, we employ a 2D multi-view consistency strategy, utilizing differentiable rendering techniques as depicted in Figure 1. This approach ensures that the pixel-level implicit function gains additional insights from synthetic animal 3D models. By applying a render function R(P ), where P represents the predicted point cloud 3D implicit representation, the model generates three different views I v by adjusting camera parameters. This enables 2D self-supervised learning through MSE loss specifically on the rendered 2D views.\nL M = 1 n n i=1 I vi -Îgi 2(1)\nwhere n represents the number of sampled points. During training, we choose to use a point cloud as the 3D implicit representation instead of marching cubes. The reason for this choice is that the marching cubes algorithm is not differentiable, which makes it difficult to optimize with gradients during inference.\nThe traditional Marching Cubes algorithm is a non-differentiable geometric method used for surface reconstruction and visualization of isosurfaces from 3D scalar fields. It involves thresholding the scalar field and constructing polygonal surfaces based on the intersections of a binary mask with a grid of cubes.\nTo enable differentiable operations and facilitate gradient-based optimization, differentiable versions of Marching Cubes or similar algorithms are used in applications such as differentiable rendering and neural network training.\nIn our approach, we use differentiable rendering methods to create a render of the 3D implicit representation of the synthetic bird generated by a pixelaligned feature decoder. We render this point cloud 3D representation into three fixed camera views (0 degrees, 90 degrees, and 180 degrees) for self-supervised learning.\nFig. 2: In Stage 2, we utilize a pre-trained pixel-aligned feature encoder-decoder that was trained on synthetic birds. We incorporate real bird images and their silhouettes through transfer learning." }, { "figure_ref": [], "heading": "2D self-supervised learning on real animal images", "publication_ref": [], "table_ref": [], "text": "In the second stage of our approach, we fine-tune the pre-trained implicit function on synthetic animal images by incorporating real animal images using a self-supervised learning approach, as depicted in Figure 2. Initially, the implicit function generates a 3D representation, which is then rendered into a 2D image using the render function R(P ). We apply a 2D loss function to compare the rendered 2D image with the ground truth image. During inference, the trained implicit function can be directly applied to a given 2D animal image within the trained classes from both stages, specifically for birds in this case." 
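The multi-view consistency loss in Eq. (1) can be sketched with PyTorch3D's differentiable point-cloud rasterizer (the library named in the implementation details that follow) roughly as below; the camera distance, image size, and point radius are illustrative assumptions of ours, not the paper's actual settings.

```python
import torch
from pytorch3d.structures import Pointclouds
from pytorch3d.renderer import (
    AlphaCompositor, FoVPerspectiveCameras, PointsRasterizationSettings,
    PointsRasterizer, PointsRenderer, look_at_view_transform,
)

def multiview_consistency_loss(points, colors, target_views, device="cuda"):
    """MSE between renderings of a predicted colored point cloud at three
    fixed azimuths (0/90/180 degrees) and the corresponding target images.

    points:       (N, 3) predicted point positions, already on `device`
    colors:       (N, 3) predicted per-point RGB features
    target_views: (3, 256, 256, 3) target renderings at the same three views
    """
    cloud = Pointclouds(points=[points], features=[colors]).extend(3)
    # Camera distance and the three azimuths; elevation kept at 0 for simplicity.
    R, T = look_at_view_transform(dist=2.0, elev=0.0,
                                  azim=torch.tensor([0.0, 90.0, 180.0]))
    cameras = FoVPerspectiveCameras(R=R, T=T, device=device)
    settings = PointsRasterizationSettings(image_size=256, radius=0.01,
                                           points_per_pixel=8)
    renderer = PointsRenderer(
        rasterizer=PointsRasterizer(cameras=cameras, raster_settings=settings),
        compositor=AlphaCompositor(),
    )
    rendered = renderer(cloud)[..., :3]   # (3, 256, 256, 3), differentiable
    return torch.mean((rendered - target_views) ** 2)
```

Because the rasterization and compositing steps are differentiable, this loss back-propagates from the 2D views into the point positions and colors, which is the reason the point-cloud representation is preferred over non-differentiable marching cubes during training.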
}, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b35", "b36" ], "table_ref": [], "text": "We conducted experiments to evaluate the effectiveness of our proposed methodology on diverse datasets. These datasets included the CUB-200-2011 dataset [36], which consists of bird images, and the ImageNet dataset [37], which includes images of horses, zebras, and cows, encompassing a wide range of animal species." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b37" ], "table_ref": [], "text": "In our implementation, we utilized the PyTorch3D library [38], which provides differentiable rendering capabilities for rasterizing batches of point clouds.\nFor Stage 1, we trained the 3D implicit function network and the 2D render network for 100 epochs with a learning rate of 0.001. In Stage 2, we further fine-tuned the refined implicit function network for 50 epochs with a learning rate of 0.0005. " }, { "figure_ref": [ "fig_0", "fig_1", "fig_2", "fig_0", "fig_2" ], "heading": "Qualitative Results", "publication_ref": [ "b35", "b38" ], "table_ref": [], "text": "To qualitatively evaluate our model, we present the 3D representations of birds and horses in Figure 3, Figure 4, and Figure 5. These figures highlight the distinctive shape features of each category. Remarkably, our model excels in horse prediction, outperforming CMR and UMR by accurately capturing intricate details such as the legs and tail. The visualizations vividly demonstrate the remarkable accuracy and level of detail achieved by our model. Figure 3 showcases the results of our digitization process using real bird images from the CUB-200-2011 dataset [36]. Our DRIFu model demonstrates its adaptability across a diverse range of bird species, generating high-resolution local details and inferring plausible 3D surfaces, even in previously unseen regions. Moreover, it successfully infers complete textures from a single input image, providing a comprehensive view of the 3D models from various angles.\nFor evaluating texture reconstruction, we employed precision and recall metrics, comparing the rendered 2D images with ground truth images. Our approach outperforms other models in terms of texture assessment as well. [39] can be seen in Figure 5." }, { "figure_ref": [], "heading": "Quantitative Results", "publication_ref": [ "b35", "b2", "b35", "b2", "b3", "b2", "b3", "b38", "b38", "b2", "b3" ], "table_ref": [], "text": "We evaluate our reconstruction accuracy quantitatively using three metrics for 2D shape and texture. However, the lack of ground truth for 3D meshes or camera poses in these datasets makes a quantitative evaluation in 3D challenging.\nWe first assess the shape reconstruction for birds. Since the CUB-200-2011 dataset [36] does not provide ground truth 3D shapes, we follow the approach of [3] and calculate the mask reprojection accuracy using the intersection over union (IoU) between the rendered and ground truth silhouettes. Table . 1 demonstrates that our model outperforms other state-of-the-art single-view bird 3D reconstruction models, indicating its ability to predict 3D mesh reconstructions that align well with 2D observations. Similar to the shape evaluation, texture assessment is performed by comparing the rendered and ground truth 2D images using precision and recall measures. Our approach also surpasses the other models in terms of texture accuracy.\nTable 1: Quantitative evaluation of mask IoU and texture precision and recall on the CUB-200-2011 dataset [36]. 
The comparisons are made against the baseline supervised models [3,4]. We also conducted additional evaluations comparing our model with the baseline supervised models [3,4] using real horse images from the Weizmann Horses dataset [39]. The results of this evaluation are presented in Table . 2. The use of implicit representation enables us to achieve higher fidelity in the reconstruction.\nTable 2: Quantitative evaluation of mask IoU and texture precision and recall on the Weizmann Horses dataset [39]. The comparisons are made against the baseline supervised models [3,4]. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "The goal of this research is to develop a model that can reconstruct the 3D shape and texture of animals using single-view images of real animals. One key aspect of our approach is the use of synthetic animal 3D models for supervision, allowing the model to learn from a diverse set of virtual animal instances. A crucial part of our methodology is the introduction of a self-supervised differentiable rendering framework, which ensures consistency between the reconstructed 3D representations and the corresponding images. This approach reduces ambiguities in predicting 3D shape and texture from 2D observations. In addition, we incorporate a transfer learning-based self-supervised framework, which enables our model to leverage the learned 3D representation when dealing with unseen real animal images. This transfer learning mechanism enhances the adaptability and robustness of our model in real-world scenarios.\nThe effectiveness of our proposed method is demonstrated through extensive experiments. Notably, our approach outperforms state-of-the-art supervised category-specific reconstruction techniques. These results highlight the potential and versatility of our model in advancing the field of 3D shape and texture reconstruction from single-view images, particularly for real animals." } ]
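The mask-reprojection IoU and the texture precision/recall reported in Tables 1 and 2 correspond to standard binary-mask metrics; a small sketch is given below. The 0.5 binarization threshold is an assumption, and this is not the authors' evaluation code.

```python
import numpy as np

def binarize(x, thr=0.5):
    return (np.asarray(x) >= thr).astype(np.uint8)

def mask_iou(pred, gt, thr=0.5, eps=1e-7):
    """Intersection over union between a rendered silhouette and the ground truth."""
    p, g = binarize(pred, thr), binarize(gt, thr)
    inter = np.logical_and(p, g).sum()
    union = np.logical_or(p, g).sum()
    return (inter + eps) / (union + eps)

def precision_recall(pred, gt, thr=0.5, eps=1e-7):
    """Pixel-wise precision and recall of the rendered image against the ground truth."""
    p, g = binarize(pred, thr), binarize(gt, thr)
    tp = np.logical_and(p, g).sum()
    return tp / (p.sum() + eps), tp / (g.sum() + eps)

# Example on random masks (placeholders for rendered vs. annotated silhouettes)
pred = np.random.rand(256, 256)
gt = np.random.rand(256, 256) > 0.5
print(mask_iou(pred, gt), precision_recall(pred, gt))
```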
effectively captures subtle variations in body shape within a low-dimensional space through extensive training with human 3D scans, its application to live animals presents formidable challenges due to the difficulty of obtaining animal cooperation for 3D scanning. To address this challenge, we propose a combination of two-stage supervised and self-supervised training. In the first stage, we leverage synthetic animal models for supervised learning, which allows the model to learn from a diverse set of virtual animal instances. In the second stage, we use 2D multi-view consistency as a self-supervised training method, which further enhances the model's ability to reconstruct accurate and realistic 3D shape and texture from widely available single-view images of real animals. The results of our study demonstrate that our approach outperforms state-of-the-art methods in both quantitative and qualitative aspects of bird 3D digitization. The source code is available at https://github.com/kuangzijian/drifu-for-animals.
Two-stage Synthetic Supervising and Multi-view Consistency Self-supervising based Animal 3D Reconstruction by Single Image
[ { "figure_caption": "Fig. 3 :3Fig. 3: Qualitative results showcasing single-view 3D and textured reconstructions of real bird images from the CUB-200-2011 dataset [36].", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Qualitative single-view 3D reconstruction results on real bird images from the CUB-200-2011 dataset [36] are shown in Figure 4.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :5Fig.5: Qualitative single-view 3D reconstruction results on real animal images from the Weizmann horses dataset[39] can be seen in Figure5.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" } ]
Zijian Kuang; Ying Lihang; Jin Shi; Li Cheng
[ { "authors": "S Saito; Z Huang; R Natsume; S Morishima; A Kanazawa; H Li", "journal": "", "ref_id": "b0", "title": "Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization", "year": "2019" }, { "authors": "M Loper; N Mahmood; J Romero; G Pons-Moll; M J Black", "journal": "", "ref_id": "b1", "title": "SMPL: A skinned multi-person linear model", "year": "2015-10" }, { "authors": "A Kanazawa; S Tulsiani; A A Efros; J Malik", "journal": "", "ref_id": "b2", "title": "Learning category-specific mesh reconstruction from image collections", "year": "2018" }, { "authors": "Xueting Li; S Liu; K Kim; S De Mello; V Jampani; M.-H Yang; J Kautz", "journal": "", "ref_id": "b3", "title": "Self-supervised single-view 3d reconstruction via semantic consistency", "year": "2020" }, { "authors": "H Fan; H Su; L Guibas", "journal": "", "ref_id": "b4", "title": "A point set generation network for 3d object reconstruction from a single image", "year": "2016" }, { "authors": "S Liu; S Saito; W Chen; H Li", "journal": "", "ref_id": "b5", "title": "Learning to infer implicit surfaces without 3d supervision", "year": "2019" }, { "authors": "L Mescheder; M Oechsle; M Niemeyer; S Nowozin; A Geiger", "journal": "", "ref_id": "b6", "title": "Occupancy networks: Learning 3d reconstruction in function space", "year": "2019" }, { "authors": "H Kato; Y Ushiku; T Harada", "journal": "", "ref_id": "b7", "title": "Neural 3d mesh renderer", "year": "2017" }, { "authors": "S Liu; T Li; W Chen; H Li", "journal": "", "ref_id": "b8", "title": "Soft rasterizer: A differentiable renderer for image-based 3d reasoning", "year": "2019" }, { "authors": "H Kato; T Harada", "journal": "", "ref_id": "b9", "title": "Learning view priors for single-view 3d reconstruction", "year": "2019" }, { "authors": "N Wang; Y Zhang; Z Li; Y Fu; W Liu; Y.-G Jiang", "journal": "", "ref_id": "b10", "title": "Pixel2mesh: Generating 3d mesh models from single rgb images", "year": "2018" }, { "authors": "J Pan; X Han; W Chen; J Tang; K Jia", "journal": "", "ref_id": "b11", "title": "Deep mesh reconstruction from single rgb images via topology modification networks", "year": "2019" }, { "authors": "C Wen; Y Zhang; Z Li; Y Fu", "journal": "", "ref_id": "b12", "title": "Pixel2mesh++: Multi-view 3d mesh generation via deformation", "year": "2019" }, { "authors": "C B Choy; D Xu; J Gwak; K Chen; S Savarese", "journal": "", "ref_id": "b13", "title": "3d-r2n2: A unified approach for single and multi-view 3d object reconstruction", "year": "2016" }, { "authors": "R Girdhar; D F Fouhey; M Rodriguez; A Gupta", "journal": "", "ref_id": "b14", "title": "Learning a predictable and generative vector representation for objects", "year": "2016" }, { "authors": "J Gwak; C B Choy; A Garg; M Chandraker; S Savarese", "journal": "", "ref_id": "b15", "title": "Weakly supervised 3d reconstruction with adversarial constraint", "year": "2017" }, { "authors": "S Tulsiani; T Zhou; A A Efros; J Malik", "journal": "", "ref_id": "b16", "title": "Multi-view supervision for singleview reconstruction via differentiable ray consistency", "year": "2017" }, { "authors": "O Wiles; A Zisserman", "journal": "", "ref_id": "b17", "title": "Silnet : Single-and multi-view reconstruction by learning from silhouettes", "year": "2017" }, { "authors": "X Yan; J Yang; E Yumer; Y Guo; H Lee", "journal": "", "ref_id": "b18", "title": "Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision", "year": "2017" }, { "authors": "R Zhu; H K 
Galoogahi; C Wang; S Lucey", "journal": "", "ref_id": "b19", "title": "Rethinking reprojection: Closing the loop for pose-aware shapereconstruction from a single image", "year": "2017" }, { "authors": "C Häne; S Tulsiani; J Malik", "journal": "", "ref_id": "b20", "title": "Hierarchical surface prediction for 3d object reconstruction", "year": "2017" }, { "authors": "J Wu; C Zhang; T Xue; W T Freeman; J B Tenenbaum", "journal": "", "ref_id": "b21", "title": "Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling", "year": "2017" }, { "authors": "J.-Y Zhu; Z Zhang; C Zhang; J Wu; A Torralba; J B Tenenbaum; W T Freeman", "journal": "", "ref_id": "b22", "title": "Visual object networks: Image generation with disentangled 3d representation", "year": "2018" }, { "authors": "C R Qi; H Su; K Mo; L J Guibas", "journal": "", "ref_id": "b23", "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "year": "2017" }, { "authors": "C R Qi; L Yi; H Su; L J Guibas", "journal": "", "ref_id": "b24", "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "year": "2017" }, { "authors": "Z Kuang; L Ying; X Tie; S Jin", "journal": "Springer", "ref_id": "b25", "title": "Normalizing flow based defect detection with motion detection", "year": "2022" }, { "authors": "P Henderson; V Ferrari", "journal": "", "ref_id": "b26", "title": "Learning to generate and reconstruct 3d meshes with only 2d supervision", "year": "2018" }, { "authors": "C Sun; L Bin Song; L Ying", "journal": "Springer", "ref_id": "b27", "title": "Product re-identification system in fully automated defect detection", "year": "2022" }, { "authors": "Y Xiang; W Kim; W Chen; J Ji; C Choy; H Su; R Mottaghi; L Guibas; S Savarese", "journal": "", "ref_id": "b28", "title": "Objectnet3d: A large scale database for 3d object recognition", "year": "2016" }, { "authors": "A X Chang; T Funkhouser; L Guibas; P Hanrahan; Q Huang; Z Li; S Savarese; M Savva; S Song; H Su; J Xiao; L Yi; F Yu", "journal": "", "ref_id": "b29", "title": "Shapenet: An information-rich 3d model repository", "year": "2015" }, { "authors": "W Chen; J Gao; H Ling; E J Smith; J Lehtinen; A Jacobson; S Fidler", "journal": "", "ref_id": "b30", "title": "Learning to predict 3d objects with an interpolation-based differentiable renderer", "year": "2019" }, { "authors": "Z Kuang; X Tie; X Wu; L Ying", "journal": "Springer", "ref_id": "b31", "title": "Funet: Flow based conference video background subtraction", "year": "2022" }, { "authors": "A Szabó; P Favaro", "journal": "", "ref_id": "b32", "title": "Unsupervised 3d shape learning from image collections in the wild", "year": "2018" }, { "authors": "S Wu; C Rupprecht; A Vedaldi", "journal": "", "ref_id": "b33", "title": "Photo-geometric autoencoding to learn 3d objects from unlabelled images", "year": "2019" }, { "authors": "P Henderson; V Ferrari", "journal": "", "ref_id": "b34", "title": "Learning single-image 3d reconstruction by generative modelling of shape, pose and shading", "year": "2019" }, { "authors": "S B Wah; Catherine ", "journal": "", "ref_id": "b35", "title": "The caltech-ucsd birds-200-2011 dataset", "year": "2011" }, { "authors": "J Deng; W Dong; R Socher; L.-J Li; K Li; L Fei-Fei", "journal": "", "ref_id": "b36", "title": "Imagenet: A largescale hierarchical image database", "year": "2009" }, { "authors": "N Ravi; J Reizenstein; D Novotny; T Gordon; W.-Y Lo; J Johnson; G Gkioxari", "journal": "", "ref_id": "b37", "title": 
"Accelerating 3d deep learning with pytorch3d", "year": "2020" }, { "authors": "E Borenstein", "journal": "", "ref_id": "b38", "title": "Weizmann horse database", "year": "2011" } ]
[ { "formula_coordinates": [ 4, 253.98, 583.63, 226.61, 30.32 ], "formula_id": "formula_0", "formula_text": "L_M = \frac{1}{n} \sum_{i=1}^{n} \lVert I_{v_i} - \hat{I}_{g_i} \rVert^2 \quad (1)" } ]
2023-11-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b7", "b12", "b3", "b6", "b8", "b20", "b11", "b13", "b22", "b5", "b15", "b0", "b18", "b16" ], "table_ref": [], "text": "Recent advancements in remote sensing technologies [8,13] have revolutionized the way we collect and analyze data relevant to our understanding of the Earth's surface. The data collected from these methods serve central to a multitude of applications such as weather forecasting [4], environmental monitoring [7], urban planning [9,21], and defense intelligence [12]. Despite the wealth of data made available through remote sensing, effective utilization of this information poses a significant challenge. One bottleneck lies in the semantic segmentation of the remote sensing imagery [14,23]. Traditional methods rely heavily on manual guidance, which is both time-inefficient and susceptible to human error.\nThe Segment Anything Model (SAM) [6], an example of large vision models, demonstrates considerable versatility and zero-shot learning [16] abilities, powered by its rich training data. However, its ability to perform one-shot or few-shot semantic segmentation tasks for remote sensing imagery [1,19] remains largely underexplored. The SAM model's design is category-agnostic, which inevitably forces reliance on manual guidance. Recognizing the potential and the need to optimize this aspect, we ventured into the integration of few-shot learning in this regard.\nIn this research, we introduce a novel few-shot semantic segmentation algorithm designed to automate the semantic segmentation process termed Selfguided Large Vision Model (Few-shot SLVM). Our approach enables the use of the SAM model with the intent to achieve efficient generation of semantically rich segmentation outcomes. The cornerstone of this method is an innovative automatic prompt learning technique that leverages prior guided masks to produce coarse pixel-wise prompts for SAM, bypassing the need for intensive manual guidance.\nThe framework we propose for few-shot semantic segmentation provides a promising avenue for the efficient parsing of remote sensing imagery. The system's capacity to produce high-quality segmented images with limited supervision is bound to drive advancements in various applications dependent on remote sensing. We focused our experiments on the DLRSD [17] datasets with the goal of validating the superiority of this approach over other existing methodologies for few-shot semantic segmentation, particularly in the context of remote sensing.\nTo summarize, our major contributions are:\n1) We introduce the Self-guided Large Vision Model (Few-shot SLVM), a novel few-shot semantic segmentation framework, that significantly automates the segmentation process for remote sensing imagery without heavy reliance on manual guidance. 2) We propose an innovative 'automatic prompt learning' technique using the Segment Anything Model (SAM) for rendering coarse pixel-wise prompts, bringing a novel solution to semantic segmentation of remote sensing imagery. 3) We carry out extensive benchmarking on the DLRSD datasets, showcasing the superiority of our methodology against existing few-shot segmentation techniques within the domain of remote sensing." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "In this section, we initially introduce recent work in the field of semantic segmentation, laying the foundation for essential context with regards to our proposed method. 
Subsequently, we steer our focus towards the few-shot semantic segmentation techniques that underpin our proposed method while concluding this section with a discussion on the few-shot learning approaches that are explicitly designed for semantic segmentation of aerial imagery." }, { "figure_ref": [], "heading": "Semantic Segmentation for Visual Scenes", "publication_ref": [ "b5", "b9", "b2", "b23" ], "table_ref": [], "text": "Semantic segmentation forms a critical research area within computer vision, carrying substantial impact on interpreting visual scenes. Several works executed on traditional datasets [6,10] have relied heavily on conventional Convolutional Neural Networks (CNNs), yielding valuable but large-dataset-dependent methods [3,24]. They provide valuable insights but often struggle with time-efficiency due to their heavy reliance on extensive annotated datasets." }, { "figure_ref": [], "heading": "Few-shot Learning and Large Vision Models", "publication_ref": [ "b17", "b19", "b1", "b14", "b5" ], "table_ref": [], "text": "Few-shot learning, with its ability to generalize from limited data, has recently emerged as an effective approach for semantic segmentation [18,20]. However, the scope of these works does not extend towards the incorporation of large vision models. In recent years, Large Vision Models, like GPT-3 [2] and CLIP [15], have sparked interest due to their impressive performance in numerous visual and text-based tasks, including SAM [6] due to its specialized segmentation capabilities." }, { "figure_ref": [], "heading": "Few-Shot Learning in Semantic Segmentation of Remote Sensing Imagery", "publication_ref": [ "b9", "b18", "b4", "b21" ], "table_ref": [], "text": "Efforts specifically centred around applying few-shot learning for the semantic segmentation of remote sensing imagery remain sparse [10,19]. The studies by [5,22] have revealed promising gains in using SAM for semantic segmentation tasks. Still, their focal point is largely limited to medical imaging, despite the high potential applicability and relevance to remote sensing imagery." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "Our methodology outlines the systematic integration of the Segment Anything Model (SAM), Prior Guided Metric Learning, and an innovative few-shot learning setup to effectively automate semantic segmentation within the setting of remote sensing imagery. To provide more insight, we denote the training dataset as D = {I, T }, support set as D s = {I s , T s }, and query set as D q = {I q , T q }, where I = {i 1 , ..., i n } represents images, and T = {t 1 , ..., t n } corresponds to their segmentation ground truth. The primary objective is to design a plugand-play self-prompting module, enabling SAM to obtain the location and size information of the segmentation target, only necessitating a few labeled data, for instance k images. The pre-trained high-level features with a support mask are transformed into prior mask utilizing cosine similarity measures. We take the query features and generated prior mask as input to produce coarse pixel-wise prompts for SAM. During the training process, the model is trained using the image embeddings from the SAM encoder and the resized ground truth label, while the cumbersome encoder, prompt encoder, and decoder parts of the SAM structure are kept frozen." 
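As an illustration of the prior-mask step sketched in the methodology overview above (cosine similarity between the frozen encoder's query features and the mask-weighted support features), the snippet below gives one plausible realization. The feature shapes, the max-over-support-pixels reduction, and the min-max normalization are assumptions of this sketch; the paper only states that cosine similarity is used.

```python
import torch
import torch.nn.functional as F

def prior_mask(query_feat, support_feat, support_mask, eps=1e-7):
    """Coarse prior for the query image from one labeled support example.

    query_feat / support_feat: (B, C, H, W) high-level features from the frozen SAM encoder
    support_mask:              (B, 1, Hm, Wm) binary ground-truth mask of the support image
    """
    B, C, H, W = query_feat.shape
    mask = F.interpolate(support_mask.float(), size=(H, W), mode="nearest")
    fg = support_feat * mask                                  # Hadamard product E_H(I_S) ⊙ M
    q = F.normalize(query_feat.flatten(2), dim=1, eps=eps)    # (B, C, HW)
    s = F.normalize(fg.flatten(2), dim=1, eps=eps)            # (B, C, HW)
    sim = torch.einsum("bcq,bcs->bqs", q, s)                  # pixel-wise cosine similarities
    prior = sim.max(dim=2).values.view(B, 1, H, W)            # best-matching support pixel
    lo = prior.amin(dim=(2, 3), keepdim=True)
    hi = prior.amax(dim=(2, 3), keepdim=True)
    return (prior - lo) / (hi - lo + eps)                     # rescaled to [0, 1]

# e.g. with SAM-sized embeddings:
# prior_mask(torch.randn(2, 256, 64, 64), torch.randn(2, 256, 64, 64),
#            torch.randint(0, 2, (2, 1, 256, 256)))
```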
}, { "figure_ref": [], "heading": "Prior Guided Metric Learning", "publication_ref": [], "table_ref": [], "text": "Assuming an input image X and an output segmentation mask Y, with the encoder function E(·) and decoder function D(·) of the Segment Anything Model (SAM), we can define the process of generating the mask with SAM as follows:\nY = D(E(X)) \quad (1)\nwhere E represents a pre-trained image embedding encoder and D is a learned mask decoder. The Prior Guided Metric Learning module is introduced to incorporate prior information into the prompt. Specifically, after passing through the powerful encoder of the large vision model SAM, we first perform the Hadamard product between the high-level support features E_H(I_S) and the mask M. Subsequently, we use cosine similarity to calculate the pixel-wise association between the high-level query features E_H(I_Q) and the mask-weighted support features, defined as:\nP = \mathrm{cosine}\big(E_H(I_Q),\; E_H(I_S) \odot M\big) \quad (2)\nBy concatenating the intermediate query features E_M(I_Q) with the pixel-level prior-guided information P, new query features are generated that effectively integrate the support information with the prior information, resulting in enhanced segmentation results." }, { "figure_ref": [], "heading": "Automatic Prompt Learning", "publication_ref": [], "table_ref": [], "text": "In this step, we design a novel automatic prompt learning method that generates coarse pixel-wise prompts for SAM from the prior mask Y_P to guide the segmentation prediction. The prompt indicators W are derived from the prior mask Y_P, and the output mask Y is reformulated as:\nY = D\big(W \odot E(X),\; (1 - W) \odot E(Y_P)\big) \quad (3)\nDuring the training process, both the cumbersome encoder and decoder of SAM are kept frozen, guiding the model to focus on the area of interest through the continuous optimization of self-guided prompt embeddings." }, { "figure_ref": [], "heading": "Few-Shot Learning Adaptation", "publication_ref": [], "table_ref": [], "text": "Our proposed Few-shot Self-guided Large Vision Model (SLVM) functions by learning from a limited set of support examples and extrapolating this learning to the query set. For this, we employ a cosine similarity loss function, defined as:\nL = \frac{1}{N} \sum_{i=1}^{N} \left( 1 - \frac{Y_i \cdot T_i}{\lVert Y_i \rVert \, \lVert T_i \rVert} \right) \quad (4)\nLastly, to enhance performance, we introduce a fine-tuning strategy. The training objective comprises both the self-guidance loss L_s and the fine-tuning loss L_f, represented as:\nL_{total} = \alpha L_s + \beta L_f \quad (5)\nThe strategy operates in two phases: it initially trains with the self-guidance loss only and then fine-tunes using the total loss. Through this detailed walkthrough of our method, we lay out the blueprint of our Few-shot SLVM model's capability to combine the power of SAM, few-shot learning, and prior metric learning for semantic segmentation in remote sensing images."
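The two training losses above translate directly into code; a minimal sketch is shown below. The α and β weights default to 1.0 here as placeholders, since the section does not report their values.

```python
import torch

def cosine_similarity_loss(pred, target, eps=1e-7):
    # Eq. (4): average over the N samples of (1 - cosine similarity) between
    # the predicted mask Y_i and the ground truth T_i, both flattened per sample.
    p = pred.flatten(1).float()
    t = target.flatten(1).float()
    cos = (p * t).sum(dim=1) / (p.norm(dim=1) * t.norm(dim=1) + eps)
    return (1.0 - cos).mean()

def total_loss(loss_self, loss_fine, alpha=1.0, beta=1.0):
    # Eq. (5): weighted sum of the self-guidance and fine-tuning losses.
    return alpha * loss_self + beta * loss_fine
```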
The dataset poses several challenges encountered in real-world scenarios, such as occlusion, shadows, and variations in terrain scales, making it a valuable resource for evaluating algorithms robustness. Similar to the methodology of Wang et al. [19], the DLRSD dataset is partitioned into four separate folds. The first three folds consist of four categories each, while the fourth fold includes five categories, namely sea, ship, tank, tree, and water. This partitioning allows for a more comprehensive evaluation and analysis of the proposed method's performance on different object classes and scenarios." }, { "figure_ref": [], "heading": "Implement Details", "publication_ref": [], "table_ref": [], "text": "For the following experiments, we employ the SAM ViT-Huge model as our backbone. During the training phase, we utilize the PyTorch framework and trained end-to-end with the AdamW optimizer. We use a mini-batch size of 8 and set the initial learning rate to 0.00025. To decay the learning rate, we employ a Cosine Annealing scheduler and the momentum is set to 0.9. For data augmentation, we randomly perform flipping (vertically or horizontally) and rotation operations on the input images, with a resulting size of 256x256 pixels. All experiments are conducted on 2 NVIDIA GeForce RTX 3090 Ti GPUs, and the training process lasts for 1000 epochs. By utilizing appropriate pretrained backbones and carefully setting hyperparameters, we achieve optimal performance in terms of accuracy and efficiency in our experiments." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "To provide a comprehensive evaluation, we compared our proposed Few-shot SLVM with three other state-of-the-art few-shot segmentation methods evaluated on the DLRSD dataset, taking into account the 1-shot and 5-shot settings, and utilizing mean intersection over union (mIoU) as the evaluation metric. Comparative Analysis: As Table 1 depicts, Few-shot SLVM surpasses competing methodologies, underscoring its robustness and accuracy. The mean IoU at the 1shot and 5-shot settings improved by 2.04% and 4.14% over the closest competing models, which demonstrates the model's superior segmentation quality." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "We performed an ablation study to understand the contribution of each component in our model. We focused on three main modules: (1) Automatic Prompt Learning (APL), (2) Prior Guided Metric Learning (PGML), and (3) Few-Shot Learning Adaptation (FSLA). Results from the ablation study: Ablation Study Insights: The stark performance drop in the absence of 'Automatic Prompt Learning' (Table 2) confirms its centrality in generating precise The value of the APL component is evident from the substantial uptick in the overall accuracy. Through the automation of prompt generation, it diminishes reliance on manual interventions. This hastens the segmentation workflow and either maintains or heightens the quality of the results. The difference in the mean IoU between the model with no components and the one with APL indicates a marked improvement of 6.85%, underscoring the importance of APL. 2. Prior Guided Metric Learning (PGML): Delving into the performance with and without the PGML, the significance of this module comes to light. When we compare the model with only APL to the one equipped with both APL and PGML, there's a modest increase in Mean IoU from 40.59% to 42.30%. 
This suggests that PGML refines the feature representations and contributes to better distance metrics. By incorporating prior knowledge, PGML offers a more informed and directed approach to metric learning, thus making the model more resilient and effective in differentiating between classes. 3. Few-Shot Learning Adaptation: The FSLA component bolsters the model's ability to learn from limited data. This trait is paramount, especially in remote sensing scenarios where access to abundant labeled data might be constrained. Its role is apparent in the enhancement of the model's generalization capabilities, especially on classes that are either unseen or minimally represented. The final row of the table demonstrates that integrating FSLA boosts the mean IoU to 44.74%, the best among the configurations." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In conclusion, this research emphasizes the advancements brought forth by the Few-Shot Self-guided Large Vision Model (Few-shot SLVM) in few-shot semantic segmentation for remote sensing imagery. The Few-shot SLVM integrates three pivotal modules: Automatic Prompt Learning (APL), Prior Guided Metric Learning (PGML), and Few-Shot Learning Adaptation (FSLA). APL streamlines the segmentation process, reducing manual input and enhancing accuracy. PGML optimizes feature representation, refining class differentiation and subsequently boosting the mean IoU. FSLA excels in limited data scenarios, improving model generalization for rare or unseen classes. Collectively, these components elevate the Few-shot SLVM to deliver state-of-the-art results in few-shot remote sensing segmentation. Future endeavors will focus on refining these modules and extending their applicability across broader datasets for robust model validation." } ]
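For completeness, the following is a rough sketch of the optimization setup reported in the Implement Details section (AdamW, initial learning rate 0.00025, cosine annealing, mini-batch size 8, 1000 epochs). The prompt-head architecture, the synthetic tensors standing in for SAM image embeddings and resized labels, and the BCE loss are placeholders only; in the actual method the self-guidance and fine-tuning losses of Eqs. (4)-(5) would be plugged in, and the SAM encoder, prompt encoder, and decoder would stay frozen.

```python
import torch
from torch import nn
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

# Stand-in for the trainable self-prompting head; the frozen SAM modules are omitted
# and only the shape of their image embeddings is assumed.
prompt_head = nn.Sequential(nn.Conv2d(256, 64, 3, padding=1), nn.ReLU(), nn.Conv2d(64, 1, 1))

optimizer = AdamW(prompt_head.parameters(), lr=2.5e-4, betas=(0.9, 0.999))
num_epochs = 1000                                   # as reported; reduce for a quick test
scheduler = CosineAnnealingLR(optimizer, T_max=num_epochs)
loss_fn = nn.BCEWithLogitsLoss()                    # placeholder loss

for epoch in range(num_epochs):
    for _ in range(4):                              # stand-in for iterating a DataLoader
        feats = torch.randn(8, 256, 64, 64)         # mini-batch of 8 SAM image embeddings
        labels = torch.randint(0, 2, (8, 1, 64, 64)).float()  # resized ground-truth masks
        optimizer.zero_grad()
        loss = loss_fn(prompt_head(feats), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```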
The Segment Anything Model (SAM) exhibits remarkable versatility and zero-shot learning abilities, owing largely to its extensive training data (SA-1B). Recognizing SAM's dependency on manual guidance given its category-agnostic nature, we identified unexplored potential within few-shot semantic segmentation tasks for remote sensing imagery. This research introduces a structured framework designed for the automation of few-shot semantic segmentation. It utilizes the SAM model and facilitates a more efficient generation of semantically discernible segmentation outcomes. Central to our methodology is a novel automatic prompt learning approach, leveraging prior guided mask to produce coarse pixel-wise prompts for SAM. Extensive experiments on the DLRSD datasets underlines the superiority of our approach, outperforming other available few-shot methodologies.
Self-guided Few-shot Semantic Segmentation for Remote Sensing Imagery Based on Large Vision Models
[ { "figure_caption": "Fig. 1 .1Fig.1. The overview of our proposed Few-shot Semantic Segmentation framework based on Self-guided Large Vision Model (Few-shot SLVM). We utilize the pretrained Large Vision Model, SAM, to extract both high-level (H) and intermediate (M) semantic features from support and query images. The pre-trained high-level features with a support mask are transformed into prior mask utilizing cosine similarity measures. We take the query features and generated prior mask as input to produce coarse pixel-wise prompts for SAM. During the training process, the model is trained using the image embeddings from the SAM encoder and the resized ground truth label, while the cumbersome encoder, prompt encoder, and decoder parts of the SAM structure are kept frozen.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Comparisons of few-shot segmentation performance between our proposed Few-shot SLVM and other methods under different splits on the DLRSD dataset.", "figure_data": "Methods1-Shot5-ShotFold-0 Fold-1 Fold-2 Fold-3 Mean Fold-0 Fold-1 Fold-2 Fold-3 MeanCANet25.31 12.55 18.41 26.66 20.73 28.29 17.10 21.36 29.45 24.05PANet36.15 20.55 26.98 38.41 30.52 40.85 23.61 35.87 45.67 36.50DMML-Net 45.03 31.23 47.38 47.17 42.70 57.23 39.86 56.62 62.60 54.08Ours50.70 34.15 50.47 43.64 44.74 61.90 52.20 58.72 60.05 58.22", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "ABLATION STUDY ON THE PROPOSED COMPONENTS OF OUR METHOD ON THE DLRSD DATASET UNDER ONE-SHOT SETTING.", "figure_data": "MethodsFold-0 Fold-1 Fold-2 Fold-3 MeanAPL PGML FSLA39.02 21.19 38.70 36.04 33.74✓46.77 27.30 46.44 41.85 40.59✓✓50.66 31.35 45.03 42.16 42.30✓✓ ✓50.70 34.15 50.47 43.64 44.74segmentation prompts, contributing to an overall 6.85% increase in Mean IoU.Similarly, disabling 'Few-Shot Learning Adaptation' leads to performance degra-dation, highlighting its role in enhancing the model's adaptability to limited datascenarios.Module Effectiveness: 1. Automatic Prompt Learning:", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Xiyu Qi; Yifan Wu; Yongqiang Mao; Wenhui Zhang; Yidan Zhang
[ { "authors": "Y Chen; C Wei; D Wang; C Ji; B Li", "journal": "Remote Sensing", "ref_id": "b0", "title": "Semi-supervised contrastive learning for few-shot segmentation of remote sensing images", "year": "2022" }, { "authors": "R Dale", "journal": "Natural Language Engineering", "ref_id": "b1", "title": "Gpt-3: What's it good for?", "year": "2021" }, { "authors": "F I Diakogiannis; F Waldner; P Caccetta; C Wu", "journal": "ISPRS Journal of Photogrammetry and Remote Sensing", "ref_id": "b2", "title": "Resunet-a: A deep learning framework for semantic segmentation of remotely sensed data", "year": "2020" }, { "authors": "I Gherboudj; H Ghedira", "journal": "Renewable and Sustainable Energy Reviews", "ref_id": "b3", "title": "Assessment of solar energy potential over the united arab emirates using remote sensing and weather forecast data", "year": "2016" }, { "authors": "S He; R Bao; J Li; P E Grant; Y Ou", "journal": "", "ref_id": "b4", "title": "Accuracy of segment-anything model (sam) in medical image segmentation tasks", "year": "2023" }, { "authors": "A Kirillov; E Mintun; N Ravi; H Mao; C Rolland; L Gustafson; T Xiao; S Whitehead; A C Berg; W Y Lo", "journal": "", "ref_id": "b5", "title": "Segment anything", "year": "2023" }, { "authors": "J Li; Y Pei; S Zhao; R Xiao; X Sang; C Zhang", "journal": "Remote Sensing", "ref_id": "b6", "title": "A review of remote sensing for environmental monitoring in china", "year": "2020" }, { "authors": "Y Mao; K Chen; W Diao; X Sun; X Lu; K Fu; M Weinmann", "journal": "ISPRS Journal of Photogrammetry and Remote Sensing", "ref_id": "b7", "title": "Beyond single receptive field: A receptive field fusion-and-stratification network for airborne laser scanning point cloud classification", "year": "2022" }, { "authors": "Y Mao; K Chen; L Zhao; W Chen; D Tang; W Liu; Z Wang; W Diao; X Sun; K Fu", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b8", "title": "Elevation estimation-driven building 3d reconstruction from single-view remote sensing imagery", "year": "2023" }, { "authors": "Y Mao; Z Guo; L Xiaonan; Z Yuan; H Guo", "journal": "IEEE", "ref_id": "b9", "title": "Bidirectional feature globalization for few-shot semantic segmentation of 3d point cloud scenes", "year": "2022" }, { "authors": "Y Mao; X Sun; X Huang; K Chen", "journal": "", "ref_id": "b10", "title": "Light: Joint individual building extraction and height estimation from satellite images through a unified multitask learning network", "year": "2023" }, { "authors": "K Michael; A Masters", "journal": "IGI Global", "ref_id": "b11", "title": "Realized applications of positioning technologies in defense intelligence", "year": "2006" }, { "authors": "O Moselhi; H Bardareh; Z Zhu", "journal": "Applied Sciences", "ref_id": "b12", "title": "Automated data acquisition in construction with remote sensing technologies", "year": "2020" }, { "authors": "X Qi; Y Mao; Y Zhang; Y Deng; H Wei; L Wang", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b13", "title": "Pics: Paradigms integration and contrastive selection for semisupervised remote sensing images semantic segmentation", "year": "2023" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "PMLR", "ref_id": "b14", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "B Romera-Paredes; P Torr", "journal": "PMLR", "ref_id": "b15", "title": "An embarrassingly simple 
approach to zero-shot learning", "year": "2015" }, { "authors": "Z Shao; K Yang; W Zhou", "journal": "Remote Sensing", "ref_id": "b16", "title": "A benchmark dataset for performance evaluation of multi-label remote sensing image retrieval", "year": "2018" }, { "authors": "F Sung; Y Yang; L Zhang; T Xiang; P H Torr; T M Hospedales", "journal": "", "ref_id": "b17", "title": "Learning to compare: Relation network for few-shot learning", "year": "2018" }, { "authors": "B Wang; Z Wang; X Sun; H Wang; K Fu", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b18", "title": "Dmml-net: Deep metametric learning for few-shot geographic object segmentation in remote sensing imagery", "year": "2021" }, { "authors": "Y Wang; Q Yao; J T Kwok; L M Ni", "journal": "ACM computing surveys (csur)", "ref_id": "b19", "title": "Generalizing from a few examples: A survey on few-shot learning", "year": "2020" }, { "authors": "T Wellmann; A Lausch; E Andersson; S Knapp; C Cortinovis; J Jache; S Scheuer; P Kremer; A Mascarenhas; R Kraemer", "journal": "", "ref_id": "b20", "title": "Remote sensing in urban planning: Contributions towards ecologically sound policies? Landscape and urban planning", "year": "2020" }, { "authors": "J Wu; R Fu; H Fang; Y Liu; Z Wang; Y Xu; Y Jin; T Arbel", "journal": "", "ref_id": "b21", "title": "Medical sam adapter: Adapting segment anything model for medical image segmentation", "year": "2023" }, { "authors": "Y Yi; Z Zhang; W Zhang; C Zhang; W Li; T Zhao", "journal": "Remote sensing", "ref_id": "b22", "title": "Semantic segmentation of urban buildings from vhr remote sensing imagery using a deep convolutional neural network", "year": "2019" }, { "authors": "X Yuan; J Shi; L Gu", "journal": "Expert Systems with Applications", "ref_id": "b23", "title": "A review of deep learning methods for semantic segmentation of remote sensing imagery", "year": "2021" } ]
[ { "formula_coordinates": [ 4, 276.54, 487.34, 204.04, 8.74 ], "formula_id": "formula_0", "formula_text": "Y = D(E(X)) \quad (1)" }, { "formula_coordinates": [ 4, 232.35, 603.29, 248.24, 9.65 ], "formula_id": "formula_1", "formula_text": "P = \mathrm{cosine}\big(E_H(I_Q),\; E_H(I_S) \odot M\big) \quad (2)" }, { "formula_coordinates": [ 5, 224.86, 187.99, 255.73, 9.65 ], "formula_id": "formula_2", "formula_text": "Y = D\big(W \odot E(X),\; (1 - W) \odot E(Y_P)\big) \quad (3)" }, { "formula_coordinates": [ 5, 243.26, 315.7, 237.33, 30.32 ], "formula_id": "formula_3", "formula_text": "L = \frac{1}{N} \sum_{i=1}^{N} \left( 1 - \frac{Y_i \cdot T_i}{\lVert Y_i \rVert \, \lVert T_i \rVert} \right) \quad (4)" }, { "formula_coordinates": [ 5, 265.12, 385.71, 215.48, 9.65 ], "formula_id": "formula_4", "formula_text": "L_{total} = \alpha L_s + \beta L_f \quad (5)" } ]
10.1007/978-3-030-61401-0_45/TABLES/5
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b8", "b10", "b11", "b12", "b13", "b14", "b17", "b18", "b19", "b20", "b21", "b22" ], "table_ref": [], "text": "Cancer is a major global health problem which is considered one of the measure causes of death globally. According to WHO, breast cancer is the second largest cause of death worldwide, and it is reported in 2018 that almost 9.6 million death cases occurred due to the breast cancer around the globe. BC can affect women and men; however, it is mainly widespread in women who may develop this tragical disease in a period of her lifetime. The breast cancer institute stated that breast cancer is one of the death-leading diseases that plague women in the world [1]. Cancer is caused by the uncontrollable growth of the cells which cumulate to create tumor. Regularly, pathologists classified cancers into two types, one is benign and the other one is malignant. In a benign tumor case, the cells build up abnormally and create a lump. However, they do not invade other neighboring organs of the body so they are not classified as a cancer. Cancer begins as a benign type and, without a treatment at the early stages, it turns to become a malignant type. Malignant tumor cells are prone to invade the neighboring organs if no medical intervention is taken. For example, if a malignant tumor is not remedied, it may reach into the muscles under the breast, which is hard to remove and the endanger of recurrence is much higher. Early breast cancer disease detection and prevention enhance survival by 85 [2]. Pathologists use morphological abnormalities of the nucleus as the main feature to differentiate between malignant and benign cancerous [3] .early diagnosis helps the treatment to be more effective [4]. there are many different tests that can be used to find and locate breast cancer. Some of these tests, such as magnetic resonance imaging MRI and computed tomography CT scans, mammogram, ultrasound and histopathological images. biopsies or Histopathological images are some of the first screening methods that serve to diagnose cancer or to see how far it has spread. These are used for those at danger of having breast cancer. These tests are interpreted by specialists, such as pathologists and radiologists. However, due to the very complex and there is a lot of information to process in images, it is also possible that even specialists can sometimes miss cancer cells on the images. Biopsy test is used as a technique for the diagnosis of the breast cancer which needs the experiences of the pathologists to diagnose the test, this task is always a time-consuming and in-depth assessment [5]. Many researchers developed several CAD systems for different diseases including bladder cancer, lung cancer, skin cancer, prostate cancer, colon cancer, cervical cancer, liver cancer and breast cancer [6][7][8] [9][10] [11]. Pathologists can use Computer aided detection (CAD) techniques to accelerate the process and to achieve early detection of breast cancer [12]. Artificial intelligence has improved the CAD systems through its several areas such as machine learning and deep learning. Machine learning contains four phases which are preprocessing, segmentation of the region of interest (ROI), features extraction which is a challenging task and selection, and lastly classification of suspicious lesions. The prediction accuracy of ML algorithms and their behavior is affected by the selections of features chosen [13]. 
The images are used as the building blocks of these systems to evaluate these images which can show breast-related information; however, the cancer signs are very subtle and their different formations reveal at their early stages [14]. Another area of artificial intelligence is deep learning which uses neural network that is inspired by biological neurons, it is composed of interconnected neurons which process the data through adjusting the weights and biases. Deep learning differs from machine learning because it does not need hand-crafted features and of its abilities to learn from complex image features, deep learning can be trained on huge datasets. These techniques which teaches the computer to learn and do tasks that need abilities of smart creatures are used in the CAD systems to diagnose and classify the breast cancers. neurons are the base of all neural networks including CNN. CNN is designed to learn the latent and intrinsic features from 2D or 3D images in a supervised manner, it is one of the most widely used deep learning techniques, mostly utilized for time series forecasting, natural language processing, and picture classification. State-of-the-art performance in a variety of application fields, including computer vision, image recognition, speech recognition, natural language processing, and speech recognition, has been made possible by its capacity to extract and recognize fine characteristics [15][16][17] [18]. Typically, CNN model is composed of multiple layers such as an input layer followed by several convolutional layers, pooling layers and an output layer which includes dense layers. Convolution layer has many neurons that are connected spatially and share the weight and bias. Standard convolution layer converts the input image into a feature map using convolution operation. At convolution layer, the input data is mapped with a group of kernels that produces a new feature map and this process is called convolution. Due to the advent of large-scale training data such as ImageNet [19], CNN displays superior performance in large-scale visual recognition. Researchers are also encouraged to adapt CNNs that have already been trained on ImageNet to other domains and datasets [20], such as breast cancer images datasets, due to the CNN's outstanding performance. Furthermore, CNN typically produces a more discriminative picture representation [21], which is necessary for breast cancer image classification. A pretrained model is a saved network that was already trained on a large dataset, usually on a large-scale imageclassification task. the pretrained model can be used as it is or transfer learning can be used to adapt this model to a specific task.\n. The majority of the most advanced CNNs available today can be used for precise image classification. There are multiple CNN architectures such as ImageNet, Inception-v4, ResNet-50, Inception-ResNet, Xception, Inception-v3, Inception-v1, and ResNetXt-50 LeNet-5, AlexNet, GoogleNet, VGGNet [22] such powerful backbones pretrained on such big datasets of extensive categories drawn from a diversity of sources could learn powerful representations with labels. The intuition behind transfer learning for image classification is that if a model is trained on a large and general enough dataset, this model will successfully serve as a generic model of the visual world. 
You can then make use of these learned feature maps without having to start from scratch by training a large model on a large dataset.\nEnsemble learning methods in the decision-making stage is a strong approach in which confidence scores of multiple base learners are grouped together to obtain the final prediction about an input sample. It improves the prediction capability and accuracy of the overall model which its individual base models cannot achieve. This strategy also increases the robustness of the model. Ensemble learning helps to correct the false positive or false negatives when a model makes a biased decision for a specific test sample. It decreases the variance of the prediction errors by adding some bias to the competitive base learners. The most famous ensemble approaches in literature that used in different applications are average probability, majority voting and weighted average probability. In the past, several ensemble-based studies have been made for breast cancer histology image analysis [23]. In our study case, we use totally different approach, in such a way every model is trained fully independently on the whole dataset, it is possible to train any numbers of models such as three or five, these models then will have output, the algorithm will make final prediction based on the average of all the outputs of models it makes on fully independent trained models." }, { "figure_ref": [], "heading": "Literature review", "publication_ref": [ "b23", "b24", "b25", "b26", "b27", "b28", "b29", "b30" ], "table_ref": [], "text": "In literature many researches have been made using Artificial intelligence (AI) which is a term used to refer to the creation of models that have intelligent behaviors related to intelligent beings without the human interventions. AI utilizes a wide range of tools and principles such as math, logic, and biology. Modern AI technologies are becoming more and more capable of handling diverse and unstructured data, including images and natural language text, which is an important feature. A topic of which has had great interest is the use of artificial intelligence in medicine. many researchers use machine learning in the cad system, which is considered a branch of AI. Machine learning enables systems to find features and derive their own rules to make automatic decisions and predicts values [24]. ON another hand, deep learning is also used in CAD breast cancer detection systems and in many applications where deep learning can be a three-layer or more neural network. These neural networks make an effort to mimic how the human brain functions, enabling it to \"learn\" from vast amounts of data. Additional hidden layers can be added to enhance and refine the performance of the neural network model. In this literature review will cover both machine learning and deep learning networks. In [25] article, they used Machine learning algorithms (logistic regression, random forests, support vector machines) where they used number of features such as BMI, Glucose, Insulin, HOMA, Leptin, Adiponectin, Resistin and MCP-1. Using support vector machine (SVM) models has achieved sensitivity (82 % -88%) and specificity ranging between 85 and 90%. The 95% confidence interval for the AUC was [0.87, 0.91]. In [26] Rahul Karmakar has proposed five classifiers (K-Nearest Neighbors (KNN), Random Forest (RF), Decision Trees (DT), Logistic Regression (LR), and Support Vector Machines (SVM) to measure how they perform on WISCONSIN datasets predicting breast cancer. 
After using the k-fold cross-validation technique, Random Forest produced the best results. The dataset was divided into 90% training and 10% testing. The score of Random Forest, was nearly 0.96488 using cross-validation, which was the highest among them. It was concluded that compared to other classifiers used in their study, Random Forest is significantly more accurate. However, Logistic Regression performed admirably and provided accuracy that was comparable to Random Forest. The best classifier for predicting breast cancer, according to their analysis of the application of various classifiers Random Forest with different divisions of training data and testing data. In [27] the work has developed a CAD system using convolutional neural network where they built the model with four inputs to accommodate the four images with different magnification levels in parallel. The model depended on EfficientNet-B0 as the core of the CAD system which it used histopathological images to classify using the neural network and it performed well surpassing machine learning algorithm and some other neural networks. In [28] VGG16, VGG19, and ResNet50, three popular pre-trained deep CNN models, were used for both full training and transfer learning. Because it is so difficult to categorize breast cancer histology images, a very sophisticated architecture is needed to solve this issue. Due to their more intricate architecture, the VGG16, VGG19, and ResNet50 pre-trained CNN models are extremely preferable. Additionally, these models have demonstrated comparatively high performance for difficult computer vision problems across a variety of domains and it was concluded that pre-trained VGG16 with logistic regression classifier produced the best performance with 92.60% accuracy, 95.65% area under ROC curve (AUC), and 95.95% accuracy precision.\nIn [29] the authors proposed network with three stages in which the first stage has three parallel CNN branches with deep residual blocks. The next the three parallel CNN branches were merged to build a feature fusion. Finally, the features are classified at the last stage. The Breakhis dataset was used to evaluate this approach using the four magnification factors where it achieved 97.14% accuracy. Another fusion technique which uses the layers of the pretrained VGG19 which can provide a better initial wight optimization. those fused layers which can approximately cover the nuclei-scale, nuclei organization, and structurescale features, the robustness of this so-called FCNN is embodied in its ability to cover multi-scale information, while other comparative ones can only focus on a certain level of information, this was proposed by [30].On other hand, mammograms are also used in ensemble approaches, in a way the ensemble classifier and feature weighting algorithm were built where an ensemble classifier model is designed using k-nearest neighbor (KNN), bagging, and eigenvalue classification (EigenClass) to determine whether a mammogram contains normal, benign, or malignant tumors based on a majority voting rules as in [31]." }, { "figure_ref": [ "fig_1" ], "heading": "Proposed Methodology", "publication_ref": [ "b31" ], "table_ref": [], "text": "We looked to the literature and found it was not mentioned about drawing on different well-recognized CNN models ensemble that are previously trained fully and then averaged ensembled. The decisions which are based on the common behaviors of the classifiers, making them more reliable and avoiding overfitting [32]. 
In our approach we took different popular convolutional neural models, next trains every model independently then combines them in parallel. The image to be classified will be entered to the different models simultaneously to achieve more accuracy and better performance. Every model will predict its classification independently and its influence on the model will be proportional to its accuracy. After the classification of every model, each model will have a probability output between 0 and 1. if the output is less than .5 that means it belongs to the first class and if it is bigger than .5, that means it belongs to the second class. We sum the last outputs of models and multiply every model by its accuracy, then we divide by sum of the models accuracies as shown in the next function. In our approach we used state of the art models such as Inception-V3, ResNet50 and DenseNet-201 after finetuning them to make them suitable for recognition of breast cancer type. The approach can be illustrated in fig. 1. which shows independently trained models connected in parallel. Each model will have different effect on the model depending on its accuracy according to the following formula. This design will benefit from the strength of every model in way that increases the accuracy of the whole CAD system (automatic breast cancer detection). w represents the accuracies of the models. Gathering those models altogether to use advantages of every model, surely every model will be better at detecting specific lines or edge, and may this model will have defects in detecting and classifying a kind of other edges. " }, { "figure_ref": [], "heading": "Convolutional models", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Resnet-50", "publication_ref": [ "b32" ], "table_ref": [], "text": "ResNet stands for residual network, which points to the residual blocks that make up the architecture of the network.\nResidual learning framework was presented to ease the training of networks that are considerably deeper than those used in another architectures. the layers are reformulated as learning residual functions with reference to the layer inputs, as alternative to learning unreferenced functions [33]. The ResNet architecture was built in response to a surprising observation in deep learning research which is making neural network deeper was not always improving the results. The ResNet model follows two basic design rules. Firstly, the number of filters in every layer is the same based on the size of the output feature map. Secondly, if the feature map's size is divided by 2, it has double the number of filters to keep the time complexity of every layer. ResNet-50 is made up of 50 layers that are arranged into 5 blocks, each consisting of a set of residual blocks. The residual blocks ease the preservation of information from previous layers, which helps the network to learn better representations of the input data. Bottleneck design was used for the building block in the ResNet-50, A bottleneck residual block uses 1×1 convolutions, known as a \"bottleneck\", which decreases the number of parameters and matrix multiplications. This allows much faster training of every layer. Also, It uses a stack of three layers instead of two layers in each residual function ." 
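The accuracy-weighted averaging described above (sum the models' outputs, each multiplied by that model's accuracy, and divide by the sum of the accuracies) can be written compactly as shown below. The probability values and accuracy weights are made-up placeholders for three independently trained backbones (e.g., ResNet50, DenseNet-201, Inception-V3), and the mapping of fused outputs below 0.5 to the first class follows the description in the text; this is a sketch, not the authors' implementation.

```python
import numpy as np

def weighted_ensemble(probs, accuracies, threshold=0.5):
    """Accuracy-weighted average of the sigmoid outputs of independently trained models.

    probs:      (n_models, n_samples) per-model probabilities for the second class
    accuracies: (n_models,) validation accuracies used as the weights w
    """
    probs = np.asarray(probs, dtype=float)
    w = np.asarray(accuracies, dtype=float)
    fused = (w[:, None] * probs).sum(axis=0) / w.sum()        # weighted average
    return fused, (fused >= threshold).astype(int)            # 0 = first class, 1 = second class

# Example with three hypothetical models and three test images
p = [[0.91, 0.12, 0.55],
     [0.87, 0.20, 0.48],
     [0.95, 0.05, 0.61]]
acc = [0.96, 0.94, 0.95]
fused_probs, labels = weighted_ensemble(p, acc)
```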
}, { "figure_ref": [], "heading": "DenseNet-201", "publication_ref": [ "b33" ], "table_ref": [], "text": "ResNet has shown that convolutional networks can be considerably deeper, efficient and more accurate to train if they consist of shorter connections between layers near to the input and those near to the output. the Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion. These kinds of networks are structurally different; however, their basic idea is to use shortcut connections from shallow layers to deep layers. This connection method can circumvent gradient vanishing problem in the networks with deep layers.\nFor each layer, the feature-maps of all earlier layers are used as inputs, and its own feature-maps are used as inputs into all next layers. DenseNets have several appealing advantages: they decrease the vanishing-gradient problem, boosting feature propagation, help feature reuse, and considerably decrease the number of parameters. DenseNets achieved significant improvements over the state-of-the-art on many tasks whilst requiring less computation power to achieve high performance [34]. The architecture is divided into dense blocks with all the successive layers in each block where it uses one-by-one convolution to maintain the spatial resolution, but it reduces the depth of the feature map, followed by max pooling to decrease the feature map size. There are different DenseNets types, such as DenseNet-121, DenseNet-169, DenseNet-201, DenseNet-264, etc., our study used DenseNet-201 which consists of 201 layers with more than 20 M parameters. Fewer parameters are used in DenseNets compared to traditional CNNs because there are no unnecessary feature maps. the structure of DenseNets is divided into dense blocks where the feature map dimensions stay constant inside a block having different filters." }, { "figure_ref": [], "heading": "Inception-v3", "publication_ref": [], "table_ref": [], "text": "Inception v3 substantially focuses on utilizing less computational power by modifying the previous Inception architectures. The paper \"Rethinking the Inception Architecture for Computer Vision \"proposed this idea which was published in 2015. It was co-written by Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, and Jonathon Shlens.\nIn comparison to VGGNet, Inception Networks (GoogLeNet/Inception v1) have demonstrated to be more efficient in computation, both in terms of the number of parameters generated by the network and in terms of memory and other resources which made it more cost-efficient. If any changes in an Inception Network are to be made, care has to be taken to ensure that the computational advantages are achieved. Therefore, the adaptation of an Inception network for different task cases turns out to be a problem because of the uncertainty of the efficiency of the new network. In an Inception v3 model, several techniques for optimizing the network have been suggested to ease the constraints for easier model adaptation. The techniques involve dimension reduction, regularization, parallelized computations and factorized convolutions." }, { "figure_ref": [], "heading": "Preprocessing", "publication_ref": [], "table_ref": [], "text": "Preprocessing techniques are an important part of the deep learning process. By carefully preprocessing data, we can improve the performance of our trained model and make it more robust to variations in the input data. 
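Before detailing the individual operations in the subsections that follow (normalization, horizontal and vertical flips, shear, and zoom), a minimal sketch of how such a preprocessing and augmentation pipeline could be configured with Keras' ImageDataGenerator is given below. The specific ranges and the directory layout are illustrative assumptions, not values reported in this work.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative configuration; the exact ranges are assumptions made for demonstration.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255.0,       # normalization: scale pixel intensities to [0, 1]
    horizontal_flip=True,      # random mirroring across the vertical axis
    vertical_flip=True,        # random mirroring across the horizontal axis
    shear_range=0.2,           # random shear transformation
    zoom_range=(0.8, 1.2),     # random zoom in / zoom out
)

# Hypothetical directory layout: one sub-folder per class (benign / malignant).
train_generator = train_datagen.flow_from_directory(
    "breakhis/train",
    target_size=(224, 224),
    batch_size=32,
    class_mode="binary",
)
```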
Also, it can have a significant impact on the performance of the trained model. There are a variety of preprocessing techniques that can be used in deep learning, depending on the specific dataset and task at hand. Some common preprocessing techniques that we used are as the followings:" }, { "figure_ref": [], "heading": "Normalization:", "publication_ref": [], "table_ref": [], "text": "If the input images are normalized, the model will converge faster and more accurate. When the input images are not normalized, the shared weights of the network have different calibrations for different features, which can force, the time of cost function to converge, taking longer time and in less proficiently way. Normalizing the data makes the cost function much easier to train." }, { "figure_ref": [], "heading": "Horizontal flip", "publication_ref": [], "table_ref": [], "text": "it is a type of transformation that is used in data augmentation techniques to increase the dataset used in deep learning that causes the images to flip horizontally from left to right. It makes a mirrored image of the original image along the vertical axis." }, { "figure_ref": [], "heading": "Vertical flip", "publication_ref": [], "table_ref": [], "text": "A vertical flip is the transformation of a geometric figure or image in which every point is reflected across a horizontal axis. This means that the top of the figure or image becomes the bottom, and the bottom becomes the top. Vertical flips are often used in image processing. They can be used to create mirror images or to simply invert an image. Vertical flips can also be used as a data augmentation technique in machine learning to increase the size and diversity of a training dataset." }, { "figure_ref": [], "heading": "Shear", "publication_ref": [], "table_ref": [], "text": "Shear in data augmentation is a geometric transformation that skews the image along a particular axis. This can be used to create a more diverse training dataset for machine learning models, and to help them learn to generalize to new data. There are two types of shears: horizontal shear and vertical shear. Horizontal shear: skews the image to the left or right. Vertical shear skews the image up or down. The amount of shear is typically specified by a shear angle, which is measured in degrees. A shear angle of 0 degrees means no shear, while a shear angle of 45 degrees means that the image is sheared at a 45-degree angle." }, { "figure_ref": [], "heading": "Zoom", "publication_ref": [], "table_ref": [], "text": "Zoom data augmentation is a technique used to increase the size and diversity of a training dataset for machine learning models by zooming in or out on images. This can help models learn to be more robust to changes in scale, and to generalize better to new data. Zoom data augmentation can be implemented using a variety of methods. One common approach is to use a random zoom factor, which can be specified as a range or a single value. For example, a zoom factor of 0.5-1.5 would randomly zoom the image in or out by a factor of between 0.5 and 1.5. Another approach to zoom data augmentation is to use a fixed zoom factor. This is typically used when you want to zoom in on a specific region of the image. For example, you might want to zoom in on the face of a person in an image to help a model learn to recognize faces." }, { "figure_ref": [], "heading": "Fig. 
5 confusion matrix for Inception-V3", "publication_ref": [], "table_ref": [], "text": "The three state-of-the-art models trained on Breakhis are used to perform experiments. accuracy_score method in scikit learn is used to measure the accuracies. One of These models is ResNet-50 which achieved the best accuracy of 97.55% on the whole dataset. While the accuracy of Inception_V3 on the whole dataset is 96.63% on BreakHis dataset. The lowest performance, in this case, is observed by the DenseNet201 model 94.55 Accuracy. precision, recall, and F1-score of all models on the whole dataset are shown in above Tables 1,2 and 3. In Table 6, the results for complete BreakHis dataset are given. Table 6 shows the evaluation metrics of the weighted averaged ensembled. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "If any outputs of any trained models were weighted summed, then averaged by the sum of the accuracies, their final output accuracy and other evaluation metrics will be better than the best of the joining model. We tried the ensemble techniques on many models and dataset and the final output was always better. This result is because every model will share its experience and its expressiveness and every model will have different defects than others, in this way every model will correct the mistake of other models using the weighted average ensemble. Also, we tried to average them in a way that there is not any weight influence on every model, however this approach did not have big impact on the final output, the F1-score has decreased by one percent from the weighted average method." }, { "figure_ref": [], "heading": "Acknowledgment", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by National Natural Science Foundation of China (No. 62073120), the Natural Science Foundation of Jiangsu Province (No. BK20201311)." }, { "figure_ref": [], "heading": "Training methodology", "publication_ref": [], "table_ref": [], "text": "We used TensorFlow for training the models on the laptop with these specifications: GPU Nvidia Quadro P600 which has 4 giga bit ram, and the CPU is Core I7 with ram 16 Giga bit. The back-forward optimization method was Adam with learning rate 0.0001. Furthermore, we used early stop method that will stop the training if there are not any improvements after five epochs. The global average pooling layer is used which averages every feature map and provides a single value. It is followed by a dense layer which is followed by dropout with a probability of 0.4. the final layer is the classification layer which is consisted of only one neuron with sigmoid activation function." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "In this study, we are going to show the effectiveness of using weighted ensemble approach in increasing the metrics of evaluation such as increasing accuracies, F1-scores and recall and decreasing the false positives and false negatives predictions. The scikit-learn learn methods have been used to measures the accuracies and other evaluation metrics. We have made various experiments to do the binary classification of breast cancer histopathology images. We have analyzed the performance of state-of-the-art CNN models and have shown the performance metrics of each model alone, and also a comparative analysis with the weighted averaged ensembled one. 
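The training setup described in the Training methodology section (a global average pooling layer, a dense layer, dropout with probability 0.4, a single sigmoid output neuron, the Adam optimizer with learning rate 0.0001, and early stopping after five epochs without improvement) can be sketched as follows. The dense-layer width, the binary cross-entropy loss, and the backbone-loading call are assumptions made for illustration; the same head applies to ResNet-50, DenseNet-201, and Inception-V3 alike.

```python
import tensorflow as tf

def build_classifier(backbone):
    """Attach the classification head described above to a pre-trained backbone."""
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)  # one value per feature map
    x = tf.keras.layers.Dense(256, activation="relu")(x)           # dense layer (width assumed)
    x = tf.keras.layers.Dropout(0.4)(x)                            # dropout probability from the paper
    output = tf.keras.layers.Dense(1, activation="sigmoid")(x)     # single-neuron binary classifier
    return tf.keras.Model(backbone.input, output)

backbone = tf.keras.applications.ResNet50(include_top=False, input_shape=(224, 224, 3))
model = build_classifier(backbone)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # Adam with learning rate 0.0001
    loss="binary_crossentropy",                              # assumed loss for the sigmoid output
    metrics=["accuracy"],
)
early_stop = tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)
# model.fit(train_generator, validation_data=val_generator, epochs=50, callbacks=[early_stop])
```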
The weighted average ensemble model increased the accuracy by almost one percent and improved all of the evaluation metrics. Accuracy, F1-score, recall, and precision are chosen as the evaluation metrics. First, we report the metrics of the individual models, obtained with the classification_report method in scikit-learn. Fig. 3 shows the confusion matrix of ResNet-50, and Table 1 lists its performance metrics. The DenseNet-201 confusion matrix is shown in Fig. 4. Fig. 5 and Table 3 show the evaluation metrics of the Inception-V3 model. Fig. 6 presents the confusion matrix of the weighted average ensemble model: the numbers of false positives and false negatives decrease, it outperforms all of the individual models, and the overall accuracy improves by almost 1 percent. The final accuracy is 98%, while the best individual model reaches 97%. " }, { "figure_ref": [], "heading": "Conflict of Interest", "publication_ref": [], "table_ref": [], "text": "The authors declare that there is no conflict of interest regarding the publication of this article." } ]
Breast cancer is a serious disease that afflicts millions of people each year, and the number of cases is increasing. Early detection is the best way to reduce the impact of the disease. Researchers have developed many techniques to detect breast cancer, including the use of histopathology images in CAD systems. This research proposes a technique that combines already fully trained models using an adaptive weighted average ensemble. This differs from the literature, where the average ensemble is typically formed before training and the ensembled models are trained simultaneously; our approach instead applies the adaptive weighted average ensemble after training, which improves the evaluation metrics. It averages the outputs of every trained model, with each model weighted according to its accuracy. The adaptive weighted ensemble model achieves 98% accuracy, an improvement of 1 percent over the best participating model in the ensemble, which reaches 97%. It also decreases the numbers of false positives and false negatives and enhances the other performance metrics.
Breast Cancer classification by adaptive weighted average ensemble of previously trained models
[ { "figure_caption": "f =first model output×w1+ second model output×w2 + third model output×w3 𝑤1+𝑤2+𝑤3", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 11Fig. 1 weighted ensembled fully trained models", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 22Fig. 2 samples of datasets for breast cancer", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 66Fig. 6 confusion matrix for the weighted averaged ensembled model", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "weighted averaged ensembled model metrics", "figure_data": "Type of metricprecisionrecallF1-scoresupport0.96.98.973721.99.98.99815Macro avg.98.98.981187Weighted avg.98.98.981187", "figure_id": "tab_0", "figure_label": "4", "figure_type": "table" } ]
Mosab S M Farea; Zhe Chen
[ { "authors": "A Aloyayri; A Krzyżak", "journal": "LNAI", "ref_id": "b0", "title": "Breast Cancer Classification from Histopathological Images Using Transfer Learning and Deep Neural Networks", "year": "2020" }, { "authors": "M Zeeshan; B Salam; Q S B Khalid; S Alam; R Sayani", "journal": "Cureus", "ref_id": "b1", "title": "Diagnostic accuracy of digital mammography in the detection of breast cancer", "year": "" }, { "authors": "E G Fischer", "journal": "Acta Cytologica", "ref_id": "b2", "title": "Nuclear Morphology and the Biology of Cancer Cells", "year": "2020" }, { "authors": "H Zabit; Z Sofia; G Begonya; J J Aguirre; A M Vanegas", "journal": "Sensors", "ref_id": "b3", "title": "Breast Cancer Histopathology Image Classification Using an Ensemble of Deep Learning Models", "year": "2020" }, { "authors": "W Gitanjali; K Amandeep", "journal": "IEEE", "ref_id": "b4", "title": "A Deep CNN Technique for Detection of Breast Cancer Using Histopathology Images", "year": "2020" }, { "authors": "N Kumar; M Sharma; V P Singh; C Madan; S Mehandia", "journal": "Biomedical Signal Processing and Control", "ref_id": "b5", "title": "An empirical study of handcrafted and dense feature extraction techniques for lung and colon cancer classification from histopathological images", "year": "2022" }, { "authors": "N N Prakash; V Rajesh; D L Namakhwa; S Dwarkanath Pande; S H Ahammad", "journal": "Scientific African", "ref_id": "b6", "title": "A DenseNet CNN-based liver lesion prediction and classification for future medical diagnosis", "year": "2023" }, { "authors": "M A Laurie; S R Zhou; M T Islam; E Shkolyar; L Xing; J C Liao", "journal": "Urologic Clinics of North America", "ref_id": "b7", "title": "Bladder Cancer and Artificial Intelligence: Emerging Applications", "year": "2023" }, { "authors": "A Ghoneim; G Muhammad; M S Hossain", "journal": "Future Generation Computer Systems", "ref_id": "b8", "title": "Cervical cancer classification using convolutional neural networks and extreme learning machines", "year": "2020" }, { "authors": "V Anand; S Gupta; D Koundal; K Singh", "journal": "Expert Systems with Applications", "ref_id": "b9", "title": "Fusion of U-Net and CNN model for segmentation and classification of skin lesion from dermoscopy images", "year": "2023" }, { "authors": "Z Liu; C Yang; J Huang; S Liu; Y Zhuo; X Lu", "journal": "Future Generation Computer Systems", "ref_id": "b10", "title": "Deep learning framework based on integration of S-Mask R-CNN and Inception-v3 for ultrasound image-aided diagnosis of prostate cancer", "year": "2021" }, { "authors": "H R H Al-Absi; B Belhaouari; S Samir; Sulaiman", "journal": "", "ref_id": "b11", "title": "A computer aided system for breast cancer detection and diagnosis", "year": "2014" }, { "authors": "I Guyon; A Elisseeff", "journal": "J. Mach. Learn. 
Res", "ref_id": "b12", "title": "An introduction to variable and feature selection", "year": "2003" }, { "authors": "F Moayedi; Z Azimifar; R Boostani; S Katebi", "journal": "ICIAR", "ref_id": "b13", "title": "Contourlet-based mammography mass classification", "year": "2007" }, { "authors": "A Krizhevsky; I Sutskever; G E Hinton", "journal": "", "ref_id": "b14", "title": "ImageNet classification with deep convolutional neural networks", "year": "" }, { "authors": "K Simonyan; A Zisserman", "journal": "", "ref_id": "b15", "title": "Very Deep Convolutional Networks for Large-Scale Image Recognition", "year": "2014" }, { "authors": "A Volokitin; G Roig; T A Poggio", "journal": "", "ref_id": "b16", "title": "Do deep neural networks suffer from crowding", "year": "" }, { "authors": "A Sharma; D Kumar", "journal": "Sci Rep", "ref_id": "b17", "title": "Classification with 2-D convolutional neural networks for breast cancer diagnosis", "year": "2022" }, { "authors": "J Deng; W Dong; R Socher; L J Li; K Li; F F Li", "journal": "IEEE", "ref_id": "b18", "title": "ImageNet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "A Krizhevsky; I Sutskever; G E Hinton", "journal": "NIPS", "ref_id": "b19", "title": "ImageNet classification with deep convolutional neural networks", "year": "2012" }, { "authors": "B Zhao; J Feng; X Wu", "journal": "Int. J. Autom. Comput", "ref_id": "b20", "title": "A survey on deep learning-based fine-grained object classification and semantic segmentation", "year": "2017" }, { "authors": "P Sudharshan; C Petitjean; F Spanhol; L E Oliveira; L Heutte; P Honeine", "journal": "Expert Syst Appl", "ref_id": "b21", "title": "Multiple instance learning for histopathological breast cancer image classification", "year": "2019" }, { "authors": "S Majumdar; P Pramanik; R Sarkar", "journal": "Expert Systems with Applications", "ref_id": "b22", "title": "Gamma function based ensemble of CNN models for breast cancer detection in histopathology images", "year": "2023" }, { "authors": "L Abokaff", "journal": "SN COMPUT. SCI", "ref_id": "b23", "title": "Classification of Breast Cancer Diagnosis Systems Using Artificial Intelligence Techniques: Survey", "year": "2022" }, { "authors": "M Patrício; J Pereira; J Crisóstomo", "journal": "BMC Cancer", "ref_id": "b24", "title": "Using Resistin, glucose, age and BMI to predict the presence of breast cancer", "year": "2018" }, { "authors": "R Karmakar; S Chatterjee; A K Das", "journal": "SN COMPUT. SCI", "ref_id": "b25", "title": "Breast Cancer Prediction Using Machine Learning Approach-A Performance Analysis", "year": "2023" }, { "authors": "M Ahmed", "journal": "", "ref_id": "b26", "title": "A combined feature-vector based multiple instance learning convolutional neural network in breast cancer classification from histopathological images", "year": "2023" }, { "authors": "Rajesh Shallu; Mehra", "journal": "", "ref_id": "b27", "title": "Breast cancer histology images classification: Training from scratch or transfer learning", "year": "" }, { "authors": "A M Ibraheem; K H Rahouma; H F A Hamed", "journal": "J. Med. Biol. 
Eng", "ref_id": "b28", "title": "3PCNNB-Net: Three Parallel CNN Branches for Breast Cancer Classification Through Histopathological Images", "year": "2021" }, { "authors": "X Yu; H Chen; M Liang", "journal": "Multimed Tools Appl", "ref_id": "b29", "title": "A transfer learning-based novel fusion convolutional neural network for breast cancer histology classification", "year": "2022" }, { "authors": "F Yan; H Huang; W Pedrycz; K Hirota", "journal": "Expert Systems with Applications", "ref_id": "b30", "title": "Automated breast cancer detection in mammography using ensemble classifier and feature weighting algorithms", "year": "2023" }, { "authors": "C I De Oliveira; M Z Do Nascimento; G F Roberto; T A A Tosta; A S Martins; L A Neves", "journal": "Multimedia Tools and Applications", "ref_id": "b31", "title": "Hybrid models for classifying histological images: An association of deep features by transfer learning with ensemble classifier", "year": "2023" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b32", "title": "Deep Residual Learning for Image Recognition", "year": "2016" }, { "authors": "G Huang; Z Liu; L Van Der Maaten; K Q Weinberger", "journal": "", "ref_id": "b33", "title": "Densely connected convolutional networks", "year": "" } ]
[]
2024-02-01
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b10", "b37", "b51", "b53", "b78", "b51", "b10", "b8", "b10", "b24", "b9", "b76", "b16", "b73", "b19", "b21", "b38", "b39", "b50", "b70", "b50", "b66", "b70", "b41", "b79", "b49", "b59", "b70", "b10", "b85", "b54" ], "table_ref": [], "text": "Developing intelligent agents capable of adhering to human directives remains a significant challenge in embodied AI. Recently, Vision-and-Language Navigation (VLN) [3,11,38,52,54,79], which requires an agent to comprehend natural language instructions and subsequently execute proper actions to navigate to the target location, serves as a use- [52] validation unseen set using OSR and SR metrics. Among them, 'DUET' [11] is the base model, 'Frequent Update' means updating at certain intervals within each sample, 'Stable Update' refers to initializing with the original base model for each sample and using its best in-sample update interval INT=1. All these strategies adopt TENT [68] for model updates. The results show that overly fast or overly slow TTA fail to achieve significant improvements.\nful platform for examining the instruction-following ability. Despite tremendous progress has been achieved such as transformer-based sequence-to-sequence learning [9,11,25], large-scale training data collection [10,77], and various reinforcement and imitation learning strategies [17,74], the navigational capabilities of agents within varied testing environments still warrant further improvement.\nIn the VLN task, agents are required to sequentially execute actions contingent upon the evolving environmental cues. Regrettably, owing to disparities in environmental factors, such as distinct room types and objects as shown in Figure 1(a), the trained agents inevitably confront significant shifts in data distribution when applied in practical scenarios [20,22]. In light of this issue, depending solely on a pre-trained and fixed VLN model is inadequate.\nRecently, Test-Time Adaptation (TTA) [39,40,51,71] has been recognized as an effective technique for leveraging unlabeled test samples to update models and address shifts in data distribution. It has garnered notable success across various computer vision tasks, such as image classification [51,68], segmentation [67,71], and video classification [42,80]. For instance, TENT [68] utilizes an entropy minimization objective to update model parameters, thereby enhancing the generalization ability to recognizing test data. Nonetheless, the application of TTA in the realm of VLN remains relatively uncharted. Although prevailing TTA methodologies can be integrated into VLN models with certain alterations, this direct application cannot well handle the adaptability-stability dilemma of models due to the multi-step action-execution nature of VLN. Specifically, in contrast to traditional classification tasks, where a single TTA operation suffices for a test sample, VLN mandates an agent to perform sequential actions within a single test sample. On one hand, while conducting TTA at every (or a few) action steps enables rapid agent adaptation to dynamic environments, frequent model updates may introduce significant model alterations, potentially causing cumulative errors and catastrophic forgetting [50,60,71], thus compromising model stability during testing. 
On the other hand, initializing the same model for stable TTA in each test sample may hinder the model's ability to adaptively learn experience from historical test samples, thereby impeding its potential for achieving superior performance. Figure 1(b) shows that both overly fast or overly slow model updates fail to achieve significant performance improvements.\nTo tackle the above issues, we proposes a Fast-Slow Test-Time Adaptation (FSTTA) method for the VLN tasks. Built upon a unified gradient-parameter decompositionaccumulation framework, our approach consists of a fast update phase and a slow update phase, pursuing a balance between adaptability and stability in model updates. Specifically, with a test-time training objective, such as entropy minimization, we can derive gradients at each action step in the fast update phase. However, due to the unsupervised nature of TTA, these gradients inevitably contain noise information. Using these gradients for model update can interfere with the adaptability, especially when the update is frequently invoked. Therefore, we attempt to find a reliable optimization direction by periodically analyzing the gradients generated during the recent multi-step navigation process. We first establish a local coordinate system to decompose these gradients into components with varying levels of consistency. Subsequently, these components are adaptively accumulated to pinpoint a concordant direction for updating the model. Besides, a gradient variance regularization is incorporated to dynamically adjust the learning rate.\nAfter a certain number of fast updates, the model parameters (also called model state) are recorded. To further mitigate the issues of cumulative errors and catas-trophic forgetting that may result from excessively frequent model updates, during the slow update phase, we revert the model to its historical state and conduct a decompositionaccumulation analysis on the parameter variation trajectory for a direct model update. This process is akin to the fast phase but shifts its focus from gradients to the parameters. Both phases are performed alternately during testing to balance the adaptability and stability of the model. As shown in Figure 1(b), the proposed method achieves significant improvement against other model update strategies.\nOur contributions can be summarized as follows:\n• We investigate the test-time adaptation within the realm of VLN. Our proposed method verifies TTA as a promising and viable avenue for enhancing VLN performance. • Based on a unified decomposition-accumulation framework for both gradients and parameters, our method ensures swift model adaptability to environmental changes in the short-term fast update phase, while preserves stability throughout the long-term slow update phase. • Our FSTTA elevates the performance of several leading VLN models across four popular benchmarks. When applied to the notable DUET model [11], our method yields a performance boost of over 5% on the representative discrete/continuous datasets REVERIE/R2R-CE. Furthermore, our method shows superior results compared to other premier TTA techniques. [86] and parameter-efficient adapter [55]." 
}, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b16", "b62", "b73", "b29", "b45", "b32", "b56", "b85", "b9", "b51", "b28", "b42", "b36", "b35", "b37", "b43", "b77", "b11", "b71", "b41", "b79", "b34", "b46", "b55", "b64", "b69", "b75", "b14", "b46", "b80", "b80", "b46", "b69", "b55" ], "table_ref": [], "text": "(ii) Adopting various training paradigms such as reinforcement and imitation learning [17,49,63,74]. Moreover, to estimate the completeness of instruction following and decide when to conduct backtracking, progress monitoring [45,85] and back-tracking [30,46] are also employed to promote training process. (iii) Performing data augmentation for training a stronger model. In recent years, more and more large-scale benchmarks are established via collecting human annotations [33,57,86] or creating new environments [10,52]. Other approaches explore techniques such as mixup and synthesis [29,43], style transfer [37], or future-view image semantics [36] for data augmentation.\n(iv) Leveraging additional information for boosting model capacity. Since the goal of VLN is to navigate in photorealistic environments, there are many kinds of information in the world that can be used such as knowledge [38], 3D scene geometry [44,78], and landmarks [12,72] as a more practical setting, has been tentatively explored for addressing the cumulative errors and catastrophic forgetting issues. Until now, test-time adaptation has been preliminarily explored in some sequential data analysis fields such as action recognition [42] and video classification [80]. However, TTA on VLN tasks is yet to be explored. Gradient-based Methods. Gradients are typically central to modern SGD-based deep learning algorithms. To date, gradient analysis research has predominantly focused on domain generalization (DG) [35,47,56,65,70,76], due to the negative impact of conflicting gradients from multiple domains on model optimization. Pioneering works [15,47,81] perform gradient surgery at the backpropagation phase via various strategies such as normal plane projection [81] and consensus learning [47]. Other approaches resort to gradient agreement regularization for refining the optimization direction by leveraging sharpness [70] or similarity [56,58] measurements. Different from the above models that only consider a single-phase gradient surgery in DG, we jointly analyze the gradient-parameter states for a two-phase (fast-slow) TTA in the VLN task." }, { "figure_ref": [], "heading": "Our Approach", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Preliminaries and Framework Overview", "publication_ref": [ "b10", "b13", "b37", "b10", "b54", "b9", "b10", "b50", "b38", "b50" ], "table_ref": [], "text": "Problem Setup and VLN Base Model. Given a natural language instruction I, the VLN task requires an agent to find the target viewpoint through the environment by executing a series of actions. During the navigation process, an undirected exploration graph G t = (V t , E t ) is progressively constructed, where V t denotes navigable nodes, E t indicates the connectivity edges, t is the current timestep. At this moment, the agent receives a panoramic view that contains 36 single images. The panorama is represented by the image features R t and their object features O t , where these features can be extracted by pre-trained vision transformers (ViT) [11,14,38]. 
To accomplish the instruction, the agent needs to predict probabilities for the currently navigable nodes and select the most probable one as the next movement action. The probabilities are predicted as:\n$s_t = \phi(I, R_t, O_t, H_t; \Theta), \quad s_t \in \mathbb{R}^{|V_t|}$, (1)\nwhere $H_t$ indicates the history information that encodes the observed visual features and performed actions [11,55], $\phi(\cdot)$ is the VLN base model such as the dual-scale graph transformer [10,11], and $\Theta$ denotes the learnable model parameters.\nFramework Overview. In this paper, we aim to adjust the VLN base model during the testing process in an unsupervised manner. Our FSTTA framework is illustrated in Figure 2. For each sample, at timestep t, we employ the commonly adopted entropy minimization objective [51,68] for test-time adaptation, which reduces the entropy of the probabilities over the current navigable nodes:\n$\mathcal{L}(s_t; \Theta) = -\sum_{i} s_{t,i} \log(s_{t,i})$. (2)\nDuring the optimization of the above objective, gradients are back-propagated to update the model's parameters. However, updating the whole base model is computationally infeasible. As a result, we only consider a small portion of the model parameters for gradient calculation. Since the affine parameters in normalization layers capture data distribution information, numerous TTA methods opt to update these parameters for adaptation [39,51,68]. In this paper, we employ the model's final few layer-norm operations for TTA and keep the other parameters frozen. For brevity, we still use the symbol $\Theta$ to represent the parameters to be updated, $\Theta \in \mathbb{R}^D$. To fully leverage both gradient and parameter information, we propose an effective two-phase adaptation for fast and slow model updates under a unified decomposition-accumulation analysis framework." }, { "figure_ref": [], "heading": "Fast Update via Gradient Analysis", "publication_ref": [ "b49", "b59" ], "table_ref": [], "text": "At timestep t in the navigation process, the agent is required to select an action (navigable node) using the predicted score $s_t$. With this score, we can calculate the TTA loss (Eq. (2)) and then derive the gradient of the model parameters $\Theta$ as $g_t = \nabla\mathcal{L}(s_t; \Theta)$, $g_t \in \mathbb{R}^D$. 
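As a concrete illustration of this adaptation objective, the snippet below computes the entropy in Eq. (2) over the scores of the currently navigable nodes and back-propagates it only into the affine parameters of the last few layer-norm layers. The way these layers are located and the softmax applied to raw logits are simplifying assumptions; the actual DUET/HM3D code bases expose their own interfaces for both.

```python
import torch
import torch.nn.functional as F

def last_layernorm_params(model, num_layers=4):
    """Collect the affine (weight/bias) parameters of the last few LayerNorm modules (assumed layout)."""
    ln_modules = [m for m in model.modules() if isinstance(m, torch.nn.LayerNorm)]
    params = []
    for m in ln_modules[-num_layers:]:
        params.extend([m.weight, m.bias])   # these must have requires_grad=True
    return params

def step_gradient(node_logits, params):
    """Entropy of the action distribution over navigable nodes (Eq. (2)) and its flattened gradient g_t."""
    probs = F.softmax(node_logits, dim=-1)            # s_t; drop the softmax if scores are already normalized
    loss = -(probs * torch.log(probs + 1e-12)).sum()
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])  # g_t in R^D
```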
Traditional TTA methods conduct adaptation independently at each time step, which can exacerbate the issue of cumulative errors [50,60], particularly in the VLN process that requires frequent action execution. Therefore, we propose to conduct a gradient decomposition-accumulation analysis, wherein we periodically analyze the gradients generated during the recent multi-step navigation process and identify a concordant direction for an iteration of model update.\nGradient Decomposition-Accumulation. During navigation, as shown in Figure 2, we perform a model update every M action steps. For the j-th update, the gradients from the previous M steps are collected as $G_j = \{g_{j,m}\}_{m=1}^{M}$, where $G_j \in \mathbb{R}^{M \times D}$ and $g_{j,m}$ denotes the t-th gradient $g_t$ with $t = M(j-1) + m$. Note that these gradients determine the learning direction of our VLN model, and a simple strategy to compute this direction is to take their average $\bar{g}_j = \frac{1}{M}\sum_{m} g_{j,m}$; however, this inevitably introduces step-specific noise. To avoid this issue, we aim to find a concordant direction among these gradients. We first establish a local coordinate system with D orthogonal unit axes (bases) $U_j = \{u_{j,d}^{\top}\}_{d=1}^{D} \in \mathbb{R}^{D \times D}$ for gradient decomposition, where each gradient can be approximately linearly represented by these bases. Intuitively, the axes along which the gradients exhibit higher variance after projection represent the directions of lower gradient consistency. These directions have the potential to introduce interference in determining a model update direction. Therefore, it is advisable to reduce the projection of the gradients onto these directions. To solve for the bases $U_j$, we can utilize singular value decomposition (SVD) as follows:\n$\lambda_{j,d},\, u_{j,d} = \mathrm{SVD}_d\big(\frac{1}{M-1}\hat{G}_j^{\top}\hat{G}_j\big)$, (3)\nwhere $\hat{G}_j$ is the centered gradient matrix obtained by removing the mean from $G_j$. The m-th row vector of $\hat{G}_j$ reflects the deviation between $g_{j,m}$ and the average gradient $\bar{g}_j$, and $\lambda_{j,d}$, $u_{j,d}$ denote the d-th largest eigenvalue and the corresponding eigenvector. Motivated by principal component analysis [59], it is obvious that a larger $\lambda_{j,d}$ corresponds to a higher variance of the gradient projection length $G_j u_{j,d}$, and vice versa. 
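A minimal sketch of this decomposition, together with the adaptive accumulation, length calibration, and learning-rate scaling detailed in Eqs. (4)-(6) below, is given here. It uses a reduced SVD of the centered gradient matrix, which yields the same non-zero eigenvalues and axes as the eigen-decomposition in Eq. (3) while dropping zero-variance directions; the default learning rate, threshold, and truncation interval follow the implementation details reported in the Experimental Setup. This is an illustrative reconstruction, not the authors' released code.

```python
import torch

def fast_direction(grad_window):
    """grad_window: (M, D) tensor of flattened gradients collected over the last M action steps."""
    M = grad_window.shape[0]
    g_bar = grad_window.mean(dim=0)
    centered = grad_window - g_bar                       # rows of the centered matrix \hat{G}_j
    # Reduced SVD: the non-zero eigenvalues of (1/(M-1)) \hat{G}_j^T \hat{G}_j are s**2 / (M-1),
    # and the rows of Vh are the corresponding axes u_{j,d}; zero-variance directions are dropped.
    _, s, Vh = torch.linalg.svd(centered, full_matrices=False)
    lam = s.pow(2) / (M - 1) + 1e-8
    coords = Vh @ g_bar                                  # projections <g_bar, u_{j,d}>
    direction = Vh.T @ (coords / lam)                    # accumulate components weighted by 1/lambda
    direction = direction * g_bar.norm() / (direction.norm() + 1e-12)  # calibrate length to ||g_bar||
    return direction, lam.sum()                          # concordant gradient and total variance sigma_j

def scaled_lr(sigma_j, sigma_hist, base_lr=6e-4, tau=0.7, low=0.9, high=1.1):
    """Dynamic learning-rate scaling with truncation to [low, high]."""
    scale = torch.clamp(1.0 + tau - (sigma_j - sigma_hist).abs(), low, high)
    return scale * base_lr
```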
Hence, we can derive a concordant gradient by adaptively aggregating the gradients' components on all the axes while considering the different eigenvalues (importance):\n$\nabla_j^{(fast)} = \sum_{d=1}^{D} \Phi_d(\lambda_{j,d}) \cdot \langle \bar{g}_j, u_{j,d} \rangle\, u_{j,d}$, (4)\nwhere the last term denotes the projected component of the averaged gradient $\bar{g}_j$ onto the d-th axis. $\Phi_d(\cdot)$ is referred to as the adaptive coefficient for accumulating all the components, which is simply defined as $\Phi_d(\lambda_{j,d}) = 1/\lambda_{j,d}$, reflecting the importance of the various axes. Notably, when the coefficient is removed, $\nabla_j^{(fast)}$ degenerates into $\bar{g}_j$, which is used in regular gradient descent approaches.\nBased on Eq. (4), a concordant optimization direction is established by enhancing the components that are convergent among $\{g_{j,m}\}_{m=1}^{M}$ and suppressing the divergent ones. However, the introduction of $\Phi_d(\cdot)$ makes the length of $\nabla_j^{(fast)}$ uncontrollable. Therefore, we calibrate its length to $\|\bar{g}_j\|_2$, which encodes the gradient length from the last three time steps, for a more reasonable model update:\n$\nabla_j^{(fast)} \leftarrow \big(\nabla_j^{(fast)} \|\bar{g}_j\|_2\big) / \|\nabla_j^{(fast)}\|_2$. (5)\nWith $\nabla_j^{(fast)}$, we can perform a fast model update by setting a learning rate $\gamma^{(fast)}$. Although traditional methods employ a fixed learning rate during optimization, such a setting might hinder model convergence, i.e., small learning rates slow down convergence while aggressive learning rates prohibit convergence [4]. Since fast updates are frequently invoked during navigation, relying on a fixed learning rate is sub-optimal. Therefore, we propose to dynamically adjust the learning rate throughout the fast update phase.\nDynamic Learning Rate Scaling. Rather than varying the learning rate through an optimizer or scheduler, we argue for a scaling method that leverages gradient agreement information from historical steps to dynamically adjust the speed of the model update. Current gradient alignment strategies typically impose direct constraints on the gradients [56,58], which are not suitable for our framework as they undermine the gradient decomposition-accumulation process. Given that second-order information (variance) has been demonstrated to be more effective than first-order information (mean) in gradient agreement learning [56], we directly utilize the trace of the gradient covariance matrix, $\mathrm{Tr}\big(\frac{1}{M-1}\hat{G}_j^{\top}\hat{G}_j\big)$, for scaling. Note that the trace is equal to the sum of the eigenvalues, $\sigma_j = \sum_d \lambda_{j,d}$. Here, when $\sigma_j$ deviates significantly from the historical variance, we assign a smaller learning rate, and vice versa:\n$\gamma_j^{(fast)} = \mathrm{Trunc}\big(1 + \tau - |\sigma_j - \sigma|\big) \cdot \gamma^{(fast)}$, (6)\nwhere $\mathrm{Trunc}(\cdot)$ is the truncation function that truncates the input to the interval $[a, b]$, $\tau$ is a threshold, and $\gamma^{(fast)}$ is the base learning rate. The historical variance $\sigma$ is updated as $\sigma \leftarrow \rho\sigma + (1-\rho)\sigma_j$ and maintained across all samples throughout the test stage, where $\rho$ is the update momentum.\nModel Update. With the above gradient and learning rate, we can perform the j-th fast model update:\n$\Theta_j = \Theta_{j-1} - \gamma_j^{(fast)} \cdot \nabla_j^{(fast)}$, (7)\nwhere the subscript of $\Theta$ indicates the index of the model update within the current test sample." }, { "figure_ref": [ "fig_1" ], "heading": "Slow Update via Parameter Analysis", "publication_ref": [ "b75" ], "table_ref": [], "text": "In the fast update phase, although we obtain concordant optimization directions, the frequent parameter updates may still dramatically change the VLN model. 
To maintain the stability of the VLN model during long-term usage, we revert the model to its historical states recorded in the fast update phase, and conduct a decomposition-accumulation analysis on the parameter variation trajectory for direct parameter modulation. The slow update phase shares the core formulation with the fast phase, but shifts the focus from gradients to the model parameters themselves.\nParameter Decomposition-Accumulation. Following the completion of the fast update phase on the o-th test sample, the model state (parameters) is recorded as Θ o,Jo , where J o denotes the final fast update step on this sample, and the subscript o has been omitted in the previous section. We then treat these historical states as a parameter variation trajectory to facilitate stable model updates. As shown in the right part of Figure 2, the slow model update is invoked every N samples. For the l-th update, historical model states are collected as M l = { Θ l,n } N n=0 , where M l ∈ R (N +1)×D , Θ l,n the o-th model state Θ o,Jo when o = N (l-1)+n and n ̸ = 0. Θ l,0 indicates the model state produced by the previous slow update, and we use it interchangeably with Θ (l-1) in the following. Note that in the slow update phase, we additionally incorporate Θ (l-1) from the previous update for analysis since it serves as a starting reference point for direct parameter modulation.\nSimilar to the fast update phase, the centered parameter matrix Ml can be constructed, where the n-th row vector in it reflects the deviation between Θ l,n and the averaged historical parameter Θl = 1/(N + 1) n Θ l,n . With Ml , we can obtain the following eigenvalues and eigenvectors:\nϵ l,d , z l,d = SVD d (1/N • M T l Ml )\n, where a larger ϵ l,d corresponds to a higher variance of the parameter projection length M l z l,d and vice versa. Z l = {z l,d T } D d=1 depicts the local coordinate system where each axis depicts the direction of parameter variation. Intuitively, the principal axes (with larger eigenvalues) delineate the primary directions of historical parameter variation, while minor axes (with smaller eigenvalues) often encompass noise [76]. To find a more reliable optimization path to traverse the trajectory of primary parameter changes, we pay more attention on the axis with the larger variance. Since there is no silver bullet to learning an optimization direction with only parameters, a reference direction can significantly aid in guiding the model towards a local optimal. Here, we leverage the parameter variations to calculate the reference direction:\nh l = 1 N -1 i=0 q i N n=1 q N -n • ( Θ l,0 -Θ l,n ),(8)\nwhere the hyper-parameter q ∈ (0, 1), which assigns larger weight to the more recent parameter deviations as they encapsulate richer sample information. Then, we calculate the optimization path (gradient) in the slow update phase as:\n∇ (slow) l = d Ψ d (ϵ l , h l ) • sign (< h l , z l,d >) z l,d ,(9)\nwhere the use of sign function sign(•) is to force the axes to be positively related to the reference direction h l . Notably, different from Eq. ( 4) that uses the projected components on each axis for estimating an optimization direction, here we only utilize the axes themselves for deriving ∇ (slow) l\n. The reason is that these axes depict the parameter variation direction, which can be directly used for estimating gradients. 
Ψ d (•) is referred as the adaptive coefficient for accumulating all the axes (optimization directions), defined as: , we can perform the l-th slow model update as follows:\nΨ d (ϵ l , h l ) = ϵ l,d • ∥h l ∥ 2 ∥ϵ l ∥ 2 ,(10)\nΘ (l) = Θ (l-1) -γ (slow) • ∇ (slow) l ,(11)\nwhere γ (slow) is learning rate. Since the slow update phase is designed for stable model learning and is not frequently invoked, we employ a fixed learning rate here instead of conducting dynamic learning rate scaling, as done in the fast phase. The updated parameter Θ (l) will be utilized for the subsequent test samples in conjunction with new fast update phases applied to them." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [ "b51", "b85", "b31" ], "table_ref": [], "text": "We evaluate FSTTA on four benchmarks: REVERIE [52], R2R [3], SOON [86], and R2R-CE [32] datasets. Experiments and ablation studies show our effectiveness." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b51", "b85", "b31", "b9", "b10", "b36", "b51", "b77", "b10", "b9", "b0" ], "table_ref": [], "text": "Datasets. Four datasets are adopted for conducting our experiments. Among them, REVERIE [52] contains 10,567 panoramic images and 21,702 high-level instructions, focusing on grounding remote target object within 90 buildings. R2R [3] provides step-by-step instructions for navigation in photo-realistic environments, which includes 10,800 panoramic views and 7,189 trajectories. SOON [86] also requires the agent to find the target object with a more detailed description of the goal. It has 3,848 sets of instruction and more than 30K long distance trajectories. R2R-CE [32] is a variant of R2R in continuous environments, where an agent is able to move freely and engage with obstacles. The dataset consists of 16,000 instruction-trajectory pairs, with non-transferrable paths excluded. Evaluation Metrics. We follow previous approaches [10,11,37,52,78] and employ the most commonly used metrics for evaluating VLN agents as follows: TL (Trajectory Length), NE (Navigation Error), SR (Success Rate), SPL (Success weighted by Path Length), OSR (Oracle Success Rate), RGS (Remote Grounding Success rate), and RGSPL (RGS weighted by Path Length). Implementation Details. To better conform to practical scenarios, for all datasets, we set the batch size to 1 during evaluation. Each sample (or each action step) is forward propagated only once during the testing process. We adopt DUET [11] and HM3D [10] as the base models. Since HM3D does not provide training code for R2R-CE dataset, we adopt another state-of-the-art method, BEVBert [1], for TTA. Note that for the base models, in Section 4.2, we report the results obtained from running their official codes.\nFor VLN models equipped with TTA strategies, we run the corresponding experiments 5 times while shuffling the order of the samples and report the average results. In our FSTTA, we only utilize the last four LN layers of base models for model updating, all the feature dimensions of these layers are 768. We set the intervals for fast and slow updates to M = 3 and N = 4, the learning rates of the two phases are γ(fast) = 6 × 10 -4 and γ (slow) = 1 × 10 -3 . For the dynamic learning rate scaling, we empirically set the threshold τ = 0.7 in Eq. ( 6) and the update momentum ρ = 0.95 with the truncation interval [0.9, 1.1]. And the hyper-parameter q in Eq. ( 8) is set to 0.1. All experiments are conducted on a RTX 3090 GPU." 
}, { "figure_ref": [], "heading": "Comparison with State-of-the-art VLN Models", "publication_ref": [], "table_ref": [ "tab_2", "tab_3", "tab_6", "tab_4", "tab_5" ], "text": "REVERIE. Table 1 presents a comparison of our FSTTA against state-of-the-art methods on the REVERIE dataset. Compared with the base models which do not perform test-time adaptation, the proposed method demonstrates favorable performance improvement across most evaluation metrics across the two dataset splits. Specifically, on the validation unseen split, our model exhibits notable advantages over DUET, with improvements of 5.3% on OSR, 7.1% on SR, and 2.7% on SPL. Furthermore, for the recent state-of-the-art method HM3D, our model displays enhanced generalization capabilities on the test unseen split, achieving remarkable improvements over HM3D, including 3.9%, 3.3%, and 1.3% increases on the three metrics. Compared with other state-of-the-arts, our proposed method can achieve superior or comparable performance. These results unequivocally affirm the effectiveness of our fast-slow test time adaptation model, showing the promising potential of TTA in the VLN field. It is noteworthy that none of the prior methods employed a TTA strategy on this task. R2R. Table 2 shows the comparison results on R2R dataset. Our approach outperforms the base models in most metrics (e.g., 72% → 75% for DUET on SR, 62% → 63% for HM3D on SPL). Notably, from the results of the above two datasets, our method, while enhancing the success rate of VLN, causes a slight increase in the path length (TL). We speculate that a possible reason is that performing TTA online may increase the likelihood of the agent deviating from its original action execution pattern, leading to more exploration or backtracking. This situation is further confirmed in the analysis of various TTA strategies in Table 5. SOON. The proposed FSTTA establishes new state-of-theart results across most metrics on this dataset. For instance, as shown in Table 3, on the validation unseen split, our model HM3D-FSTTA achieves SR and SPL of 42.44% and 31.03%, respectively, while the state-of-the-art method GridMM are 37.46% and 24.81%. On the test unseen split, our approach improves the performance of DUET by substantial gains (e.g., 21.42% → 23.23% for SPL). R2R-CE. FSTTA also generalizes well on the continuous environment, i.e., R2R-CE dataset, as shown in Table 4.\nThe results indicate that our approach demonstrates superior or comparable performance against other methods across several metrics." }, { "figure_ref": [ "fig_0" ], "heading": "Results for Different TTA Strategies", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "Currently, various TTA methods have been adeptly integrated for the dynamic model updates within diverse computer vision tasks, marking significant progress. Although the exploration of TTA's application within the VLN field remains relatively untapped, the integration of contemporary advanced TTA methodologies into VLN is feasible. Since efficiency is an important evaluation metric for TTA, we provide the average time taken by each method to execute a single instruction for comparison. Obviously, equipping with TTA inevitably incurs additional time costs. For the compared methods, SAR and TENT are the popular entropy minimization models, whereas NOTE, CoTTA, and EATA are state-of-the-art continual TTA methods. The results in Table 5 demonstrate the capability of our proposed FSTTA to blend model performance with testing efficiency. 
Specifically, on the validation unseen dataset of REVERIE, our method exhibits a discernible enhancement of 6.2% and 2.5% on the SR and SPL metrics compared to the state-ofthe-art SAR method, concurrently manifesting a reduction of 7% in testing time. From the results, directly applying existing TTA methods to the VLN task does not lead to significant performance improvements. Furthermore, we investigate different frequencies of updates based on TENT as well as the stable update approach. 'INT' represents the update interval, which means averaging the gradient information over a certain interval and then performing an iteration of model update; these results are consistent with those in Figure 1(b). It can be seen that our method still outperforms these strategies with marginally increased time costs." }, { "figure_ref": [ "fig_3" ], "heading": "Further Remarks", "publication_ref": [ "b51" ], "table_ref": [ "tab_7", "tab_8", "tab_9" ], "text": "We perform ablation studies and other in-depth analysis of FSTTA on the validation unseen set of REVERIE [52]. Ablation Studies of the Proposed FSTTA. In this work, we propose a FSTTA method for vision-and-language navigation, which consists of both fast and slow model update phases. To validate their effectiveness, we progressively integrate the two phases into the baseline DUET model. In addition, we design a baseline variant, which equips DUET with the vanilla TTA objective (TENT [68]) and simply utilize the averaged gradient in an interval (with the same M ) for fast model updates. Empirical findings from Table 6 illuminate that the integration of fast and slow phases progressively bolsters the base model by 2.8% and 4.3% on the SR metric. Moreover, the dynamic learning rate scaling module (DLR) also contributes to enhancing the model's performance. Furthermore, our method surpasses the vallina TTA method by a significant margin, showing the consideration of the fast-slow update mechanism is effective.\nWill our method experience catastrophic forgetting? For a VLN agent endowed with the TTA capability, it faces the issue of catastrophic forgetting of historical environments and instructions upon continually executing new instructions in new environments. To assess whether our method harbors this issue, we re-evaluate our methods on REVERIE validation seen data. Compared with the base model, as shown in Table 7, we find that: (1) Directly applying FSTTA with the base model on seen data can noticeably enhance performance. (2) After performing FSTTA on the unseen set, the obtained model, when tested directly on the seen dataset without TTA, achieves performance comparable to the base model, confirming that our method does not suffer from catastrophic forgetting.\n(3) Applying the updated model from unseen set to the seen set with TTA yields the best results. This indicates that our method is effective in accumulating experience from historical test data. Generalization Testing in More Practical Environments.\nIn the real-world applications, agents might encounter both previously seen and unseen scenarios. In our preceding experiments, we exclusively test on the validation seen and unseen sets separately. To verify the generalizability, we combine the seen and unseen sets into a unified set. Table 8 shows that FSTTA outperforms other TTA methods in effectively managing a variety of testing scenarios. Qualitative Analysis. 
Figure 3 provides a visualization of the agent's instruction execution process, validating that our proposed FSTTA approach can indeed dynamically enhance the VLN performance of the agent during testing." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "This paper explores the feasibility of TTA strategies in VLN. We propose a fast-slow test-time adaptation method, which performs decomposition-accumulation analysis for both gradients and parameters, achieving a balance between adaptability and stability. The encouraging performance is validated in extensive experiments. Several limitations of this paper are noteworthy. Firstly, our approach focuses on adapting normalization layers within the trained model. While normalization layers are widely employed in deep learning, there are still a few methods that do not utilize these settings. One viable approach to address this issue is to introduce additional normalization layers to the corresponding models and retrain them using the training data. In the future, we will also explore how our model can update other types of layers. Secondly, the VLN task itself is a cross-modal learning task. However, our TTA process does not explicitly consider this information. We plan to consider cross-modal TTA in the future. Thirdly, compared to the base model, the introduction of TTA inevitably incurs additional computational cost, which is a direction for future improvement. Finally, the frequencies of fast and slow updates are fixed and periodic. Adaptive update invocation strategies is worthy of consideration." } ]
Vision-and-Language Navigation (VLN) has witnessed significant advancements in recent years, largely attributed to meticulously curated datasets and proficiently trained models. Nevertheless, when tested in diverse environments, the trained models inevitably encounter significant shifts in data distribution, highlighting that relying solely on pretrained and fixed navigation models is insufficient. To enhance models' generalization ability, test-time adaptation (TTA) demonstrates significant potential in the computer vision field by leveraging unlabeled test samples for model updates. However, simply applying existing TTA methods to the VLN task cannot well handle the adaptability-stability dilemma of VLN models, i.e., frequent updates can result in drastic changes in model parameters, while occasional updates can make the models ill-equipped to handle dynamically changing environments. Therefore, we propose a Fast-Slow Test-Time Adaptation (FSTTA) approach for VLN by performing decomposition-accumulation analysis for both gradients and parameters in a unified framework. Specifically, in the fast update phase, gradients generated during the recent multi-step navigation process are decomposed into components with varying levels of consistency. Then, these components are adaptively accumulated to pinpoint a concordant direction for fast model adaptation. In the slow update phase, historically recorded parameters are gathered, and a similar decomposition-accumulation analysis is conducted to revert the model to a stable state. Extensive experiments show that our method obtains impressive performance gains on four popular benchmarks.
Test-time Adaptive Vision-and-Language Navigation
[ { "figure_caption": "Figure 1 .1Figure 1. (a) Illustration of the data distribution shift between training and testing samples in the VLN Task. (b) Comparison between various TTA strategies on REVERIE[52] validation unseen set using OSR and SR metrics. Among them, 'DUET'[11] is the base model, 'Frequent Update' means updating at certain intervals within each sample, 'Stable Update' refers to initializing with the original base model for each sample and using its best in-sample update interval INT=1. All these strategies adopt TENT[68] for model updates. The results show that overly fast or overly slow TTA fail to achieve significant improvements.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure2. Overall framework of the proposed Fast-Slow Test-Time Adaptation (FSTTA) for VLN tasks. In the fast update phase, taking 'Sample i' as an example, the model periodically analyzes the gradients ({g}) generated during the recent multi-step navigation and performs a gradient decomposition-accumulation analysis to pinpoint a concordant direction for model update. After a certain number of fast updates, historical model parameters ({Θ}) are recorded. In the slow update phase, we revert the model to its historical state and conduct a parameter decomposition-accumulation analysis to learn an optimization path for direct parameter modulation. Note that 'F', 'S' in the robots means the model parameters after fast and slow updates. 'F1' indicates the first fast update within a test sample.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Go to the laundry room on level 2 and empty the washing machine.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure3. Visual results of DUET[11] and FSTTA on REVERIE validation unseen set. Our method arrives the correct endpoint.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Comparison on REVERIE. 
Results better than the base model are highlighted in bold.", "figure_data": "MethodsREVERIE Val Unseen TL ↓ OSR SR SPL RGS RGSPL TL ↓ OSR REVERIE Test Unseen SR SPL RGS RGSPLHuman------21.18 86.83 81.51 53.66 77.8451.44Seq2Seq [3] [CVPR18]11.07 8.074.202.842.161.6310.89 6.883.993.092.001.58RCM [74] [CVPR19]11.98 14.23 9.296.974.893.8910.60 11.68 7.846.673.673.14SMNA [45] [ICLR19]9.07 11.28 8.156.444.543.619.238.395.804.533.102.39FAST [52] [CVPR20]45.28 28.20 14.40 7.197.844.6739.05 30.63 19.88 11.61 11.286.08Airbert [21] [ICCV21]18.71 34.51 27.89 21.88 18.2314.1817.91 34.20 30.28 23.61 16.8313.28HAMT [9] [NeurIPS21]14.08 36.84 32.95 30.20 18.9217.2813.62 33.41 30.40 26.67 14.8813.08HOP [53] [CVPR22]16.46 36.24 31.78 26.11 18.8515.7316.38 33.06 30.17 24.34 17.6914.34LANA [75] [CVPR23]23.18 52.97 48.31 33.86 32.8622.7718.83 57.20 51.72 36.45 32.9522.85BEVBert [1] [ICCV23]-56.40 51.78 36.37 34.7124.44-57.26 52.81 36.41 32.0622.09BSG [44] [ICCV23]24.71 58.05 52.12 35.59 35.3624.2422.90 62.83 56.45 38.70 33.1522.34GridMM [78] [ICCV23] 23.20 57.48 51.37 36.47 34.5724.5619.97 59.55 55.13 36.60 34.8723.45DUET [11] [CVPR22]22.11 51.07 46.98 33.73 32.1523.0321.30 56.91 52.51 36.06 31.8822.06DUET-FSTTA22.14 56.26 54.15 36.41 34.2723.5621.52 58.44 53.40 36.43 32.9922.40HM3D [10] [ECCV22]22.13 62.11 55.89 40.85 36.5826.7620.87 59.81 53.13 38.24 32.6922.68HM3D-FSTTA22.37 63.74 57.02 41.41 36.9726.5521.90 63.68 56.44 39.58 34.0523.04", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results on R2R dataset.", "figure_data": "MethodsR2R Val Unseen TL ↓ NE ↓ SR SPLSeq2Seq [3]8.397.81 22-RCM [74]11.46 6.09 43-SMNA [45]-5.52 4532EnvDrop [63]10.70 5.22 5248AirBert [21]11.78 4.10 6256HAMT [9]11.46 3.65 6661GBE [86]-5.20 5443SEvol [7]12.26 3.99 6257HOP [53]12.27 3.80 6457BEVBert [1]14.55 2.81 7564LANA [75]12.00-6862BSG [44]14.90 2.89 7462GridMM [78]13.27 2.83 7564DUET [11]13.94 3.31 7260DUET-FSTTA 14.64 3.03 7562HM3D [10]14.29 2.83 7462HM3D-FSTTA 14.86 2.71 7563", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Experimental results on SOON dataset.", "figure_data": "MethodsOSRVal Unseen SR SPL RGSPL OSRTest Unseen SR SPL RGSPLGBE [86]28.54 19.52 13.341.1621.45 12.90 9.230.45GridMM [78]53.39 37.46 24.813.9148.02 36.27 21.254.15DUET [11]50.91 36.28 22.583.7543.00 33.44 21.424.17DUET-FSTTA 52.57 36.53 23.823.7543.44 35.34 23.234.52HM3D [10]53.22 41.00 30.694.0647.26 40.26 28.095.15HM3D-FSTTA 54.19 42.44 31.034.9348.52 42.02 28.955.20", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Experimental results on R2R-CE dataset.", "figure_data": "MethodsVal Unseen NE ↓ OSR SR SPL NE ↓ OSR SR SPL Test UnseenSeq2Seq [32]7.374032307.91362825CWTP [8]7.90382623----CM 2 [18]7.024234287.70393124Sim2Sim [31]6.075243366.17524437CWP-BERT [26] 5.745344395.89514236DREAMW [69]5.534959445.48495744GridMM [78]5.116149415.64564639ETPNav [2]4.716557495.12635548DUET [11]5.135546405.82504236DUET-FSTTA5.275848425.84554638BEVBert [1]4.576759504.70675950BEVBert-FSTTA 4.396560515.45696050", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Experimental results for different TTA strategies.", "figure_data": "MethodsREVERIE Val Unseen TL ↓ OSR SR SPL RGS RGSPLTime(ms)DUET [11]22.11 51.07 46.98 33.73 32.15 23.03104.84+ EATA [50]23.41 52.09 47.40 33.46 32.09 22.65133.12+ CoTTA [71] 24.88 52.46 47.56 31.43 31.82 21.83 3.89× 10 3+ NOTE [19] 23.15 52.85 48.28 33.98 32.77 22.98137.89+ SAR 
[51]23.47 53.26 48.00 33.92 33.49 23.09145.53+ Tent [68]24.05 49.43 46.87 31.90 30.04 20.15126.91+ Tent-INT-2 24.24 51.22 48.46 33.67 32.43 21.30124.02+ Tent-INT-3 22.52 52.28 48.60 34.65 32.66 23.12119.34+ Tent-INT-4 22.59 51.40 48.91 35.06 32.59 22.99117.26+ Tent-Stable 22.05 51.43 47.55 33.99 32.34 23.32129.22+ FSTTA22.14 56.26 54.15 36.41 34.27 23.56135.61", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation study on REVERIE dataset.", "figure_data": "ModuleREVERIE Val UnseenFast DLR Slow TL ↓ OSRSRSPL RGS RGSPL---22.11 51.07 46.98 33.73 32.15 23.03Tent--22.52 52.28 48.60 34.65 32.66 23.12✓--22.65 53.50 49.74 34.91 33.70 23.36✓✓-22.43 54.01 49.82 35.34 34.32 23.29✓✓✓22.14 56.26 54.15 36.41 34.27 23.56", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Results on validation seen set of REVERIE. ✓ 15.13 75.59 75.48 65.84 58.62 52.23 ✓ -13.40 73.16 71.78 64.18 57.05 51.18 ✓ ✓ 15.11 75.58 74.12 65.53 59.20 52.18", "figure_data": "FSTTAREVERIE Val SeenUnseen Seen TL ↓ OSRSRSPL RGS RGSPL--13.86 73.86 71.15 63.94 57.41 51.14-", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Results of validation unseen & seen on REVERIE dataset. 19.18 61.53 57.49 45.66 41.56 34.38 + Tent [68] 20.23 57.33 54.86 41.90 38.09 32.46 + EATA [50] 20.29 62.77 57.31 44.59 41.54 34.16 + SAR [51] 20.52 63.59 57.80 44.72 42.45 34.88 + FSTTA 20.48 63.36 60.23 47.96 43.58 35.65", "figure_data": "MethodsREVERIE Val Unseen & Seen TL ↓ OSR SR SPL RGS RGSPLDUET [11]", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" } ]
Junyu Gao; Xuan Yao; Changsheng Xu
[ { "authors": "Dong An; Yuankai Qi; Yangguang Li; Yan Huang; Liang Wang; Tieniu Tan; Jing Shao", "journal": "", "ref_id": "b0", "title": "Bevbert: Topo-metric map pre-training for language-guided navigation", "year": "2023" }, { "authors": "H Dongyan An; Wenguan Wang; Zun Wang; Yan Wang; Keji Huang; Liang He; Wang", "journal": "", "ref_id": "b1", "title": "Etpnav: Evolving topological planning for vision-language navigation in continuous environments", "year": "2023" }, { "authors": "Peter Anderson; Qi Wu; Damien Teney; Jake Bruce; Mark Johnson; Niko Sünderhauf; Ian Reid; Stephen Gould; Anton Van Den; Hengel", "journal": "", "ref_id": "b2", "title": "Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments", "year": "2018" }, { "authors": "Jonathan Barzilai; Jonathan Michael Borwein", "journal": "Ima Journal of Numerical Analysis", "ref_id": "b3", "title": "Two-point step size gradient methods", "year": "1988" }, { "authors": "Malik Boudiaf; Romain Mueller; Ismail Ben Ayed; Luca Bertinetto", "journal": "", "ref_id": "b4", "title": "Parameter-free online test-time adaptation", "year": "2022" }, { "authors": "Dhanajit Brahma; Piyush Rai", "journal": "", "ref_id": "b5", "title": "A probabilistic framework for lifelong test-time adaptation", "year": "2023" }, { "authors": "Jinyu Chen; Chen Gao; Erli Meng; Qiong Zhang; Si Liu", "journal": "", "ref_id": "b6", "title": "Reinforced structured state-evolution for vision-language navigation", "year": "2022" }, { "authors": "Kevin Chen; Junshen Chen; Jo Chuang; V Marynel; Silvio Savarese", "journal": "", "ref_id": "b7", "title": "Topological planning with transformers for vision-and-language navigation", "year": "2020" }, { "authors": "Shizhe Chen; Pierre-Louis Guhur; Cordelia Schmid; Ivan Laptev", "journal": "NeurIPS", "ref_id": "b8", "title": "History aware multimodal transformer for vision-and-language navigation", "year": "2021" }, { "authors": "Shizhe Chen; Pierre-Louis Guhur; Makarand Tapaswi; Cordelia Schmid; Ivan Laptev", "journal": "", "ref_id": "b9", "title": "Learning from unlabeled 3d environments for vision-and-language navigation", "year": "2022" }, { "authors": "Shizhe Chen; Pierre-Louis Guhur; Makarand Tapaswi; Cordelia Schmid; Ivan Laptev", "journal": "CVPR", "ref_id": "b10", "title": "Think global, act local: Dual-scale graph transformer for vision-and-language navigation", "year": "2022" }, { "authors": "Yibo Cui; Liang Xie; Yakun Zhang; Meishan Zhang; Ye Yan; Erwei Yin", "journal": "", "ref_id": "b11", "title": "Grounded entity-landmark adaptive pretraining for vision-and-language navigation", "year": "2023" }, { "authors": "Mario Döbler; Robert A Marsden; Bin Yang", "journal": "", "ref_id": "b12", "title": "Robust mean teacher for continual and gradual test-time adaptation", "year": "2023" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "ICLR", "ref_id": "b13", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Yunshu Du; Wojciech M Czarnecki; M Siddhant; Mehrdad Jayakumar; Razvan Farajtabar; Balaji Pascanu; Lakshminarayanan", "journal": "", "ref_id": "b14", "title": "Adapting auxiliary losses using gradient similarity", "year": "2018" }, { "authors": "Daniel Fried; Ronghang Hu; Volkan Cirik; Anna Rohrbach; Jacob Andreas; Louis-Philippe Morency; Taylor 
Berg-Kirkpatrick; Kate Saenko; Dan Klein; Trevor Darrell", "journal": "NeurIPS", "ref_id": "b15", "title": "Speaker-follower models for vision-and-language navigation", "year": "2018" }, { "authors": "Chen Gao; Xingyu Peng; Mi Yan; He Wang; Lirong Yang; Haibing Ren; Hongsheng Li; Si Liu", "journal": "", "ref_id": "b16", "title": "Adaptive zoneaware hierarchical planner for vision-language navigation", "year": "2023" }, { "authors": "Georgios Georgakis; Karl Schmeckpeper; Karan Wanchoo; Soham Dan; Eleni Miltsakaki; Dan Roth; Kostas Daniilidis", "journal": "", "ref_id": "b17", "title": "Cross-modal map learning for vision and language navigation", "year": "2022" }, { "authors": "Taesik Gong; Jongheon Jeong; Taewon Kim; Yewon Kim; Jinwoo Shin; Sung-Ju Lee", "journal": "NeurIPS", "ref_id": "b18", "title": "Note: Robust continual testtime adaptation against temporal correlation", "year": "2022" }, { "authors": "Jing Gu; Eliana Stefani; Qi Wu; Jesse Thomason; Xin Wang", "journal": "", "ref_id": "b19", "title": "Vision-and-language navigation: A survey of tasks, methods, and future directions", "year": "2022" }, { "authors": "Pierre-Louis Guhur; Makarand Tapaswi; Shizhe Chen; Ivan Laptev; Cordelia Schmid", "journal": "", "ref_id": "b20", "title": "Airbert: In-domain pretraining for vision-and-language navigation", "year": "2021" }, { "authors": "Pierre-Louis Guhur; Makarand Tapaswi; Shizhe Chen; Ivan Laptev; Cordelia Schmid", "journal": "", "ref_id": "b21", "title": "Airbert: In-domain pretraining for vision-and-language navigation", "year": "2021" }, { "authors": "Weituo Hao; Chunyuan Li; Xiujun Li; Lawrence Carin; Jianfeng Gao", "journal": "", "ref_id": "b22", "title": "Towards learning a generic agent for visionand-language navigation via pre-training", "year": "2020" }, { "authors": "Yicong Hong; Cristian Rodriguez-Opazo; Yuankai Qi; Qi Wu; Stephen Gould", "journal": "NeurIPS", "ref_id": "b23", "title": "Language and visual entity relationship graph for agent navigation", "year": "2020" }, { "authors": "Yicong Hong; Qi Wu; Yuankai Qi; Cristian Rodriguez-Opazo; Stephen Gould", "journal": "", "ref_id": "b24", "title": "Vln bert: A recurrent visionand-language bert for navigation", "year": "2021" }, { "authors": "Yicong Hong; Zun Wang; Qi Wu; Stephen Gould", "journal": "", "ref_id": "b25", "title": "Bridging the gap between learning in discrete and continuous environments for vision-and-language navigation", "year": "2022" }, { "authors": "Jingyang Huo; Qiang Sun; Boyan Jiang; Haitao Lin; Yanwei Fu", "journal": "", "ref_id": "b26", "title": "Geovln: Learning geometry-enhanced visual representation with slot attention for vision-and-language navigation", "year": "2023" }, { "authors": "Yusuke Iwasawa; Yutaka Matsuo", "journal": "NeurIPS", "ref_id": "b27", "title": "Test-time classifier adjustment module for model-agnostic domain generalization", "year": "2021" }, { "authors": "Aishwarya Kamath; Peter Anderson; Su Wang; Jing Yu Koh; Alexander Ku; Austin Waters; Yinfei Yang; Jason Baldridge; Zarana Parekh", "journal": "", "ref_id": "b28", "title": "A new path: Scaling visionand-language navigation with synthetic instructions and imitation learning", "year": "2023" }, { "authors": "Liyiming Ke; Xiujun Li; Yonatan Bisk; Ari Holtzman; Zhe Gan; Jingjing Liu; Jianfeng Gao; Yejin Choi; Siddhartha Srinivasa", "journal": "", "ref_id": "b29", "title": "Tactical rewind: Self-correction via backtracking in vision-and-language navigation", "year": "2019" }, { "authors": "Jacob Krantz; Stefan Lee", "journal": 
"", "ref_id": "b30", "title": "Sim-2-sim transfer for visionand-language navigation in continuous environments", "year": "2022" }, { "authors": "Jacob Krantz; Erik Wijmans; Arjun Majumdar; Dhruv Batra; Stefan Lee", "journal": "", "ref_id": "b31", "title": "Beyond the nav-graph: Vision-and-language navigation in continuous environments", "year": "2020" }, { "authors": "Alexander Ku; Peter Anderson; Roma Patel; Eugene Ie; Jason Baldridge", "journal": "", "ref_id": "b32", "title": "Room-across-room: Multilingual visionand-language navigation with dense spatiotemporal grounding", "year": "2020" }, { "authors": "Jungsoo Lee; Debasmit Das; Jaegul Choo; Sungha Choi", "journal": "", "ref_id": "b33", "title": "Towards open-set test-time adaptation utilizing the wisdom of crowds in entropy minimization", "year": "2023" }, { "authors": "Byounggyu Lew; Donghyun Son; Buru Chang", "journal": "", "ref_id": "b34", "title": "Gradient estimation for unseen domain risk minimization with pretrained models", "year": "2023" }, { "authors": "Jialu Li; Mohit Bansal", "journal": "", "ref_id": "b35", "title": "Improving vision-and-language navigation by generating future-view image semantics", "year": "2023" }, { "authors": "Jialu Li; Hao Tan; Mohit Bansal", "journal": "", "ref_id": "b36", "title": "Envedit: Environment editing for vision-and-language navigation", "year": "2022" }, { "authors": "Xiangyang Li; Zihan Wang; Jiahao Yang; Yaowei Wang; Shuqiang Jiang", "journal": "", "ref_id": "b37", "title": "Kerm: Knowledge enhanced reasoning for vision-and-language navigation", "year": "2023" }, { "authors": "Jian Liang; Ran He; Tieniu Tan", "journal": "", "ref_id": "b38", "title": "A comprehensive survey on test-time adaptation under distribution shifts", "year": "2023" }, { "authors": "Hyesu Lim; Byeonggeun Kim; Jaegul Choo; Sungha Choi", "journal": "ICLR", "ref_id": "b39", "title": "Ttn: A domain-shift aware batch normalization in testtime adaptation", "year": "2023" }, { "authors": "Chuang Lin; Yi Jiang; Jianfei Cai; Lizhen Qu; Gholamreza Haffari; Zehuan Yuan", "journal": "", "ref_id": "b40", "title": "Multimodal transformer with variable-length memory for vision-and-language navigation", "year": "2022" }, { "authors": "Wei Lin; Muhammad Jehanzeb Mirza; Mateusz Kozinski; Horst Possegger; Hilde Kuehne; Horst Bischof", "journal": "", "ref_id": "b41", "title": "Video test-time adaptation for action recognition", "year": "2023" }, { "authors": "Chong Liu; Fengda Zhu; Xiaojun Chang; Xiaodan Liang; Yi-Dong Shen", "journal": "", "ref_id": "b42", "title": "Vision-language navigation with random environmental mixup", "year": "2021" }, { "authors": "Ruitao Liu; Xiaohan Wang; Wenguan Wang; Yi Yang", "journal": "", "ref_id": "b43", "title": "Bird's-eye-view scene graph for vision-language navigation", "year": "2023" }, { "authors": "Chih-Yao Ma; Jiasen Lu; Zuxuan Wu; Ghassan Alregib; Zsolt Kira; Richard Socher; Caiming Xiong", "journal": "ICLR", "ref_id": "b44", "title": "Selfmonitoring navigation agent via auxiliary progress estimation", "year": "2019" }, { "authors": "Chih-Yao Ma; Zuxuan Wu; Ghassan Alregib; Caiming Xiong; Zsolt Kira", "journal": "", "ref_id": "b45", "title": "The regretful agent: Heuristic-aided navigation through progress estimation", "year": "2019" }, { "authors": "Lucas Mansilla; Rodrigo Echeveste; Diego H Milone; Enzo Ferrante", "journal": "", "ref_id": "b46", "title": "Domain generalization via gradient surgery", "year": "2021" }, { "authors": "M Jehanzeb Mirza; Jakub Micorek; Horst Possegger; Horst 
Bischof", "journal": "", "ref_id": "b47", "title": "The norm must go on: Dynamic unsupervised domain adaptation by normalization", "year": "2022" }, { "authors": "Khanh Nguyen; Debadeepta Dey; Chris Brockett; Bill Dolan", "journal": "", "ref_id": "b48", "title": "Vision-based navigation with language-based assistance via imitation learning with indirect intervention", "year": "2019" }, { "authors": "Shuaicheng Niu; Jiaxiang Wu; Yifan Zhang; Yaofo Chen; Shijian Zheng; Peilin Zhao; Mingkui Tan", "journal": "", "ref_id": "b49", "title": "Efficient testtime model adaptation without forgetting", "year": "2022" }, { "authors": "Shuaicheng Niu; Jiaxiang Wu; Yifan Zhang; Zhiquan Wen; Yaofo Chen; Peilin Zhao; Mingkui Tan", "journal": "ICLR", "ref_id": "b50", "title": "Towards stable test-time adaptation in dynamic wild world", "year": "2023" }, { "authors": "Yuankai Qi; Qi Wu; Peter Anderson; Xin Wang; William Yang Wang; Chunhua Shen; Anton Van Den; Hengel", "journal": "", "ref_id": "b51", "title": "Reverie: Remote embodied visual referring expression in real indoor environments", "year": "2020" }, { "authors": "Yanyuan Qiao; Yuankai Qi; Yicong Hong; Zheng Yu; Peifeng Wang; Qi Wu", "journal": "", "ref_id": "b52", "title": "Hop: History-and-order aware pretraining for vision-and-language navigation", "year": "2022" }, { "authors": "Yanyuan Qiao; Yuankai Qi; Zheng Yu; J Liu; Qi Wu", "journal": "", "ref_id": "b53", "title": "March in chat: Interactive prompting for remote embodied referring expression", "year": "2023" }, { "authors": "Yanyuan Qiao; Zheng Yu; Qi Wu", "journal": "", "ref_id": "b54", "title": "Vln-petl: Parameterefficient transfer learning for vision-and-language navigation", "year": "2023" }, { "authors": "Alexandre Rame; Corentin Dancette; Matthieu Cord", "journal": "", "ref_id": "b55", "title": "Fishr: Invariant gradient variances for out-of-distribution generalization", "year": "2022" }, { "authors": "Ram Ramrakhya; Eric Undersander; Dhruv Batra; Abhishek Das", "journal": "", "ref_id": "b56", "title": "Habitat-web: Learning embodied object-search strategies from human demonstrations at scale", "year": "2022" }, { "authors": "Yuge Shi; Jeffrey Seely; Philip Torr; N Siddharth; Awni Hannun; Nicolas Usunier; Gabriel Synnaeve", "journal": "ICLR", "ref_id": "b57", "title": "Gradient matching for domain generalization", "year": "2021" }, { "authors": "Jonathon Shlens", "journal": "", "ref_id": "b58", "title": "A tutorial on principal component analysis", "year": "" }, { "authors": "Junha Song; Jungsoo Lee; In So Kweon; Sungha Choi", "journal": "", "ref_id": "b59", "title": "Ecotta: Memory-efficient continual test-time adaptation via self-distilled regularization", "year": "2023" }, { "authors": "Yongyi Su; Xun Xu; Kui Jia", "journal": "NeurIPS", "ref_id": "b60", "title": "Revisiting realistic testtime training: Sequential inference and adaptation by anchored clustering", "year": "2022" }, { "authors": "Yu Sun; Xiaolong Wang; Zhuang Liu; John Miller; Alexei Efros; Moritz Hardt", "journal": "", "ref_id": "b61", "title": "Test-time training with selfsupervision for generalization under distribution shifts", "year": "2020" }, { "authors": "Licheng Hao Tan; Mohit Yu; Bansal", "journal": "", "ref_id": "b62", "title": "Learning to navigate unseen environments: Back translation with environmental dropout", "year": "2019" }, { "authors": "Yushun Tang; Ce Zhang; Heng Xu; Shuoshuo Chen; Jie Cheng; Luziwei Leng; Qinghai Guo; Zhihai He", "journal": "", "ref_id": "b63", "title": "Neuromodulated hebbian 
learning for fully test-time adaptation", "year": "2023" }, { "authors": "Junjiao Tian; Zecheng He; Xiaoliang Dai; Chih-Yao Ma; Yen-Cheng Liu; Zsolt Kira", "journal": "", "ref_id": "b64", "title": "Trainable projected gradient method for robust fine-tuning", "year": "2023" }, { "authors": "Devavrat Tomar; Guillaume Vray; Behzad Bozorgtabar; Jean-Philippe Thiran", "journal": "", "ref_id": "b65", "title": "Tesla: Test-time self-learning with automatic adversarial augmentation", "year": "2023" }, { "authors": "Riccardo Volpi; Diane Pau De Jorge; Gabriela Larlus; Csurka", "journal": "", "ref_id": "b66", "title": "On the road to online adaptation for semantic image segmentation", "year": "2022" }, { "authors": "Dequan Wang; Evan Shelhamer; Shaoteng Liu; Bruno Olshausen; Trevor Darrell", "journal": "ICLR", "ref_id": "b67", "title": "Tent: Fully test-time adaptation by entropy minimization", "year": "2021" }, { "authors": "Hanqing Wang; Wei Liang; Luc Van Gool; Wenguan Wang", "journal": "", "ref_id": "b68", "title": "Dreamwalker: Mental planning for continuous vision-language navigation", "year": "2023" }, { "authors": "Pengfei Wang; Zhaoxiang Zhang; Zhen Lei; Lei Zhang", "journal": "", "ref_id": "b69", "title": "Sharpness-aware gradient matching for domain generalization", "year": "2023" }, { "authors": "Qin Wang; Olga Fink; Luc Van Gool; Dengxin Dai", "journal": "", "ref_id": "b70", "title": "Continual test-time domain adaptation", "year": "2022" }, { "authors": "Su Wang; Ceslee Montgomery; Jordi Orbay; Vighnesh Birodkar; Aleksandra Faust; Izzeddin Gur; Natasha Jaques; Austin Waters; Jason Baldridge; Peter Anderson", "journal": "", "ref_id": "b71", "title": "Less is more: Generating grounded navigation instructions from landmarks", "year": "2022" }, { "authors": "Shuai Wang; Daoan Zhang; Zipei Yan; Jianguo Zhang; Rui Li", "journal": "", "ref_id": "b72", "title": "Feature alignment and uniformity for test time adaptation", "year": "2023" }, { "authors": "Xin Wang; Qiuyuan Huang; Asli Celikyilmaz; Jianfeng Gao; Dinghan Shen; Yuan-Fang Wang; William Yang; Wang ; Lei Zhang", "journal": "", "ref_id": "b73", "title": "Reinforced cross-modal matching and selfsupervised imitation learning for vision-language navigation", "year": "2019" }, { "authors": "Xiaohan Wang; Wenguan Wang; Jiayi Shao; Yi Yang", "journal": "", "ref_id": "b74", "title": "Lana: A language-capable navigator for instruction following and generation", "year": "2023" }, { "authors": "Zhe Wang; Jake Grigsby; Yanjun Qi", "journal": "ICLR", "ref_id": "b75", "title": "Pgrad: Learning principal gradients for domain generalization", "year": "2023" }, { "authors": "Zun Wang; Jialu Li; Yicong Hong; Yi Wang; Qi Wu; Mohit Bansal; Stephen Gould; Hao Tan; Yu Qiao", "journal": "", "ref_id": "b76", "title": "Scaling data generation in vision-and-language navigation", "year": "2023" }, { "authors": "Zihan Wang; Xiangyang Li; Jiahao Yang; Yeqi Liu; Shuqiang Jiang", "journal": "", "ref_id": "b77", "title": "Gridmm: Grid memory map for vision-andlanguage navigation", "year": "2023" }, { "authors": "Zijiao Yang; Arjun Majumdar; Stefan Lee", "journal": "", "ref_id": "b78", "title": "Behavioral analysis of vision-and-language navigation agents", "year": "2023" }, { "authors": "Chenyu Yi; Siyuan Yang; Yufei Wang; Haoliang Li; Yappeng Tan; Alex Kot", "journal": "ICLR", "ref_id": "b79", "title": "Temporal coherent test time optimization for robust video classification", "year": "2023" }, { "authors": "Tianhe Yu; Saurabh Kumar; Abhishek Gupta; Sergey Levine; 
Karol Hausman; Chelsea Finn", "journal": "NeurIPS", "ref_id": "b80", "title": "Gradient surgery for multi-task learning", "year": "2020" }, { "authors": "Longhui Yuan; Binhui Xie; Shuang Li", "journal": "", "ref_id": "b81", "title": "Robust test-time adaptation in dynamic scenarios", "year": "2023" }, { "authors": "Marvin Zhang; Sergey Levine; Chelsea Finn", "journal": "NeurIPS", "ref_id": "b82", "title": "Memo: Test time robustness via adaptation and augmentation", "year": "2022" }, { "authors": "Bowen Zhao; Chen Chen; Shu-Tao Xia", "journal": "", "ref_id": "b83", "title": "Delta: Degradation-free fully test-time adaptation", "year": "2023" }, { "authors": "Fengda Zhu; Yi Zhu; Xiaojun Chang; Xiaodan Liang", "journal": "", "ref_id": "b84", "title": "Vision-language navigation with self-supervised auxiliary reasoning tasks", "year": "2020" }, { "authors": "Fengda Zhu; Xiwen Liang; Yi Zhu; Qizhi Yu; Xiaojun Chang; Xiaodan Liang", "journal": "", "ref_id": "b85", "title": "Soon: Scenario oriented object navigation with graph-based exploration", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 348.39, 235.08, 196.72, 11.72 ], "formula_id": "formula_0", "formula_text": "s_t = \phi(I, R_t, O_t, H_t; \Theta), \quad s_t \in \mathbb{R}^{|V_t|} \quad (1)" }, { "formula_coordinates": [ 3, 361.07, 399.12, 184.04, 14.17 ], "formula_id": "formula_1", "formula_text": "L(s_t; \Theta) = -\sum_i s_{t,i} \log(s_{t,i}). \quad (2)" }, { "formula_coordinates": [ 4, 91.28, 636.77, 191.22, 22.31 ], "formula_id": "formula_2", "formula_text": "\lambda_{j,d}, u_{j,d} = \mathrm{SVD}_d\left( \frac{1}{M-1} \hat{G}_j^T \hat{G}_j \right), \quad (3)" }, { "formula_coordinates": [ 4, 325.77, 395.69, 219.35, 20.09 ], "formula_id": "formula_4", "formula_text": "\nabla_j^{(fast)} = \sum_{d=1}^{D} \Phi_d(\lambda_{j,d}) \cdot \langle \bar{g}_j, u_{j,d} \rangle u_{j,d}, \quad (4)" }, { "formula_coordinates": [ 4, 347.45, 605.13, 197.66, 14.07 ], "formula_id": "formula_5", "formula_text": "\nabla_j^{(fast)} \leftarrow \left( \nabla_j^{(fast)} \|\bar{g}_j\|_2 \right) / \|\nabla_j^{(fast)}\|_2 \quad (5)" }, { "formula_coordinates": [ 5, 77.76, 316.16, 208.6, 14.07 ], "formula_id": "formula_6", "formula_text": "\gamma_j^{(fast)} = \mathrm{Trunc}\left( (1 + \tau - |\sigma_j - \sigma|) \cdot \gamma^{(fast)} \right), \quad (6)" }, { "formula_coordinates": [ 5, 102.83, 435.4, 183.53, 14.07 ], "formula_id": "formula_7", "formula_text": "\Theta_j = \Theta_{j-1} - \gamma_j^{(fast)} \cdot \nabla_j^{(fast)}, \quad (7)" }, { "formula_coordinates": [ 5, 308.86, 256.55, 137.79, 13.52 ], "formula_id": "formula_8", "formula_text": "\epsilon_{l,d}, z_{l,d} = \mathrm{SVD}_d\left( \frac{1}{N} M_l^T M_l \right)" }, { "formula_coordinates": [ 5, 333.18, 452.78, 211.94, 26.56 ], "formula_id": "formula_9", "formula_text": "h_l = \frac{1}{\sum_{i=0}^{N-1} q^i} \sum_{n=1}^{N} q^{N-n} \cdot (\Theta_{l,0} - \Theta_{l,n}), \quad (8)" }, { "formula_coordinates": [ 5, 315.99, 550.76, 229.12, 17.27 ], "formula_id": "formula_10", "formula_text": "\nabla_l^{(slow)} = \sum_d \Psi_d(\epsilon_l, h_l) \cdot \mathrm{sign}(\langle h_l, z_{l,d} \rangle) z_{l,d}, \quad (9)" }, { "formula_coordinates": [ 5, 374.21, 693.11, 170.9, 23.23 ], "formula_id": "formula_11", "formula_text": "\Psi_d(\epsilon_l, h_l) = \epsilon_{l,d} \cdot \frac{\|h_l\|_2}{\|\epsilon_l\|_2}, \quad (10)" }, { "formula_coordinates": [ 6, 97.3, 370.93, 189.07, 14.3 ], "formula_id": "formula_12", "formula_text": "\Theta^{(l)} = \Theta^{(l-1)} - \gamma^{(slow)} \cdot \nabla_l^{(slow)}, \quad (11)" } ]
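As an illustration of how the fast-update direction of Eqs. (3)-(5) might be computed from a window of recent gradients, the following NumPy sketch decomposes the centered gradients, weights the principal components, and rescales the accumulated direction to the mean-gradient norm. The concrete weighting used for Phi_d is an assumption made for the sketch, not necessarily the paper's choice.

```python
# Interpretive NumPy sketch of Eqs. (3)-(5): turn the gradients of the last M
# navigation steps into one concordant fast-update direction. The weighting
# used for Phi_d below is an assumption, not the paper's exact choice.
import numpy as np

def fast_update_direction(grads: np.ndarray, num_components: int = 4) -> np.ndarray:
    """grads: array of shape (M, P), one flattened gradient per recent step."""
    g_bar = grads.mean(axis=0)                     # mean gradient over the window
    centered = grads - g_bar                       # centered gradient matrix
    # Principal directions of the gradient covariance (Eq. (3)) via thin SVD.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    eigvals = s ** 2 / max(len(grads) - 1, 1)      # lambda_{j,d}
    d = min(num_components, len(eigvals))
    weights = eigvals[:d] / (eigvals[:d].sum() + 1e-12)   # assumed form of Phi_d
    # Accumulate the components along the mean gradient (Eq. (4)).
    direction = np.zeros_like(g_bar)
    for w, u in zip(weights, vt[:d]):
        direction += w * np.dot(g_bar, u) * u
    # Rescale to the mean-gradient norm (Eq. (5)).
    return direction * np.linalg.norm(g_bar) / (np.linalg.norm(direction) + 1e-12)

# Example with 6 recent steps and 32 adapted parameters (e.g. a few norm layers):
# direction = fast_update_direction(np.random.randn(6, 32))
# theta = theta - 1e-3 * direction               # parameter update as in Eq. (7)
```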
2023-11-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b0", "b3", "b4", "b5", "b6", "b7", "b2", "b8", "b9", "b10", "b0", "b11" ], "table_ref": [], "text": "The fascinating accumulation of computational knowledge and capacity constantly gives rise to new methods, especially those that are nature/data-driven [1] and capable of effectively replacing or improving old solutions based on traditional mathematical and statistical structures [2]. This particularly stems from the limitations of conventional methods in solving non-linear, time-variant, and behaviorally uncertain problems, inherent in many real-life phenomena [3]. Artificial Intelligence (AI) has been proven fruitful for such problems in lots of different areas [1], from Medicine [4], Transport [5], Environmental Sciences [6], and Manufacturing [7], to Economics [8] and Finance [3]. The latter area, although apparently saturated, has recently experienced a kind of explosion in AI-related publications [9], and the same applies to Entrepreneurship in which the application of AI is particularly interesting given its still infancy stage [10].\nAlong with its popularity in the scientific community, the propulsiveness of AI in Finance and related disciplines has recently been recognized by practice. According to the \"Hired's 2023 State of Software Engineers\" survey [11], the AI industry has risen to the top of the list of booming technology jobs in 2023. This is expected considering the AI's recent popularity achieved by the public release of Dall-E 2 and ChatGPT. However, the second most common choice of technology professionals was the Financial Technology Industry (FinTech), which overtook sectors such as Healthtech and Cybersecurity.\nAs per the literature search and review, this is the first study to map and bibliometrically analyze the academic field concerning the relationship between AI, entrepreneurship, and finance, and at the same time, the first review that deals with AI methods in entrepreneurship. It aims to explore and review the scientific knowledge about AI methods applicable in the entrepreneurial finance domain. The study provides a quantitative bibliometric review of applying AI in (1) entrepreneurial finance literature, and (2) corporate finance literature with implications for entrepreneurship. In addition to standard bibliometric indicators, rigorous, comprehensive, and temporal data analysis identifies various AI methods in the subject literature, showing a chronological aspect of the subject field and suggesting future application possibilities. Rich insights into the research area produce implications for different target groups dealing with AI in entrepreneurial finance (from the scientific community and computer experts to entrepreneurs and investors in entrepreneurship).\nIn Section 2 we position the study within the existing scientific opus and elaborate the scope and objectives of the research. Section 3 details the bibliometric methodology discussing the data search and screening procedures and describing the applied bibliometric tools. A clarification of the research methodology is followed by Section 4, which presents and interprets the bibliometric results and opens horizons for discussion and research implications elaborated in Section 6. Section 5 is devoted to presenting the foundational paradigm of AI and methods connected to it, with an emphasis on the practical aspect, so as to make an effort to resolve issues of AI results expounded in [12]. 
Finally, there are conclusions of the study in Section 6." }, { "figure_ref": [], "heading": "Background of the Study", "publication_ref": [], "table_ref": [], "text": "Here, we provide an overview of the existing literature in the emerging sphere of \"AI in entrepreneurship\" (Subsection 2.1), and the relatively saturated field of \"AI in finance\" (Subsection 2.2). The Section presents the general progress of the two domains and elaborates on key research topics. The development of the application of the following important groups of AI approaches in finance is elaborated: Expert Systems, Artificial Neural Networks, Hybrid Intelligent Systems, Support Vector Machines, and Natural Language Processing. The discussion of AI methods is followed by a detailed insight into previous literature reviews in domains of interest (Subsection 2.3). An overview of related bibliometric work identifies many research gaps addressed by this study. The Section ends by defining the scope and objectives of the study arising from the identified research gaps." }, { "figure_ref": [], "heading": "Overview of AI in Entrepreneurship", "publication_ref": [ "b9", "b13", "b9", "b9", "b15", "b16", "b16", "b16", "b16", "b16", "b17", "b9", "b9", "b18", "b18", "b19", "b20", "b18", "b15", "b18", "b21", "b15", "b22", "b23", "b24", "b23", "b18", "b25", "b15", "b26" ], "table_ref": [], "text": "The era of AI in Entrepreneurship has begun recently [10]. Although pioneering scientific papers in the field appeared in the 1980s 1 , the overall output published in the first 20 years is small, with only five papers in the Web of Science Core Collection by 2003. After a period of slow growth between 2003-2016, from 2017, the domain is experiencing a kind of publication explosion 2 [14] -not only quantitatively but also in terms of interests and topics arising from \"the reciprocity of the co-evolving fields of entrepreneurship research and practice\" [10] (p. 529). As influential scholars observe [10,16,17], AI and technologies in general, enrich and transform entrepreneurship as a field of research but also change real-world entrepreneurial activity. Nambisan (2017) [17] recognizes the dual transforming reflection of the proliferation of new technologies in shaping entrepreneurial pursuits. First, technological progress expands the boundaries of entrepreneurial processes and outcomes, making them more fluid and porous (e.g., advances in Financial Technology (FinTech) enrich entrepreneurial finance sources available without spatial and temporal boundaries in a specific entrepreneurial ecosystem) [17]. Second, technology leads to a shift in the focus of an entrepreneurial agency, creating dynamic sets of agents with different characteristics, aspirations, and goals. An example is the development of new infrastructure such as crowdfunding systems that stimulated the birth of more collective forms of entrepreneurial initiative [17,18]. Such disruptive changes and novelties \"on the ground\" are reciprocally reflected in the agenda of entrepreneurship research -it is not only boosted by new AI research tools and methods but also gets completely new targets that are studied with these methods [10].\nRegarding decision-making support and business performance improvement, AI in Entrepreneurial Finance is one of the empirically fruitful research directions [10,19]. 
For example, AI methods have recently been applied in the codification of the communication behavior of entrepreneurs and the analysis of crowdfunding presentation campaigns [19][20][21]. Hence, AI capabilities have great potential for improving the communication strategies of entrepreneurs and rationalizing the decisions of investors [19]. The application of AI is also evident in Entrepreneurial Finance Management. AIblockchain hybrid platforms support new ways of managing the financial accounting of an entrepreneurial venture [16] and change audit processes by reducing the need for traditional audit procedures such as sampling and confirmations [19,22]. In fact, in many aspects of business management AI automation tools bring a completely new paradigm of business scaling [16]. Predicting an entrepreneurial venture's success (failure) is another domain with AI applications [23][24][25]. Due to prediction accuracy, handling non-linear effects in data, and ambiguity detection, AI techniques are promising compared to traditional prediction methods [24]. The same applies to the segment of business planning of an entrepreneur, especially in activities such as sales forecasting, product pricing [19,26], and predicting the reaction of customers to price changes [16].\nDespite the increasingly frequent application of AI in entrepreneurship research and practice, and numerous fresh scientific topics, the research focus on the types of AI methods applicable in the field and the possibilities of these methods is relatively weak. Given the immaturity and newness of the area, this is not surprising. The AI-entrepreneurship intersection is currently mostly dealt with by entrepreneurship scholars, who have yet to seek partnerships with researchers who are experts in AI [27]. However, the time for multidisciplinary collaboration that would produce more technical scientific insights on AI in entrepreneurship is right in front of us." }, { "figure_ref": [], "heading": "Overview of AI in Finance", "publication_ref": [ "b9", "b8", "b28", "b29", "b30", "b31", "b32", "b33", "b34", "b35", "b36", "b37", "b38", "b39", "b40", "b41", "b42", "b43", "b44", "b41", "b44", "b45", "b46", "b47", "b48", "b49", "b50", "b51", "b52", "b53", "b54", "b55", "b56", "b57", "b58", "b59", "b60", "b61", "b62", "b62", "b8", "b8", "b63", "b64", "b65", "b66", "b67", "b19", "b68", "b69", "b70", "b2", "b27", "b71", "b2", "b30", "b2", "b72", "b73", "b74", "b75", "b2", "b74", "b75", "b76", "b75", "b76", "b2", "b75", "b79", "b80", "b81", "b82", "b83", "b84", "b85", "b75", "b81", "b2", "b75", "b84", "b86", "b8", "b2", "b87", "b88", "b88", "b89", "b88", "b88", "b90", "b88", "b91" ], "table_ref": [], "text": "In contrast to AI research in Entrepreneurship [10], AI in Finance is a relatively old scientific domain [9]. The inspection of significant scientific databases (Google Scholar, Scopus, Web of Science) shows that the first relevant journal articles on the AI-finance intersection appeared in the 1970s (Google Scholar) and 1980s (Scopus), and were mostly related to the application of AI in banking and securities investment problems 3 . Some of the early covered topics were credit card application assessment [29], predicting the firm's financial health [30], credit evaluation [31][32][33][34], stock portfolio selection [35,36], stock market behavior prediction [37,38], and assigning ratings to corporate bonds [39]. 
Since the mid-1980s a small group of authors has outlined the application of expert systems in accounting and auditing [40][41][42][43][44][45], solving problems such as auditor's assessment of uncollectible accounts [42] and assessment of company solvency [45]. The development of the domain continued in the 1990s and 2000s when a larger number of relevant applications of AI methods appeared in corporate bankruptcy prediction [46][47][48][49][50][51][52][53][54] and financial fraud detection (accounting fraud [55], fraud in credit approval process [56,57], and credit card fraud [58][59][60][61]).\nAdditionally, the field of financial forecasting based on sentiment analysis began to develop, with a rise following the notable publication of Das and Chen in 2007 [62]. A few years later, the highly cited work of Pan (2012) paved the way for further advances in financial distress models [63]. Driven by the fourth industrial revolution, AI-finance research has experienced a strong proliferation that continues to this day4 [9]. Recently, new topic niches are emerging, such as AI in the context of FinTech innovations (cryptocurrencies and blockchain, crowdfunding, peer-to-peer lending, financial roboadvising, and mobile payment services) [9,[64][65][66][67][68], and predicting financing success using the new, FinTech funding sources [20,[69][70][71].\nExpert Systems (ES) are the first form of AI in finance, with initial application in 1977 [3,28]. Despite the hardware limitations of the time, by the mid-1990s they were pioneered in fields such as finance, investment, taxation, accounting, and administration [72] (as cited in [3]). A more notable work from that time was published by Shaw and Gentry in 1988 [31], developing the MARBLE system intended to assess the riskiness of business loan applicants. Although ES proved to be more practical compared to conventional statistical techniques, they failed in front of other AI methods such as Artificial Neural Networks (ANN) and Hybrid Intelligent Systems (HIS) -they were only capable of prescription, but not of prediction and improvement of the result by experience, and were not useful for identifying the non-linear relationships [3].\nThese deficiencies are eliminated by ANN, with the beginning of its application in the bond ranking in the 1980s [73,74]. ANNs are \"non-parametric\" methods that are data-driven, self-adaptive, and compared to parametric methods, are less sensitive to model misspecification. They are suitable for models without a priori assumptions about the data, which can be non-linear, and discontinuous [75,76]. These features have proven to be a huge strength over sophisticated statistical techniques in problems such as bankruptcy prediction and stock market prediction [3,75,76], characterized by a complex set of highly correlated, nonlinear, unclearly related variables [77].\nDespite the advantages, some shortcomings of ANNs have also emerged. The most popular in Finance, Back-Propagation Neural Network (BPNN) needs a large number of control parameters, hardly gives a stable solution, and suffers from potential overfitting, leading to poor generalization ability to the out-of-sample data [76]. This is the reason why it is often combined with classical statistical techniques or other intelligent methods such as ES, Fuzzy Logic, Genetic Algorithms (GAs), and Robotics [77]. 
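As a hedged illustration of the kind of back-propagation network discussed above (synthetic data, not a reproduction of any cited study), the following sketch trains a small feed-forward classifier on stand-in financial ratios for a failure-prediction task, using early stopping to limit the overfitting risk noted for BPNNs.

```python
# Illustrative sketch only (synthetic data): a small back-propagation network
# classifying firm failure from financial ratios, with early stopping to curb
# the overfitting problem noted for BPNNs.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))        # stand-ins for liquidity, leverage, profitability, ...
y = (X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 2] ** 2
     + rng.normal(scale=0.5, size=600) < 0).astype(int)   # 1 = "failed"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
bpnn = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), early_stopping=True,
                  max_iter=2000, random_state=0),
)
bpnn.fit(X_tr, y_tr)
print("hold-out accuracy:", round(bpnn.score(X_te, y_te), 3))
```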
Hybrid Intelligent Systems (HIS) aim to use the advantages of complementary methods and minimize their shortcomings, and are capable of achieving multi-functionality, technical enhancement, and multiplicity of application tasks. Although their performances are very sensitive to the right choice of integration methods and the problem of parameterization, they have generally proven to be more powerful in solving numerous problems in credit evaluation, portfolio management, and financial forecasting and planning -especially various neuro-fuzzy systems and combinations of NNs, Fuzzy Logic, and GAs [3].\nIn addition to hybridized methods, improved generalization performance came with the Support Vector Machine (SVM) in 1998. Compared to BPNN, SVM mainly 5shows significantly or at least slightly better results in financial time series forecasting [76,[80][81][82][83], credit rating analysis [84,85], and financial distress evaluation [86]. The advantage of SVMs is the implementation of the structural risk minimization principle, minimizing an upper bound of the generalization error, in contrast to previous ANN algorithms based on the empirical risk minimization principle [76,82]. \"Another merit of SVMs is that the training of SVMs is equivalent to solving a linearly constrained quadratic programming\" -resulting in a unique solution, optimal solution, without the problem of converging to a local minimum which may be a drawback of BPNN [3,76,85,87]. All of this has made SVM one of the most common AI methods for solving a range of (especially predictive) problems in Finance [9], whether it is used as a single method or a component of HIS [3].\nIn the last fourteen years, there has been a growing popularity of Natural Language Processing (NLP) in Finance [88,89]. Proponents of the methods argue that a lot of data can hardly be expressed numerically, without losing the holistic meaning, endless variety and nuances, and unstructured text documents usually contain more timely information than quantitative financial sets. Moreover, text from financial news, social networks, or auditor's reports includes opinions, connections, and emotions, and all of this can be useful in a series of financial classification and prediction problems [89,90]. According to Fisher et al. (2016), the most commonly used AI tools for NLP-based research in Finance are SVMs, followed by Naive Bayes (NB), hierarchical clustering, statistical methods, and Term Frequency -Inverse Document Frequency (TF-IDF) weighting. In addition to generating and validating prototype taxonomies and thesauri, NLP has shown promising results in corporate reports readability studies, and especially in topics such as financial fraud detection, and recognizing stock price movement [89]. However, even nine years ago the domain was still in its infancy [91]. Some of the identified research questions at that moment were to what extent in accounting taxonomies and thesauri problems NLP can survive independently, without the need for manual interventions, and how to overcome the problem of small data samples, distributed location of text documents and the changing nature of the accounting vocabulary [89]. More recent, future developments have been recognized in the topic analysis of accounting disclosures and the proliferation of deep learning research in, for example, quantifying the diversity of a firm's operations and locations, and labeling different types of corporate risks [92]." 
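To illustrate the TF-IDF-plus-SVM pattern that dominates NLP-based financial classification, the sketch below assembles such a pipeline on a few invented disclosure snippets; the texts, labels, and query are illustrative placeholders rather than data from the studies cited above.

```python
# Illustrative sketch (invented snippets and labels): the TF-IDF + SVM pipeline
# commonly used in NLP-based financial classification, e.g. flagging disclosure
# language associated with distress or fraud.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "going concern doubt and covenant breach disclosed",
    "restated earnings after material weakness in internal controls",
    "steady revenue growth and strong operating cash flow",
    "dividend increased on record quarterly profit",
]
labels = [1, 1, 0, 0]   # 1 = distress/fraud-related wording, 0 = neutral/positive

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)
print(clf.predict(["auditor notes substantial doubt about going concern"]))
```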
}, { "figure_ref": [], "heading": "Research Gaps and Objectives of the Study", "publication_ref": [ "b13", "b13", "b18", "b92", "b93", "b76", "b76", "b2", "b2", "b94", "b94", "b95", "b95", "b89", "b89", "b96", "b97", "b97", "b98", "b98", "b99", "b74", "b100", "b8", "b8", "b101", "b101", "b102", "b102", "b1", "b1", "b103", "b103", "b104", "b104", "b8", "b13", "b92", "b93", "b96", "b98", "b101", "b103", "b104", "b94", "b99", "b96", "b97", "b98", "b93", "b8", "b101", "b13", "b92", "b8", "b8", "b101", "b101", "b104", "b104", "b98", "b96", "b103", "b103", "b8", "b8", "b1", "b98", "b101", "b101", "b104", "b104", "b98", "b8", "b8", "b101", "b104", "b96", "b98", "b98", "b103", "b1", "b1", "b13", "b92", "b93", "b13", "b13", "b92", "b93", "b93", "b105", "b106", "b108", "b108", "b109", "b108", "b109", "b98", "b110", "b111", "b112" ], "table_ref": [ "tab_0", "tab_1" ], "text": "When it comes to the literature review opus, the AI-entrepreneurship domain has been developing recently, with papers published by Li et al. (2022) [14], Giuggiol and Pellegrin (2023) [19], Blanco-González-Tejero et al. (2023) [93], and Gupta et al. (2023) [94]. In the AI-finance sphere notable review contributions are made by a plethora of authors (e.g. Wong and Selvi (1998) [77], Bahrammirzaee (2010) [3], Fethi and Pasiouras (2010) [95], Omoteso (2012) [96], Das (2014) [90], de Prado et al. (2016) [97], Alaka et al. (2018) [98], Shi and Li (2019) [99], Königstorfer and Thalmann (2020) [100], Kumar et al. (2021) [75], Thakkar and Chaudhari (2021) [101], and Goodell et al. (2021) [9]). More recent, review works in this area are also produced by Ahmed et al. (2022) [102], Gómez et al. (2022) [103], Nazareth and Reddy (2023) [2], Chaklader et al. (2023) [104], and Chen et al. (2023) [105].\nOverall, most occurring review papers are systematic literature reviews, and there are several bibliometric analyses [9,14,93,94,97,99,102,104,105]. A large part of the reviews deals with a narrower niche topic, such as AI in banking [95,100], AI in bankruptcy prediction [97][98][99], or AI in sustainable entrepreneurship [94], while two bibliometric papers seek to give a holistic outlook of the AI-finance field [9,102], but with methodological limitations and without or with an incomplete review of AI methods. The same applies to the bibliometric papers from the AI-entrepreneurship domain [14,93]. In terms of what we were able to gather from the literature, a review focused on the intersection of AI and entrepreneurial and/or corporate finance has not been conducted so far. In order to demonstrate all identified relevant gaps in existing knowledge, below are given deeper insights into the study-related bibliometrics. First, bibliometric studies from the AI-finance intersection are elaborated, followed by a discussion of bibliometric research from the AI-entrepreneurship domain.\nBibliometric studies on the AI-finance intersection are conducted by Goodell et al. (2021) [9], Ahmed et al. (2022) [102], Chen et al. (2023) [105], Shi and Li (2019) [99], do Prado et al. (2016) [97], and Chaklader et al. (2023) [104]. With the aim of reviewing the entire AI-finance domain, Goodell et al. (2021) [9] carried out cocitation and bibliometric-coupling analyses of 283 papers from Scopus published in the period 1986-April 2021. 
In addition to the generally small sample size 6 [106], they consider only material from the subject areas \"Business, management and accounting\", \"Economics, econometrics and finance\", \"Social sciences\", and \"Arts and humanities\", with a huge number of target papers published in \"Computer Science\" missing [2,99]. Although they use a rich array of search terms related to AI methods, the number and variety of used search terms from the field of finance are limited (that part of the search query included only the following: \"finance\" OR \"financ* manag*\"). The biggest contribution of the study is a thematic overview of the chronological development of the area and the identification of eight thematic clusters and three broad thematic areas: \"1) portfolio construction, valuation, and investor behavior; 2) financial fraud and distress; and 3) sentiment inference, forecasting, and planning\". Although the authors provide a general review of used AI methods in the analyzed finance research, the focus on methods is narrow, and the given method categorization is not adequate (for example, the general category \"AI methods\" is treated as a separate group of methods, in relation to machine learning methods or deep learning methods, and no insight is given as to which methods are hidden under the general category).\nThe entire AI-finance domain is also bibliometrically examined by Ahmed et al. (2022) [102]. The analysis was carried out on a sample of 348 papers from Scopus, considering material only from journals categorized in the first or second quartile (Q1 and Q2 journals as per the Scopus ranking 2021). The target papers are additionally limited to the time of publication between 2011 and 2021, and just as in the previously discussed study, the material belonging to the area of \"Computer Science\" is not included in the analysis. Moreover, only journals in the finance field were considered. The main results refer to the identification of relevant publications in six topic research streams: (1) bankruptcy prediction and credit-risk assessment, (2) stock price prediction, portfolio management, volatility, and liquidity, (3) prediction of the prices of oil, gold, and agriculture products, (4) anti-money laundering, anti-fraud detection, and risk management, (5) behavioral finance, and (6) big data analytics, blockchain, and data mining. The AI methods used in the analyzed papers were not identified or considered in any form, which the authors themselves state as a study limitation, suggesting future research endeavors.\nA recent study by Chen et al. (2023) [105] narrows the AI domain, considering only Explainable Artificial Intelligence (XAI) in finance. The bibliometric dataset was taken from the Web of Science Core Collection (WoSCC) and covers the period from 2013 to 2023. The results identified two main groups of research: (1) application-oriented research by XAI in finance, and (2) innovation-oriented studies with a focus on technology development. The contribution was made by the identification of some topic research trends and prospects. Despite the large sample of documents (N=2733), the study has methodological limitations (the analysis was performed on all search results after applying search filters, without performing any data screening and data cleaning procedures). 
Additional shortcomings of the study are similar to the previously mentioned one (the dataset includes only papers from finance and economics; the results are thematically focused without giving any insight into AI methods in finance).\nShi and Li (2019) [99] start from a broader view of methods and narrower topic niches, trying to examine the application of various intelligent techniques (statistical, operational research and AI methods) in just one specific financial problem: corporate bankruptcy prediction. The bibliometric study was conducted on a sample of 413 publications from the WoSCC database for the period from 1968 to 2018. The results indicate the propulsiveness of the field after the financial crisis of 2007-2008 and demonstrate relatively weak cooperation among the authors. An important conclusion is that there was an approximately equal representation of papers in computer science journals and those from management and finance, supporting the seriousness of the discussed limitation of studies by Goodell et al. (2021) [9], Ahmed et al. (2022) [102], and Chen et al. (2023) [105]. Moreover, the highest representation of papers was found in Expert Systems with Applications (ESA) which is a computer science and engineering journal. When it comes to intelligent techniques, the study contributes in terms of identifying the most common methods in bankruptcy prediction problem (Neural Network, Multivariate Discriminant Analysis) and those less represented (Fuzzy, Rough Set, Data Mining, Adaboost, K-Nearest Neighbors, Bayesian Network).\nA similar topic is considered by do Prado et al. (2016) [97]. Their bibliometric analysis aimed to identify the application of multivariate data analysis techniques to credit risk and bankruptcy prediction problems. Therefore, the focus of the analysis was not on AI methods, but on 17 techniques of multivariate analysis, among which only ANNs are from the AI domain. The subject of the analysis was 393 scientific articles from the Web of Science (main collection) published between 1968 and 2014. The main conclusions are similar to those of Shi and Li (2019) [99] pointing to the papers' proliferation since 2008 and the multidisciplinarity of the field covering not only Business and Economics, but Computer Science, Statistics, Mathematics, Engineering, and so on. Additionally, the study notes the widespread use of advanced AI techniques (primarily ANNs) since the 1990s and the increasing popularity of combining ANNs with traditional statistical techniques (Logistic Regression and Discriminant Analysis).\nChaklader et al. ( 2023) [104] deals with AI in the context of the progress of Financial Technology (FinTech) companies. The sample includes 302 Scopus indexed papers from the period 2014 to 2022. Based on keyword analysis, the authors identify several trending topics and future research directions. The paper in no way elaborates on the applied AI methods in the FinTech topic niche.\nFinally, it is important to highlight a study on machine learning (ML) in finance by Nazareth and Reddy (2023) [2]. Although it is primarily a systematic literature review on a small sample of papers (N=126), the authors also provide bibliometric data on the field, including certain aspects of bibliometric analysis. The contribution of the study is reflected in the compilation of progress in ML in six different financial domains (stock markets, portfolio management, forex markets, bankruptcy and insolvency, financial crisis, and cryptocurrency). 
The study reviews more than ten ML models, with implications for their applicability in specific financial fields. However, it is not primarily bibliometric research and does not cover the entire AI domain and its intersection with entrepreneurship. Besides, the focus is on the recent literature (published in 2015 and later), without the intention of providing insight into the chronological development of the field.\nIn the AI-entrepreneurship domain, the following bibliometric studies were conducted: Li et al. (2022) [14], Blanco-González-Tejero et al. (2023) [93], and Gupta et al. (2023) [94]. Li et al. (2022) [14] carried out research on the cross-field of AI and entrepreneurial management. The analysis included only 123 papers from the Web of Science Core Collection published between 1987 and February 2021. The main results refer to the identification of thematic clusters from which ten research hotspots emerged (e.g. the impact of digitalization on different industries, the impact of AI on the development trend of enterprises, enterprise business intelligence (BI) construction and application, and others). Importantly, the study does not focus on the domain of entrepreneurial finance and does not provide any overview of AI methods in entrepreneurship.\nBlanco-González-Tejero et al. (2023) [93] analyzes 520 scientific papers from the Dimensions.ai database published until July 2022. Their research does not refer to AI methods in any form and deals with the role of AI in entrepreneurship in general. Gupta et al. (2023) [94] deals with the literature on the role of AI in sustainable entrepreneurship. The bibliometric analysis includes 482 articles from Scopus published between 1994 and 2022. The authors identify trending research topics in the field of AI-sustainable development. A deeper description of topic areas, as well as a review of AI methods, is not provided. The main conclusion of the paper is that sustainable development is a trendy scientific topic with a growing number of articles and citations.\nIn summary, no bibliometric study, as far as we were able to find, provides an overview of the entire spectrum of AI methods in finance or entrepreneurship. An exception is the study by Goodell et al. (2022) which takes a rough look at the groups of AI methods in finance, with certain technical shortcomings in the grouping strategy. Almost all conducted bibliometric studies are based on a relatively small sample of documents [106], and some of them have other methodological limitations (e.g., the exclusion of Computer Science literature from the dataset, limited application of available bibliometric tools, not performing data screening procedures before analysis, etc.). Crucially, to our knowledge, none of the existing studies cover the intersection of AI, entrepreneurship, and finance to give implications for entrepreneurial finance.\nConsidering research gaps, the present study aims to explore and review the conceptual, intellectual and social structure of scientific knowledge [107] on the intersections of AI and two economics fields: entrepreneurship and finance. Briefly speaking, the focus of the research is (1) AI-entrepreneurial finance literature, and (2) AI-corporate finance literature with implications for entrepreneurship. In the context of the study, entrepreneurial finance is defined as \"the art and science of investing and financing entrepreneurial venture\"7 (p. 9) [109]. 
According to the definition, there are two fundamental aspects of entrepreneurial finance: investing, i.e. choosing the direction of an entrepreneur's investment (purchase of physical assets, entering a new market, etc.), and financing, i.e. securing money for the realization of the investment plan [109]. Some of the important topics within the domain are: sources of funding for entrepreneurs (bank lending, equity capital, crowdfunding, business angels, venture capital, etc.), investor-entrepreneur negotiation strategies, business planning (including financial planning and forecasting), understanding and analyzing financial statements, and new venture and small business valuation [110]. The concept is not limited to startups but also covers intrapreneurship, acquisitions of existing businesses, and new entrepreneurial ventures within corporations or family firms [109]. Moreover, some authors extend the concept to the financial and investment activities of all small and medium-sized enterprises (SMEs), distancing it from corporate finance as the "financial decision-making of large corporate organizations" (p. 4) [110].
In accordance with the protocol of previous studies [99, 111-113] and the research subject, the specific objectives of the study are defined as follows: 1. To determine the publication productivity and evolution of scientific knowledge on the intersection of AI-entrepreneurship-finance. 2. To identify the most influential articles and the most prolific academic scholars, journals, institutions, and countries on the intersection of AI-entrepreneurship-finance, and to determine the degree of academic cooperation and multidisciplinarity in the knowledge field. 3. To determine and interpret prominent topics on the intersection of AI-entrepreneurship-finance, and to identify the chronological development of prominent topics. 4. To determine the AI methods (methods, algorithms, techniques) used in the study of certain topics at the intersection of AI-entrepreneurship-finance, so as to establish the current state and project future possibilities. 5. To reach a more profound insight into the research field, and to reflect on the emerging research directions and the promising AI methods for future applications in entrepreneurial finance, with implications for the scientific community, computer experts, entrepreneurs, and investors in entrepreneurship. 6. To give recommendations for the future improvement of bibliometric methodology.
In Tables 1 and 2 we compare the characteristics of the present study against related bibliometric work. As can be seen, several research gaps are addressed by our study: the absence of bibliometric research on AI methods with implications for entrepreneurial finance, the small samples of documents in previous studies, limitations related to the exclusion of "Computer Science" from the sample, and insufficient focus on AI methods."
}, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "Having presented the introductory matters, we now describe the research methodology, including a short review of bibliometrics. The broad methodology themes are as follows: a) preliminary search and screening of the research field, b) data acquisition and preparation for bibliometric analysis, and c) bibliometric data analysis; after these, we proceed to the research results produced according to the described methodology."
}, { "figure_ref": [], "heading": "Bibliometrics as a Research Method", "publication_ref": [ "b105", "b113", "b114", "b114", "b114", "b113", "b115", "b105", "b113", "b115", "b106", "b106", "b105", "b116", "b106", "b105", "b105", "b105", "b105", "b106" ], "table_ref": [], "text": "Despite its recent popularity, Bibliometrics is not a new research method [106]. The first documented attempt to use the method is related to statistical research on subject scattering in publications by Campbell in 1896. The application of the method was also recorded in 1917 when Cole and Eales statistically analyzed the growth of literature in comparative anatomy [114]. Afterward, in 1922 Hulme used the term statistical bibliography to describe \"the illumination of the processes of science and technology by means of counting documents\" (p. 348) [115]. Considering statistical bibliography as an unsatisfactory and insufficiently accepted term, Pritchard (1969) [115] proposed bibliometrics as a new name for the subject, marking it as \"the application of mathematics and statistical methods to books and other media of communication\" (p. 348). Since then, the term bibliometrics has been widely accepted, and numerous definitions of it have appeared [114,116]. Common to almost all definitions is that it is a quantitative methodology based on statistical and mathematical techniques used to measure various constituents of certain forms of written communication (e.g., authors, locations, institutions, topics, etc.) [106,114]. With such a definition, bibliometrics is distanced from scientometrics, which is a similar but broader concept covering \"all quantitative aspects of the science of science\" (p. 377) [116].\nAccording to Aria and Cuccurullo (2017), bibliometrics for scientific mapping enables insight into a specific research field with regard to its: (1) intellectual structure (identification of the knowledge base and influence of certain works and authors in the scientific community), (2) conceptual structure (finding major themes and trends), and (3) social structure (diagnosing interactions among researchers, institutions, and countries). The method generally gives a static picture of the research field at some point in time, and the inclusion of temporal analysis in data processing can provide insight into the chronological evolution of the field [107]. Data for bibliometric analysis are suitable if they are objective in nature (e.g., number of papers, number of citations, etc.) and massive in quantity (the sample of documents must be greater than 500, and it is best if it is over 1000) [106]. The subject of bibliometric analysis can be different categories of materials, from books, articles, theses, and patents to the socalled \"grey\" literature [117]. Through a reliable, objective, and repeatable review of a large body of information, it provides the \"big picture\" of a scientific corpus [107]. These are some of the main features by which bibliometric analysis differs from a systematic literature review based on a qualitative, manual analysis of a smaller set of publications. Bibliometrics should also be distinguished from meta-analysis, which is a quantitative review method, but with different goals (to summarize evidence of relationships between variables in a research field) [106]. Donthu et al. 
(2021) suggest four steps in the implementation of bibliometrics: (1) \"define the aims and scope of the bibliometric study, (2) choose the techniques for bibliometric analysis, (3) collect the data for bibliometric analysis, and (4) run the bibliometric analysis and report the findings\" (pp. 291-293). In order to carry out the latter, researchers have at their disposal two main categories of bibliometric tools: performance analysis and scientific mapping. Main tools can be enriched with network analyses which include a number of network metrics, as well as clustering and visualization [106]. The bibliometric analysis in the present study is grounded on general recommendations for the bibliometric methodology procedure [106,107] and is based on the application of well-known and established analytical tools, as described in Subsection 3.4. Before that, a detailed elaboration of the procedures of the data search, collection, and screening is given. The bibliometric methodology of the study is shown in the flowchart 1." }, { "figure_ref": [], "heading": "Preliminary Search and Initial Screening of the Research Field", "publication_ref": [ "b117" ], "table_ref": [ "tab_3", "tab_4" ], "text": "The bibliometric analysis began by defining the general objective and scope of the study (Subsection 2.3), which was followed by a preliminary search and initial screening of the research field. The purpose of the preliminary search phase was to define an appropriate set of keywords and search queries as key inputs for the final data collection phase (Subsection 3.3). Proper selection of search keywords is crucial since even a small variation in terms and queries changes the data set, potentially generating different bibliometric results [118]. The preliminary search phase was conducted by searching the Google Scholar database using broadly defined keywords generated by the researchers. Google Scholar was selected as suitable for the preliminary search since it is the largest scientific database that offers the widest insight into a specific scientific field. In addition, the Web of Science Core Collection (WoSCC) was searched to ensure insight into the relevant literature of the area. From March 16, 2023, to March 29, 2023, a total of 1427 documents from Google Scholar (1053 after removing duplicates) and 419 documents from WoSCC were searched and screened. The search was conducted using different combinations of keywords shown in Tables 3 and4. Since it was a preliminary phase to get a general overview of the area, the initial screening included different types of documents (journal articles, books, conference papers, theses, and reports). By reading the titles, keywords and abstracts, 266 relevant documents were selected. The selected documents were further examined in detail by re-reading abstracts and keywords and, in some cases, by inspecting the full text of the document in order to ensure the validity of the research.\nA preliminary search and screening of 266 relevant documents indicated that the intersection of AI-entrepreneurship-finance is a propulsive area with a large number of recent publications (from the last 3 years) mainly within two research areas: Computer Science and Business. The scope of the research field was assessed as sufficient for conducting a bibliometric analysis, and several intertwined topic niches (or branches) were found:\n1. 
AI as support for entrepreneurial financing decisions 1.a Investment success/business performance and entrepreneur's behavior and presentation 1.b Sources of entrepreneurial finance 1.c Valuation of an entrepreneurial venture/Prediction of performance and/or bankruptcy 2. FinTech in the context of entrepreneurship 3. Management of entrepreneurial finance 3.a AI and accounting, auditing and detecting financial frauds 3.b Financial planning and other aspects of financial management Within each topic niche, different combinations of keywords and search queries were defined as inputs for the next stage of the literature search. The list of keywords and queries is shown in Tables 5, 6 and 7 with the ordinal numbers of the corresponding topic niches, according to the enumeration above." }, { "figure_ref": [], "heading": "Searching, Collecting, and Screening the Data for Bibliometric Analyses", "publication_ref": [ "b118", "b119", "b98", "b120" ], "table_ref": [ "tab_7", "tab_8" ], "text": "The data for bibliometric analysis was gathered from the Web of Science Core Collection database (WoSCC). Clarivate PLC's database was selected for the study as one of the most relevant and comprehensive collection of peer-reviewed scientific material. The bibliographic data and metadata it provides are suitable and sufficiently comprehensive for bibliometric analysis and can be exported in the appropriate format. The export of the data was carried out on May 5, 2023. The search was performed using the criteria Topic (searches title, abstract, author keywords, and Keywords Plus), and the following search filters were applied: 1. Document Type: Article, Review Article, Early Access 2. Language: English 3. Web of Science Index: Science Citation Index Expanded (SCI-EXPANDED), Social Sciences Citation Index (SSCI), Arts & Humanities Citation Index (A&HCI), and Emerging Sources Citation Index (ESCI) By limiting the search to the WoSCC and the listed indexes and document types, data was retrieved only from \"journals that demonstrate high levels of editorial rigor and best practice\" [119]. Papers published in conference proceedings and other forms of scientific material were excluded from the data set. The goal was to form a corpus with only peer-reviewed and highest-quality scientific work [120]. Furthermore, no filter on the time range was placed (the data includes papers in the database from the year of publication of the first paper to the date of the search). Also, no filter on research areas was used. Such an approach ensured a complete collection of data across different periods and areas, providing a temporal and disciplinary comprehensive view of the research field [99].\nThe search was conducted using a large number of keywords and 11 different search queries (Tables 5, 6 and7). The query syntax was adapted to the query formatting rules of the Web of Science. Accordingly, the search was based on operators: OR (to find records containing any of the search terms), AND (to find records containing all of the search terms), and NOT (to exclude records containing certain words). Wherever it was meaningful, the wildcard character \"*\" was applied, in order to control the retrieval of plurals, variant spellings etc. Quotation marks were used to search for exact phrases such as \"artificial intelligence\" [121]. The selection of keywords related to AI was aimed at covering as many different AI methods as possible. 
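For illustration only, a hypothetical Topic query following the syntax rules described above (this is not one of the study's actual search strings, which are listed in Tables 5, 6 and 7) could look as follows:

TS=(("artificial intelligence" OR "machine learning" OR "neural network*" OR "deep learning") AND (entrepreneur* OR "small business" OR SME*) AND (financ* OR bankrupt* OR "credit scoring") NOT "stock market")

Here TS= restricts the search to the Topic fields (title, abstract, author keywords, and Keywords Plus), the asterisk wildcard retrieves plurals and spelling variants, and quotation marks force exact phrase matching.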
At the same time, keywords that could retrieve papers based solely on statistical methods, without an AI component, were avoided (examples are keywords such as "text mining" or "data mining"); therefore, a paper retrieved through such terms was included only if it also mentioned AI, and excluded otherwise.
The data search yielded a total of 4644 results, which were subjected to a screening procedure. The screening was carried out by looking at the title, keywords, and abstract of each paper. In contrast to a systematic literature review, in the bibliometric methodology screening of the abstract and the full text is carried out only if necessary [122]. However, as a precaution, abstracts were read for almost all of the 4644 papers, excluding only those documents whose titles made the decision overwhelmingly clear. The screening strategy was based on a broad understanding of Entrepreneurial Finance, taking into account its overlap with related disciplines such as Corporate Finance, Management, Business Planning, Accounting, and Auditing. Therefore, in addition to entrepreneurial finance literature, the corpus of data for analysis included the body of literature related to corporations and the financial sector that carries implications for the financing of entrepreneurship or the financial management of new entrepreneurs and small and medium-sized enterprises. Scientific material with implications only for financial institutions and financial markets was excluded from consideration (examples are studies dealing with banks, financial markets, or insurance companies on topics such as bank failure prediction, stock price movement forecasting, or credit card fraud detection).
The screening procedure of 4644 records resulted in a set of 2694 relevant documents. The data set was then reduced by removing duplicates and retracted papers.
These operations created a final corpus of 1890 documents which were subjected to analysis."
}, { "figure_ref": [], "heading": "Bibliometric Data Analyses", "publication_ref": [], "table_ref": [ "tab_9" ], "text": "Our bibliometric analysis was meant to be comprehensive from the very beginning, as is evident from the research methodology described above. In view of that, three tools were selected for the analysis: RStudio (2023.03.0 Build 386), Bibliometrix (4.1.2) and VOSviewer (1.6.19). RStudio served as a supporting tool, since Bibliometrix is a package within the program. Bibliometrix was used because it is the only tool that supports the entire bibliometric process, while VOSviewer was selected as an extension and complement to Bibliometrix: in network analyses it proved more reliable and offers options and features not found in Bibliometrix. In this kind of research it is customary to list all techniques, analyses and metrics used and produced; however, as the research is extensive, such a list would be very long, so we instead direct the reader to Section 4 of the article, where all analyses relevant to our research are presented section by section.
In Table 8 the reader can observe the quality of data coverage. These data were produced by Bibliometrix as part of the procedure of importing references into the program. Out of 16 items, two are in the acceptable category, five are ranked as good, and nine are not missing any information at all.
This is important as it gave us the opportunity to make in-depth bibliometrics by not skipping any analysis for the reason of insufficient data quality. As the data is of quality, in the next section we are presenting bibliometrics and begin by presenting the results of preliminary data analysis." }, { "figure_ref": [], "heading": "Research Results", "publication_ref": [], "table_ref": [], "text": "This section presents the entire bibliometric analysis results, from preliminaries all the way to conceptual, intellectual and social analyses. Each analysis is clearly delineated by a subsection, within which one will find the interpretation of the data together with items of the analysis. In such a way the reader can be easily oriented and manage the content. Immediately after the section on research results, a section on discussion, implications and research constraints follows. The reader can therefore jump into a discussion, or follow the research and delve deep into analyses conducted. The first analysis deals with a standard descriptive statistic overview of the data, found in Subsection 4.1, and is a starting point for bigger things to come." }, { "figure_ref": [ "fig_0", "fig_9", "fig_0", "fig_1", "fig_2", "fig_1", "fig_1", "fig_2" ], "heading": "Preliminary Data Analyses", "publication_ref": [ "b122", "b123", "b124", "b125", "b127", "b98" ], "table_ref": [ "tab_10" ], "text": "The first year in terms of the time span of documents in the analysis is 1991. This is interesting, as this time frame coincides with the dawn of ever-increasing popularity and thrust of computers into masses, into the lives of a larger amount of people. And it seems as if this was happening, so was it more important to find out, and apply, what computers can do?\nIn Table 9 descriptive statistic on main information and keywords is found. The number of sources is substantial, therefore authors of research papers have quite a number of avenues to choose from. This is also indicative of relevance to the scientific community if there are so many sources publishing these kinds of papers, and as was seen by performing a literature search, this relevance goes beyond economics, as a large number of research is published in computer science journals.\nAfter final filtering was performed, the number of documents for bibliometrics was 1890 -a large number and a number where bibliometrics is needed. Such a number of documents shows that we are far from the fledgling days, and gives a reason and a motivation to ascertain what is the current state of the field. It is also indicative of the amount of research results, authors etc., and information supportive of the reasoning that the number of questions in need of resolving is not small, and with the annual growth rate at 12.9%, with no slowing down in sight as will be seen from further data analysis, this is a place where both academia and industry can find their place in the light. The question always however is, what will be the next big thing? Will the link between quantum computation and algorithmics in the near future be the successor or a parallel art? Time will tell, as the question hangs on the balance of viability of quantum computation, which is not yet certain, but there are indications that quantum computation is here to stay. Document average age of 5.6 years shows that the field is propulsive and produces a substantial amount of knowledge that is young, new. 
Such a situation confronts those wanting to enter the field with a steep learning curve and requires constant learning and self-improvement, but to be working in such an evolving field is also fascinating and rewarding. From all data in this table, but from later analyses as well, this trend will continue, and potentially even increase in speed and shorten the value for the average age of a document.\nConsidering that the number of documents is 1890, the average citation per document of 23.21 is high. This has been achieved either by a smaller number of highly cited articles (as later analysis shows a factor), or by other field characteristics such as number of authors, interdisciplinary nature of citing, number of research/industry projects in computing, and more specifically AI, etc. It is however possible, and it seems probable, that here it was a combination of factors that contributed to the situation as is now.\nThe number of references is also high, as according to these numbers it would mean that the expected value of references per document is 33. Since for a review paper one expects at least 30 references, preferably 50 references or more, the number of 33 is not so far from the review expected number of 50 references. The reasoning for such a situation can probably be projected from the reasoning for citation count, however, what will the future bring is more difficult to tell, as we are dealing here with a multidisciplinary environment -for the foreseeable future this trend will most likely stay the same. On a different note, documents with this amount of references should be well grounded, and with this amount of citations, relevant for the field.\nWith this amount of keywords, the expected number for a document is 2, not quite the usually recommended number of 5. It is difficult to tell from the analysis why is this so, it however does not necessarily mean that the papers are not well supported by keywords, as later inspection shows, there might be nevertheless some extremes. Useful for future reference is a data point about the comparison of keywords and keywords plus (generated from references [124]), which is roughly 1 : 2 -as it shows that it is possible to describe the structure of knowledge with half of the original keywords, but not so well the knowledge itself. [125] Further on, in Table 10, one can observe data on authors and collaborations. If we take into account the number of papers, then per paper there comes cca. 2 authors, so the papers are not saturated with authors, with only 225 authors producing a single document, that is 5.8% -indicating that those that are working in the field are typically here for a longer period, with intent to leave a more lasting contribution.\nAs for collaborations, it can be seen that there are 266 documents that have only one author, a number close to the number of authors that have only one document published, it is possible that a substantial part here comes from those single document authors, as these are less likely to have developed collaboration group. This number also indicates that we are dealing with a field where collaboration is high, as saturation with authors is not high, and there is cca. 14% of documents with one author only.\nA confirmation of this is seen in the number of authors per document, as the average is 2.91, a highly collaborative for computer science and economics it would seem, expected and a positive sign for a field that is at the crossroads. 
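As a minimal illustrative sketch of how such collaboration indicators can be derived (not a description of the study's exact processing pipeline), the following Python fragment computes the share of single-authored documents and the average number of authors per document from an exported bibliographic table; the file name, the AU column and the semicolon separator follow the usual Web of Science tab-delimited export convention and are assumptions here.

import pandas as pd

# Load a Web of Science tab-delimited export (file name is illustrative).
df = pd.read_csv("wos_export.txt", sep="\t", dtype=str)

# The AU field lists the authors of each document separated by semicolons.
authors_per_doc = df["AU"].fillna("").apply(lambda s: len([a for a in s.split(";") if a.strip()]))

single_authored = int((authors_per_doc == 1).sum())
print("Documents:", len(df))
print("Single-authored documents:", single_authored, f"({single_authored / len(df):.1%})")
print("Average authors per document:", round(authors_per_doc.mean(), 2))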
International collaboration is present as well, with 25.1% of co-authorships being international. This means that roughly one in four collaborations is international which, on a global scale and given the vast differences in how research is conducted, we would argue is a substantial percentage, and one that likely reflects other elements as well, such as networking, international collaborative groups, international research projects, etc. If further analyses are any indication, the number of authors and the extent of international collaboration will probably continue to grow at a significant pace.
Next, we have the analysis of annual scientific production in Figure 2. Things started slowly, with two articles published in 1991, and this pace lasted until about 2002; only then did AI start gaining momentum. It took a long time for AI technology to mature sufficiently, and with the additional difficulty of interdisciplinarity such a development is expected - research was scarce, and contributions were not abundant.
Then 2003 brought a difference, the start of a new era: a constant influx of papers, year by year, with an approximately linear trend. As sometimes happens in science, after a technology is discovered it is not immediately recognized, let alone applied, but in the 2000s things changed and researchers started to realize the potential of AI methods, techniques and algorithms. This had a secondary effect as well: an influx of scientists and practitioners into the field, making it more robust and advancing knowledge faster.
Something largely unprecedented happened, however, when the number of papers started to grow exponentially around 2018, the same year OpenAI published its first Generative Pre-trained Transformer (GPT, the model family later popularized by ChatGPT) [126] - with 2017 being the year the deep learning transformer architecture was published by a team predominantly from Google Research and Google Brain [127]. This placed AI firmly on the map, not only in economics but in many other areas - some would argue in almost every research area there is, perhaps all. Such an extremely steep trend in the number of publications has increased the application of AI tremendously, and innovation follows, whether through technology or through the optimization of methods. From all the information this graph offers, as well as from the further analyses below, the trend will continue for the foreseeable future and we will not be seeing a concept drift for some time.
Looking at the influence of average citation per elapsed years in Figure 3, after the original explosion citations grew ever more rapidly, ending in the extreme peak of 2020 - corresponding to AI's rapid development and growth over roughly the last decade. More concretely, there are several points of interest. The first is in 1994; however, since the number of documents in the preceding years is scarce, it is more appropriate to select the period up to 1999 for inspection. The next period of interest is from 2000 until 2004, with a need to specifically determine the situation in the year 2000. Afterward, there are two prominent peaks, in 2005 and then in 2007, so the period from 2005 until 2008 needs to be examined. Then we come to a period of exponential explosion of both published documents and citations.
During this time there are three relevant periods that need to be looked at more closely, 2015 -2018, 2019 -2020, and the last speaking of things to come, from 2021 -2023.\nAll of these have been analyzed with lesser or greater depth at different stages of the research, but for the most definite conclusion, one should go to Figures 31, and32, with the accompanying interpretation text. Alongside these points, 2017 is the first year where for the first time AI citations grew to such an extent that the influence of those citations was more worthwhile than average citations (influence of average citation was scaled to the average citation range having in mind minimum and maximum value, in a conservative manner), and as per the data, the last leg of quite rapid AI development began around 2015, with the first ending in 2003, 2004. Furthermore, in 2019 the world was confronted with COVID-19 epidemic conditions, and then pandemic conditions in 2020, which seems had quite an influence on research in a world that become predominantly online and digitally oriented, both in a business environment as well as in private life. The last three years of influence data indicate that the field will continue in, as it seems, exponential growth in the future as well, at least in terms of the influence of average citation -and if that is any indication of innovation, there are great developments in the works for AI and entrepreneurship-finance, with other areas likely following the same trend.\nAverage citation per elapsed years shows how at the beginning a small number of papers has through time accumulated a large number of citations and gained substantial relevance. The further down through time one goes average citation stabilizes into a \"line\", giving an impression that these documents are less relevant in terms of citations -this however is deceptive, as the average calculation does not take into account that recent years are more significant, in terms of cutting edge of the field, and that it is far more difficult to accumulate citations in a short time.\nTo solve this issue we are suggesting the influence of average citation per elapsed year -a detailed description of the calculation can be found in Appendix A. The basic idea behind the influence of average citation is that this link between average document citation and the number of elapsed years is exponentially inversely proportional, giving more importance to recent years -a measure going beyond only taking into account how many years have passed, but at the same time asking what was the strength of those years in terms of citation accumulation and cutting edge of the field.\nIf we compare the influence of average citation with the Figure 2 and accompanying events it is clear that such a curve more closely describes the general situation, and therefore it is advisable to take into account both measures, average citation and influence of average citation per elapsed years.\nBy observing the influence of the average citation we can discern three periods, and one emergence point. The first period ends around 2003, the second ends around 2015 (while in 2014 the soft-search neural machine mechanism for translation was published [128], an important piece in later GPT models), with the last proceeding from around 2015 -the last two periods are delineated by an emergence point of 2016-2017 transition. 
Here average citation influence has overtaken average citation, with documents becoming influential in terms of the mentioned fixed point, and highly influential from that point onward (influence of average citation was scaled to average citation range with having in mind minimum and maximum value, in a conservative manner).\nLastly for this analysis, before the emergence point, there were two periods that were influential during their time, the first during 1994-1995, and the second during 2002-2007 -even though by looking at individual points they are far apart, generally speaking, extending to neighboring points, they are influential, which corresponds to the increased interest in AI, coming of age, and then applicatory and scientific breakthrough during the last few years. As for the foreseeable future, documents will most likely stay in the range of influential to highly influential since AI developments are far from becoming stagnant.\nAs per research objectives, the focus is topics, methods, techniques and algorithms in the context of AI, Finance and entrepreneurship, therefore a number of Sankey diagrams were created so as to place these in relation to other items of interestall diagrams were created with the same settings, defining the maximum number of items so as to leverage depth and breadth. In Figure 4 we see, as per central pillar, prevalent topics, in general Artificial Intelligence paired with performance evaluation and credit assessment. These themes are overly general in order to dive deep, but they do give a broad overview of what is being researched. If bankruptcy and bankruptcy prediction are merged, then bankruptcy is a bigger element than artificial intelligence -essentially making the field predominantly about machine learning and performance evaluation.\nOn the left, countries producing those themes are placed, with China and the US in the lead, followed by India, the United Kingdom, Korea and Spain. China and the USA have the biggest footprint in machine learning, approximately equal, and the same goes for AI as well, while Korea has a substantial footprint in bankruptcy prediction. The computer science side of things is dominant, with performance evaluation having a significant but substantially smaller piece of relevance. These three countries are in the lead -other countries are significant but more dispersed across topics.\nOn the right one can observe affiliations, and as the data shows, National Central University, Islamic Azad University, and Chinese Culture University are the biggest.\nNational Central University's largest contribution is in machine learning, bankruptcy prediction, and data mining, without having an impact in deep learning. Islamic Azad University's impact is dispersed, approximately evenly, without having an impact in neural networks. Chinese Culture University's greatest impact is in machine learning and bankruptcy prediction, without having an impact in deep learning and credit scoring. Out of 8 universities, five are Chinese, according to the data on the left. No university covers all topics, and perhaps an interesting point, both National Central University and Chinese Culture University are having a substantial impact in machine learning, and bankruptcy prediction, without having an impact in deep learning, perhaps an indication of aligned focus. 
Out of the universities making the list, Dalian University of Technology has strict specialization in machine learning, the only such case here, it however does not have the greatest impact, regardless of the aforementioned, as well perhaps an indication of a focus, reason of which is unclear.\nAs we are primarily interested in concepts, that is what will the content reveal, we will again keep as a central pillar keywords, and place them into juxtaposition with sources and source citations, Figure 5. Topics are the same as in Figure 4, with the size of the footprint being the difference, and aside from the fact of machine learning domination, computer science is in the back of the queue -bankruptcy and machine learning are again at the forefront, just in reversed order this time, being an indication of a larger picture. Thus it seems that top countries are more geared towards the technological aspect, as observed in Figure 4, while top sources are geared more towards the practical domain, as seen in Figure 5.\nOn the left, we see sources, Expert Systems with Applications, IEEE Access, and the European Journal of Operational Research (EJOR) as being the most prominent ones. All the sources are dispersed in the themes they cover, and it is difficult to single anyone out, with not all the sources covering every topic -so there is a certain amount of specialization there. If we look on the right side, we observe sources again, but citations in this case.\nOne would expect, when we compare sources and their citations, for these lists to have substantial overlap, and they have, but there are surprises as well. Out of the 8 journals on the left, three are missing on the right: IEEE Access, Sustainability, and Journal of Forecasting. Their output was not enough to earn them citation relevance on the right side, not to put excessive emphasis on this and discredit these journals, but could this kind of analysis be relevant, as one of the factors, in trying to detect predatory journals? The absence of IEEE Access is of special question, as its imprint on the left is significant, of course a broader analysis would probably include that journal as well, but it is strange enough to ask the question, what has happened so that these three journals are missing on the other side.\nAs for the situation with source citations, Expert Systems with Applications reigns supreme, a first one on the left side as well, with European Journal of Operational Research, and Decision Support Systems (a third place here, but not as relevant on the left, showing that output is not a foolproof way for high citation count, but not irrelevant either). Newcomers here, not found on the left, are the Journal of Finance, Journal of Accounting Research, Management Science, and Journal of Banking & Finance, in spite of not being so highly relevant in terms of output and citation they are very impactful.\nWhen we look at the journals themselves, we see that they correspond well with the topics in the middle, with computer science journals having the lead, indicating that the central point of the papers is the method itself, with the application domain having a supportive role. These kinds of analyses can be used by authors to decide in which journal they want to publish, e.g. 
Expert Systems with Applications publishes across all topics and contributes very strongly, in terms of citation count, to machine learning, bankruptcy and credit scoring, with the second and third journals on the right following a similar pattern.
The last analysis in this subsection is in Figure 6, a Sankey diagram and a continuation of the previous ones. Keywords are in the middle; we deliberately keep the same fixed central pillar for every Sankey diagram, as this allows for comparison between diagrams and pillars. The topics are the same, performance themes are again dominant, and bankruptcy prediction is vastly superior (a consequence of the variables selected to create the diagram), with machine learning and neural networks following. There is a pattern here: in the general case, performance evaluation is in the lead, followed by a substantial presence of artificial intelligence, machine learning in particular. This is a logical consequence of analyzing a field that is an intersection of AI, finance, and entrepreneurship - the field is geared towards economic themes and draws strongly on AI, essentially the application of AI in finance and entrepreneurship, with the most prominent journal being Expert Systems with Applications, a fitting name for the purpose and one that aligns with the field's research activities.
All topics are quite strongly connected to all the elements on the left, the Keywords Plus. Considering that Keywords Plus are algorithmically generated from document references, they represent the foundational knowledge on which contributions rest, and as can be seen, that foundational knowledge is predominantly bankruptcy prediction and neural networks, together with a number of themes drawn, as far as can be gathered, from AI, economics and statistics. In order to produce the contributions in the middle, researchers have stood on existing models and on knowledge of prediction, classification, performance evaluation and statistics, coupled with the influence of neural networks - this is what the analysis of the top items reveals.
On the right, the references can be seen; these represent the intellectual roots of the field. The top three references contributing the most to the topics of interest in the middle are Altman, Ohlson, and Beaver - however, "none of their work is based on an artificial intelligent-based approach, due to the fact that all aforementioned work are pioneer studies in the bankruptcy prediction field, the posterior authors tend to cite them in their papers with high frequency." [99] Among the other references, those that are AI-based are Kumar, Min, Tam, and Shin, while Zmijewski deals with methodological issues and financial distress prediction. There is an equal number of economics-based papers (also combined with other aspects) and economics AI-based papers, with the caveat that one paper did not have its title captured, indicating that both fields have approximately the same relevance.
When one compares the intellectual roots on the right and the foundational knowledge on the left there is correspondence, but things are changing.
On the left it appears that the prevalent themes are largely in economics, a reasonable result since economics is the home discipline here, while on the right we see as many references leaning toward economics as references that are AI-based, indicating a potential change in direction, since the recent references are all AI-based.
Computer science is increasingly entering economic thought, and it seems that the references, and through them the foundational knowledge as well, are becoming AI-based, so the intellectual roots of the field are being modified. This will most likely continue and will produce even more citations and scientific contributions where AI and economics go hand in hand. The question is: will there come a time when AI is so intertwined with economics and society that AI-based papers are all there is to see here? As on the left, so similarly on the right: the references are linked to all or almost all topics in the middle, indicating broad relevance."
}, { "figure_ref": [], "heading": "Sources Data Analyses", "publication_ref": [], "table_ref": [ "tab_13" ], "text": "In the sources data analyses, the first insight to consider is in Figure 7, where we see sources and their respective output in terms of the number of documents. By far the most prolific source is Expert Systems with Applications, followed at a considerable distance by Computational Intelligence and Neuroscience; the remaining sources decrease gradually, each not far behind the previous one. ESA clearly shows a great deal of specialization for the field at hand and is a strong venue for authors to publish in, with the other sources also being potential outlets for research publication. These are also publications one can read to be well informed about the subject.
Out of all the sources, the Journal of Forecasting is the only one with an economic, social and behavioral focus and without a strong computer science leaning. All others range from small to very large computer science footprints, indicating how computer science sources present fertile ground for application-based and interdisciplinary research. Economics journals, and perhaps most others as well, are in a difficult position when ascertaining the merit of research that has strong computer science, and especially artificial intelligence, elements. This situation is also potentially an indication that computer experts are the dominant authors, leading and carrying out the research.
In Figure 8 we continue with the most locally cited sources. ESA is in a strong lead again; considering the journal's output in terms of the number of documents and its apparent high specialization, this is far from surprising. However, there are surprises, as a number of sources from Figure 7 are missing here, and new sources have entered. This time economics journals have a more commanding presence in terms of relevance, and there could be multiple causes. It might be that some sources are not producing relevant content; it might be that citation practice is influencing these results; it could be that a lack of computer science expertise is producing lower citation counts for certain journals; or it might be that in a predominantly economic field, economics takes center stage.
These sources can also be inspected in terms of intellectual roots, as they are the ones cited in so many documents, and they represent the publication branch of the field.
By having economics sources more relevant, there is a coupling of themes, a consequence of which is so strong a link between AI and performance evaluation, as seen in previous analyses. Data in this figure, as well as many others, follows the Pareto distribution, and it is interesting how many phenomena follow such or similar distribution, and how many potential errors can be detected by inspecting whether or not something is following such a curve.\nWith that being said we can turn our attention to Table 11, determining core sources as per Bradford's Law. A more detailed look than before, 20 sources, with ESA being in great lead and being the number one source.\nESA is followed by other sources -here as well computer science journals are dominating, or at least the ones having a computer science component. Per the number of articles, after ESA, sources are close to one another, most likely a consequence of source scope, journal relevance, publisher, prominence, etc.\nIf one takes a closer look at the total sum, it can be seen that the sum equals 629, a substantial amount of papers, since the number of papers analyzed is 1890, this makes 33.28% -in accord with Bradford's Law, a high amount of documents, published in a small number of sources, if we recall that total number of sources was 637, this makes only 3.13%. A classical example of a center of power so to speak, often seen in nature, and the world we live in, that naturally leads authors to gravitate to these sources.\nFollowing a strain of thought, we are arriving at h-index, in Figure 9, a measure combining both the number of documents and citation count. If we compare this analysis with the one in Figure 8, there is a clear difference, some publications are perhaps in the extreme with a small number of highly cited documents, which index h will demote -5 out of 10 are missing, a substantial amount, a warning for ascertaining an object with one measure only.\nESA is once more on top, with others following, but far behind -with one measure after another, ESA confirms its relevance, making a strong case for its top position. From EJOR onward, other publications are gradually following, most likely a situation similar to that before. As a whole, by observing all sources and their corresponding indexes, a Pareto distribution as well. When looking at not only citations but the spread of those citations as well, these are the sources that are relevant.\nIn Figure 10 we can observe sources of cumulative production of documents. All sources started slowly in the early 1990s, after which as per field development and an increasing AI presence document production has for the most part steadily increased. There are however a number of points of interest.\nESA approximately follows the same trend as other sources, but then in cca. 2008 there is a takeoff that has continued to this day, indicating that this journal has very early on recognized AI relevance in finance and entrepreneurship, and it seems established a strong venue for these kinds of papers. During this time, other journals have also increased their output, there is an obvious change here, but far less than ESA.\nThere are other moments in time, with Sustainability, Computational Intelligence and Neuroscience, Mobile Information Systems, and IEEE Access. A long time has passed since these have started to publish a substantial amount of documents in this crossroad field, with one of the factors being the lifetime of the journal. 
With that in mind, both authors and the journals needed to recognize the importance of the area scope, and as well so to speak find each other.\nDuring the first three years of the entire period, there was only one journal that was published, and that was Knowledge-Based Systems. This journal has published one document every year of those first three, after which this trend has varied from low to moderate, and is at the moment 7th journal in terms of the number of documents published. It is perhaps a strange fact that this journal that was a pioneer in publishing such papers is not publishing more, on the other hand, neither document output, nor citations, etc. is an indication of quality, which can be ascertained only by reading the paper, and it is possible that the journal has rigorous criteria for accepting a document for publication -of course with other possibilities for this situation also.\nBy looking at sources' yearly production of documents, in Figure 11, the image can be additionally cleared up. When ESA had a document renaissance, both the European Journal of Operational Research, and Knowledge-Based Systems started to join the trend, but only Expert Systems with Applications has continued it.\nBy the end of the period, around 2016 and onward, ESA is not as drastically ahead as it seemed in cumulative analysis, but it is in the top three nonetheless, with the Decision Support Systems' peak much more elucidated here. If we observe 2022, ESA is strong, however, Mobile Information Systems is a small amount ahead, with Computational Intelligence and Neuroscience jumping to an enormous lead, indicating that perhaps a change is ahead, with other journals taking center stage, and ESA behind, but following -such a situation could be a consequence of ESA saturation, and other journals taking more of a prominent role in AI, entrepreneurship and finance.\nJournals are typically trying to be of quality and on the cutting edge, this can potentially lead to an overproduction, has this happened here we can't say for certain, but considering the data, for a number of journals it is a possibility." 
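Returning to the Bradford's Law analysis summarized in Table 11, a minimal sketch of how a core zone of sources can be identified is given below; it assumes a simple mapping from source title to article count and approximates the core as the smallest set of top-ranked sources jointly covering one third of all articles, in the spirit of the 629 of 1890 documents (33.28%) reported above, not as the exact procedure implemented by the bibliometric software.

def bradford_core(source_counts, share=1/3):
    # source_counts: mapping from source title to number of articles published there.
    total = sum(source_counts.values())
    core, covered = [], 0
    for source, n in sorted(source_counts.items(), key=lambda kv: kv[1], reverse=True):
        core.append(source)
        covered += n
        if covered >= share * total:
            break
    return core, covered

# Toy example with made-up counts, for illustration only:
counts = {"Expert Systems with Applications": 180, "EJOR": 45,
          "Decision Support Systems": 40, "Journal A": 30, "Journal B": 25}
core, covered = bradford_core(counts)
print(core, covered)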
}, { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_1", "fig_1", "fig_1" ], "heading": "Authors Data Analyses", "publication_ref": [ "b106", "b121", "b106", "b121", "b106", "b121", "b105", "b132", "b133", "b134", "b106", "b121", "b136" ], "table_ref": [], "text": "When conducting a bibliometric analysis of scientific data it is natural to analyze the authors' data and interpret the authors' relevant metrics (number of published documents, citations, h-index, etc.), so as to see who is who in the field.
It was, however, not possible to perform such an analysis, as during the research it was detected that Bibliometrix [107,123] has some serious issues in that respect: the unique identification of authors does not work as expected. In one extreme situation, for a particular author's complete name abbreviation, the program used about 10 different authors in order to calculate a metric 8 .
In such a situation confidence in the results is seriously shaken and the analyses would be useless, and since every such analysis depends on the unique identification of an author, we had to forgo them - an unfortunate but necessary decision, at least when using Bibliometrix [107,123].
Aside from Bibliometrix [107,123] there are other tools [106] with which one could conduct at least part of the analysis, if not all of it, and as per our inspection VOSviewer [133,134] is one of the more prominent ones and appears trustworthy [135]. Thus, in order to analyze authors and their respective metrics we used this tool: it calculates the data we need, and even though it does not present it in the way Bibliometrix [107,123] does, the data is all we need, as the analysis and visualization can be done by the researcher.
The first analysis was performed on the most relevant authors by number of documents 9 and is seen in Figure 12. Here we see three groups: the first is made of two authors with 37 and 35 documents; then there is a substantial fall to 21, still a high count; after which there is a group of 7 authors ranging from 8 to 12 documents, a group of authors close to one another that could be further divided into four subgroups by number of documents. The data, and the curve that could be drawn through the entire dataset's points, approximately follow Lotka's law, which states that as the number of documents increases, the number of authors who have published that amount decreases in inverse proportion to a power: the number of authors publishing $n$ documents is roughly $\frac{1}{n^{2}}$ of the number of authors publishing a single document; e.g. if 10 authors have written one document each, then approximately 1 author has written 3 documents, since $10 \times \frac{1}{3^{2}} \approx 1$ [136].
After the number of documents published, we are interested in citations, presented in Figure 13. The top authors have substantial local citation counts, indicating high relevance for the field, brought about by researching prominent themes such as prediction, AI algorithms, etc., as the bibliographic data reveal. Looking at the data for the top 10 authors, three groups are clearly observable, with the first two having three authors each and the last having four authors. Such a situation is potentially an indication of collaboration, or of research that is closely related. As for the distribution, it is somewhat more linear than the previous one; however, when we consider the entire dataset, a Pareto-like distribution is in place here as well. When we compare with Figure 12, only three authors here are the same.
A number of authors in Figure 12 does not have as long a publication history, however, a number of authors in Figure 13 has with a few papers, and some are of an older nature, achieved high relevance -a clear indication that a number of papers is not everything, but is a factor.\nThe last thing we wanted to achieve is to calculate standard and amortized hindex for authors, so as to evaluate impact. Considering problems with the tools it was decided that we would try to uniquely identify authors via Google Scholar, and the author corpus for this task was the authors analyzed for the most number of documents published in Figure 12 and most locally cited authors in Figure 13 -as those authors are logical candidates for the entire dataset, and while analysis of the entire dataset is not possible, these logical picks are most likely being correct or close to correct. Nevertheless, it was not possible to make a unique Google Scholar identification as all the authors either do not have an account or perhaps they have changed affiliation and it was difficult to be sure if that is the right author. An effort was also made with a well-known tool Publish or Perish [137], but to no avail. As a last resort to make the research as complete and rigorous as possible, we have searched the authors directly in Web of Science and obtained index h in such a way -one should take note that such h-index was calculated from Web of Science classification10 , and is a few months newer than our own data. This was however the best possible solution and was worth taking since it does represent the correct state in science, it is just not directly comparable with our own analyses, it does give authors relation to each other, just in a general manner on a larger corpus of documents. Index h calculation for the aforementioned corpus of authors, from which the top 10 were taken, following set methodology, is presented in Figure 14.\nWith the mentioned constraints, out of the authors in Figure 14 6 appear in the most cited analysis, while four are not, with those four still appearing in a number of documents analysis -the results are close, it could however indicate that if one has large citation count, a chance to fare well in index h is greater than if only document count is high. As for the authors in Figure 14 that appear in both citation and document analysis, there are three authors falling into that category (with the overlap of citation and document analysis being those same three authors), indicating the importance of balance between document count and citations, as extremes will induce low/lower h-index.\nIt should be noted that these extremes can be unfairly penalized by h-index, e.g. an author can have 10 papers, with 5 papers having 100 citations each. Such a situation would produce index h of 5, presenting that an author has 5 documents with 5 or more citations, which is true, but a gross understatement of an author's performance. Such an author could potentially be a candidate for some of the most prestigious scientific awards, with h-index not giving such an indication.\nWhen the top 10 authors as per h-index are considered, distribution is for the most part linear, a logical consequence of index h cutting the extremes, therefore these authors are most likely well-balanced in terms of the number of documents and relevance. However, with a substantial number of low indexes appearing at the bottom of the entire dataset, distribution as a whole would be similar to Pareto. 
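For clarity, a minimal sketch of the standard h-index, together with one common way of "amortizing" it by the length of an author's publication record, is given below; the exact amortization formula used in this study may differ, so the per-year scaling shown here should be read as an assumption, not as the study's definition.

def h_index(citations):
    # citations: list of citation counts, one per publication.
    cites = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def amortized_h(citations, first_pub_year, current_year=2023):
    # One common variant (an assumption here): scale h by the number of active years.
    years = max(current_year - first_pub_year + 1, 1)
    return h_index(citations) / years

# Reproduces the example from the text: 10 papers, 5 of them with 100 citations each -> h = 5.
print(h_index([100, 100, 100, 100, 100, 0, 0, 0, 0, 0]))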
The future will most likely see these authors in the same places, for some time at least, as the field will most likely develop stably and grow in the present direction for the foreseeable period.
With the amortized h-index, introduced so as to mitigate the problem of different starting years of publication activity, the following is observed. Zhang's relevance has substantially diminished, as has Ravi's, who now sits ahead of Zhang - with Tsai, Chih-Fong; Li, Hui; and Sun, Jie taking the top three places, respectively. So when we take into consideration the idea behind the amortized h-index and scale the standard h-index accordingly, the changes reveal a somewhat different ranking, namely in the repositioning of Zhang, GQP; Ravi, V.; and Xu, Wei. This is relevant information, as some authors have been overly emphasized by the standard index h, with other authors being of higher importance than it originally seemed."
}, { "figure_ref": [ "fig_2", "fig_2", "fig_2" ], "heading": "Affiliation Data Analyses", "publication_ref": [ "b106", "b121", "b132", "b133", "b106", "b121", "b132", "b133", "b136", "b137" ], "table_ref": [], "text": "We then moved on to the affiliation analysis; however, it again turned out that Bibliometrix [107,123] has problems uniquely identifying institutions, a problem that seems analogous to the one with the authors, since as per our manual analysis the number of affiliations was inflated. Therefore, for this analysis we used VOSviewer [133,134], that is, the data calculated by that tool, which is more capable in this regard.
The first analysis concerns the most relevant affiliations by the total number of published articles and can be seen in Figure 15. The distribution here is also Pareto-like, with two institutions being very dominant, while the others decrease approximately linearly in terms of the number of publications. Zhejiang Normal University is, it seems, far ahead of any other institution and highly specialized in the observed field, while National Central University follows. Institutions not making the ranking are involved in research on AI, finance, and entrepreneurship, but to a substantially lesser degree and presumably with other fields of research taking more prominent or equal roles.
Once document output is known, the next useful information is citation count, presented in Figure 16. Institutions are closer to one another in terms of the number of citations than in the previous analysis, indicating that the number of documents is not the only criterion for achieving a high citation count.
If we look at all institutions, beyond these 10, the distribution would also conform to a Pareto-like curve, just with a more competitive edge in this instance, as if there is a battle for influence. It is also possible that such grouping is a consequence of collaboration or some other linking factor.
When compared to the document output in Figure 15 there are only two overlapping affiliations, Zhejiang Normal University and National Central University, which is both telling and surprising. It would not be strange for this to happen for institutions at the back of the line; however, when an affiliation has high document output, not appearing in the citation count raises questions.
The top two affiliations in terms of document output are also highly relevant in citation count, while the other affiliations are missing altogether, as if there were a gap -either in quality or perhaps a temporal delay of relevance caused by other factors, some of which could be affiliation renown, track record, current employees, alumni, prominence, country of origin, main discipline of research, policy alignment, state affiliation, grants, industry collaboration, size, etc.\nThere is one more analysis whose results are worth presenting, namely the h-index for affiliations. Neither Bibliometrix [107,123] nor VOSviewer [133,134] provides this measure in the aforementioned context, so to obtain it we used the tool Publish or Perish [137] and extracted the data via Crossref [138], a well-known and respected knowledge metadata database heavily used by publishers and institutions -in this way the analysis does not depend on any particular scientific/professional database and is free from potential skewing.\nSuch an h-index is not directly comparable with the analyses conducted on our own data, yet it gives insight into how the institutions from our local data compare when global reach is kept in mind, which is both useful and interesting. As a basis for the analysis, the affiliations most relevant in terms of the number of documents published in Figure 15 and citation count in Figure 16 were used. These institutions are the focus of the research and therefore the focus of the h-index calculation as well. The h-index calculation for the aforementioned corpus of affiliations, from which the top 10 were taken following the set methodology, is presented in Figure 17.\nAs it is difficult to compare this analysis with those before, we will forego such a discussion, but we can comment on the analysis itself. NYU is in the lead, far beyond the rest of the affiliations. After NYU, GSU and CUHK follow, respectively. The rest of the affiliations come in close descending order.\nObserving the general picture, we again see a Pareto-like distribution, with NYU reigning supreme, followed far behind by GSU, and then the rest of the affiliations following closely one after another.\nIt is, however, not only the entire period that matters, but the latest developments as well, and these can be seen in Figure 18. NYU is still highly relevant and far in the lead, though this time ZNU has jumped from 7th position to second, in line with previous analyses, indicating recent developments and ZNU's latest prominence. Afterward, five affiliations form a group, perhaps an indication of interrelationship, but without deeper analysis it is difficult to tell.\nA last point of interest here: there are two new affiliations in this analysis compared to Figure 17, with Chinese Culture University and Oriental Institute of Technology missing, while Dongguk University and the Institute for Development and Research in Banking Technology make an appearance. This is an indication of current and potentially future developments, in terms of not only citations but also the spread and broader relevance of publications. The situation could be a consequence of document counts and the strong technological presence in today's world, notwithstanding other potential factors; yet without the ability to compare directly to the local research dataset, and given the constraints of bibliometrics, it would not be prudent to overly project."
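As an illustration of the Crossref-based route described above, a minimal sketch of pulling citation counts for an affiliation from the public Crossref REST API and turning them into an h-index. The free-text affiliation query and the simple matching are assumptions for illustration only; Publish or Perish performs more careful matching, so this is a rough approximation of that workflow rather than its implementation.

```python
import requests

def crossref_affiliation_h_index(affiliation, rows=1000):
    """Approximate an affiliation h-index from Crossref metadata.

    Queries the Crossref works endpoint by affiliation and uses the
    'is-referenced-by-count' field as the citation count. Free-text
    affiliation matching is noisy, so treat the result as an estimate.
    """
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.affiliation": affiliation, "rows": rows,
                "select": "is-referenced-by-count"},
        timeout=60,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    cites = sorted((it.get("is-referenced-by-count", 0) for it in items), reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Example call with a hypothetical affiliation string:
# print(crossref_affiliation_h_index("New York University"))
```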
}, { "figure_ref": [ "fig_16", "fig_16" ], "heading": "Countries Data Analyses", "publication_ref": [ "b137", "b137", "b137", "b137", "b138", "b139", "b137", "b137", "b140", "b137", "b141", "b137", "b142", "b137" ], "table_ref": [ "tab_2", "tab_2", "tab_2" ], "text": "Sources, authors and affiliations are linked, and if we generalize the picture, it is ultimately countries that we are speaking about -as revealed by the authors and affiliations alike. Such an analysis is first presented in Table 12, showing countries' scientific production.\nThe most prominent countries are China, the United States of America, and India, followed closely by the United Kingdom. China is by far the most productive (not surprising considering its population size and number of scientists [139]), followed by the United States of America, which is also a very high-producing country. After that, the relative measure against the baseline slowly decreases, without such large differences. These results indicate the strong focus and interest of China and the United States of America in AI, entrepreneurship and finance, and if further innovation comes, it will most likely predominantly come from these countries. Most of the countries in Table 12 are the wealthy ones of the world, with strong economies and a high focus on innovation, so such results are not surprising, and the future will most likely bring a similar trend [139].\nIn Figure 19 we can see corresponding authors and collaborations. Before interpretation, one needs to know what it means to be a corresponding author. Unfortunately, this definition is not clear cut, and it can have various meanings, from simply being the one in charge of communication, to having a substantial research impact, to having the greatest research impact and being the problem bearer. The interpretation changes depending on the definition adopted. In our case, we adopt the meaning of the problem bearer, the one who has contributed the most scientifically, as of all the options this is the meaning most commonly leaned towards -it is a general supposition, most likely correct, yet it is a constraint and should be viewed as one, since deviation from such a definition can be extremely pronounced across fields and countries.\nThe most contributing nations, it would seem, are China, the USA, and Korea, with China having a strong lead and the USA also being substantially more prominent than other nations. The other nations are more closely related in terms of number of documents and respective contributions to the scientific field of focus. It is possible that the position of China and the USA, and in a sense Korea as well, is at least partially produced by cooperation between these countries. Here too a high share of the Western world and wealthy nations can be observed, as everything has a price, and those that have the right conditions are those that produce, although there are other countries, such as India, for example.\nThe collaboration aspect, however, tells a somewhat different story. Of the three most published, that is China, the USA and Korea, the USA is in the lead in terms of multiple country publications with 24.3%, indicating the environment most geared toward collaboration with the outside world. This, however, is not the top, as the UK with 59.2% outmatches everyone by far, possibly a consequence of the Commonwealth and a highly multicultural society -while France and Italy follow strongly, even beating the USA.
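To make the collaboration shares above concrete, a small sketch of how the multiple-country-publication (MCP) share is typically derived in Bibliometrix-style analyses: for each corresponding-author country, the share is MCP / (SCP + MCP). The counts below are hypothetical, chosen only to roughly reproduce the percentages quoted above.

```python
# single-country (SCP) and multiple-country (MCP) publication counts
# per corresponding-author country -- illustrative numbers only
counts = {
    "UK":  {"SCP": 40,  "MCP": 58},  # ~59.2% MCP
    "USA": {"SCP": 280, "MCP": 90},  # ~24.3% MCP
}

for country, c in counts.items():
    mcp_share = c["MCP"] / (c["SCP"] + c["MCP"])
    print(f"{country}: {mcp_share:.1%} of corresponding-author papers involve multiple countries")
```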
There are other countries with rather high collaboration shares as well, but with substantially fewer documents, and therefore perhaps not as significant.\nCountries' production over time is presented in Figure 20, an important analysis for ascertaining the trend and potential future impact. Two curves obviously stand out, China and the United States of America; all the others are substantially below and could in a way be grouped together, in spite of differences in timing and count.\nThe USA played an important role for quite a number of years before China started its ascent around 2005. By 2008 the two countries were equal, and in 2009 China overtook the USA. Since then, in absolute terms, China has been publishing substantially more documents. It is, however, fascinating to compare these results against other factors, e.g. population size, number of scientists [139], time available for research [139], etc. China has a population of cca. 1,413,659,000 [140], while the USA has a population of cca. 339,277,000 [141], which makes China cca. 4.17 times larger than the USA. Therefore, as China's document count is 2063 (as stated in Table 12), one would expect cca. 495 documents from the USA, as opposed to the actual 717 -which makes the USA extremely productive, cca. 45% above the expectation set by China; the same calculation can be performed for other countries as well. Thus it seems that there is more here than meets the eye and that special care needs to be taken when interpreting bibliometric results.\nAll the countries have had a positive trend, during the last few years at least, with the USA and China being on the extreme side of things (in line with their enormous funding of science [139]) and, as it seems, highly relevant. As confirmed by other analyses, this is an explosion driven by the huge leaps in AI, and one that will most likely continue for the foreseeable future, with other countries in a supportive role.\nIt is of course part of the picture to ask how cited all of these documents are, and this we answer in Figure 21. Not surprisingly, China, the USA and Korea lead the group of most cited countries (in line with the enormous funding of science [139]), with a Pareto-like distribution being observed. Here it can be seen that the USA is substantially more relevant than China, not in absolute terms, but when e.g. population size is taken as a factor, with Korea beating both of them, as Korea (South Korea) has a population of cca. 51,268,000 [142] -which makes Korea extremely successful and a powerhouse in terms of relevance measured by citation sum (in South Korea \"research spending in the university sector has quadrupled\" over the past several decades [139]).\nThis rule of looming larger while being smaller might be deceptive; one might say that population could be an aggravating factor, nevertheless in the case of India, for example, it seems that population size is not the decisive factor, as that country has a population of cca. 1,370,695,000 [143], a number quite close to China's, yet without China's results -indicating that population size is not the constraint, but that there are other factors at play [139]. The same can be seen with Spain, population cca. 47,900,000 [144], approximately the same population size as Korea yet far less relevant.
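A short worked version of the population-normalization argument above, using the population figures and document counts quoted in the text; the "expected" count simply scales China's output by the population ratio, which is of course only one possible baseline among several mentioned (number of scientists, time available for research, funding, etc.).

```python
# figures as quoted above (documents from Table 12, populations approximate)
docs = {"China": 2063, "USA": 717}
population = {"China": 1_413_659_000, "USA": 339_277_000}

ratio = population["China"] / population["USA"]      # ~4.17
expected_usa = docs["China"] / ratio                  # ~495 documents
over_performance = docs["USA"] / expected_usa - 1     # ~0.45 -> ~45% above expectation

print(f"population ratio China/USA: {ratio:.2f}")
print(f"expected USA output at China's per-capita rate: {expected_usa:.0f}")
print(f"USA above that expectation by: {over_performance:.0%}")
```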
Although it is not excluded that population size can be a factor, and it probably is one of the factors, it seems that this factor can be accommodated [139].\nOn the other hand, the absolute number shows overall strength, regardless of other issues, and is something that must be taken into account and cannot be disregarded. It not only establishes a trajectory, but also creates strong fixed points for citation, relevance, collaboration, etc., and this strength is difficult to ignore, both globally and locally.\nTo come full circle and address the issue as rigorously as a study of this size allows, we have contextualized the citations and presented an expected value, as seen in Figure 22, with average citations per article again following a Pareto-like distribution. The results are both interesting and in a way surprising. Of the countries in Figure 21, representing total citation sum, China, India, the UK, and Italy are not represented in the average citation per article analysis -while Malaysia, Norway, Qatar, and Cyprus have made an entry.\nThis indicates that the missing countries have produced relevance through a large number of articles that are individually less relevant, which is revealed once the average is taken into consideration, with the reverse holding for the newly appearing countries. The top three countries are then Korea, Malaysia, and Germany, with both Korea and Malaysia being contenders from which one would not expect such high relevance, yet here they are. The same could be said for some other countries as well, with the 9th place of the USA being quite a surprise, while the absence of China from the top 10 is an even bigger astonishment.\nA Pareto distribution, or perhaps a Weibull, is found to describe these phenomena, and many others besides. This is significant in at least two respects. The first is that it can serve as a detection tool for an analysis, a way to determine right at the beginning whether what one has produced is a correct/complete picture of the situation. The second is more informative in nature, telling us that the typical case has a few best performers, followed by all the others that are perhaps also solid but nowhere near the top." }, { "figure_ref": [ "fig_5", "fig_6", "fig_5", "fig_2", "fig_2" ], "heading": "Documents Data Analysis", "publication_ref": [ "b143", "b143", "b144", "b145", "b146" ], "table_ref": [], "text": "When examining the documents, it can be observed that many publications are in operations research, a discipline well known and heavily researched in the past. Nearing the present day, the research is shifting to computing, information and algorithms. Time will eventually tell, but it seems that computer science will deal with the issue more favorably and bring the solutions into the lives of a large number of people in an accessible manner.\nContinuing from the analyses for countries, we come to an analysis of the most locally cited documents, as seen in Figure 23. The highest citation count is 187, and the three most cited documents are not particularly old, indicating that their relevance is not merely a result of age. Notwithstanding that fact, aside from one document published in 2017, the youngest document is 15 years old, published in 2008, with the oldest documents going all the way back to 1994.
This indicates that, generally speaking, one needs to wait a number of years, far more than the usual citation accumulation period of two to five years, before one can expect to shine in the top 10 of the subject area published in.\nThere is, however, that paper from 2017 -an exception, yes, but worthy of notice nonetheless, as exceptions are exactly what is being examined here. A much younger paper than the rest, yet it entered the top 10 most cited documents nevertheless. It was published in Expert Systems with Applications, as already established quite the influential journal in the observed field, and thus perhaps an expected outcome. The paper itself deals with machine learning and bankruptcy prediction; as later analyses will reveal, these are themes of high interest, indicating relevance of what follows, as well as prominence of the topics in the not-so-distant past and most likely in the near future at least.\nThe citation distribution, at least for the presented data, is approximately linear, and the documents are not that far apart in citation count. This indicates competitiveness and that a group of documents of similar relevance is to be expected among the top cited documents of the field at hand -a logical result, as a discipline, or a field, is comprised of a corpus of knowledge, not a few all-containing documents, and document citations seem to follow that rule.\nAs added information to the analyses, there is also an expected value, so as to complete the picture of the situation. For the most part, the average reveals the same order of relevance, with two standouts. The first is that same document from 2017, with a value substantially higher than that of the first document -additionally confirming the relevance of the paper and its themes, while at the same time indicating future developments. The second is a paper from 2008; on closer inspection, one can observe neural networks, bankruptcy prediction and credit scoring, subjects quite similar to the first document in regard to the average value. Considering the aforementioned, it is perhaps possible to predict future developments by finding such patterns and projecting them to later years, a separate research topic worth pursuing.\nAlongside document relevance, it is also important to determine the same for the references, with the information presented in Figure 24. The number of references is large, 62517 in total, indicating extensive foundational knowledge, which is not surprising considering we are dealing here with the intersection of three separate disciplines, AI, finance, and entrepreneurship. Going from references to documents, in regard to sources, we can also observe a transition from, let's say, economics sources to sources of a more computing nature, a transition most likely made possible by AI entering finance and entrepreneurship.\nThe total citation sum is substantially higher here than in the document analysis, as is the age of the references, an expected outcome, since older, foundational knowledge is more relevant. The distribution is Pareto-like, an obvious transition from the linear distribution present in the document analysis, indicating a crystallization of the best through time. The expected value reveals that three references are more relevant than they seem, and these are also the youngest references of the entire group: two from 2005 and one from 2007 (a review document).
A closer inspection reveals bankruptcy prediction in the context of computing and artificial intelligence -aligning with the findings of the document analysis in Figure 23: the future here is AI, and this could have been predicted for some time now.\nThe last analysis in this context is reference spectroscopy, found in Figure 25. The first reference dates from 1764, most likely a document by Richard Price [145] communicating Thomas Bayes's doctrine of chances to John Canton. Bayes never published this great work before he died; Price, to whom Bayes's manuscripts were passed, found it of high merit and well worthy of preservation, and hoped to communicate the find to the Royal Society, of which Bayes was a member, and thus this great work was published [145][146][147][148]. Considering that we are dealing here with economics, finance, machine learning and, generally speaking, everything AI, it is no surprise, especially given the importance of probability in AI and computing in general, that the foundational knowledge would begin with Bayes and 1764.\nAfter that first paper, there is a reference here and there, yet considering future developments, nothing of greater note happened until around the 1930s, when both the number of references and the citations started to grow more strongly -and it took another cca. 30 years, until around 1964, for growth never seen before to begin, at first in an approximately linear fashion, after which came an exponential explosion that has continued to the present day.\nIt took significantly more than half the period from 1764 until 2023 for references to enter higher and higher growth, indicating that aside from some older foundational knowledge, this discipline is largely based on recent knowledge, which is most likely, at least in part, a consequence of timely advances in economics and computer science, with new discoveries consequently building on the shoulders of giants (supported by a comparison of Figures 2 and 25). The citation curve follows the reference curve and presents a case for relevance: references are moderately to highly cited, and as time passes increasingly highly cited, indicating relevance and most likely the citation practice of the authors working in the discipline." }, { "figure_ref": [ "fig_7", "fig_8", "fig_8", "fig_8" ], "heading": "Words Data Analysis", "publication_ref": [ "b147", "b148", "b149", "b150", "b151", "b152" ], "table_ref": [ "tab_3", "tab_3", "tab_3", "tab_4", "tab_3" ], "text": "One of the objectives of the research is to find prominent topics at the intersection of AI, finance, and entrepreneurship, and this analysis, the analysis of words, allows us to accomplish that goal, together with the other related analyses in this research.\nAs seen in Table 13, among the terms of a more concrete nature, the most relevant topics in terms of computing are deep learning, neural network, support vector machine, blockchain, and decision tree. Among topics that would be classified into an economic group, we have bankruptcy prediction, credit scoring, firm performance, business failure, and fraud detection.\nThe aforementioned topics indicate the focus of the published documents and authors, which are most likely linked to research projects with the corresponding institutions and as such speak to a much wider field of thought, interest, and influence.
Such a time slice is also useful as an indication of potential future developments and as a source of information for authors trying to ascertain hot topics, areas of potential major developments, and other enabling factors.\nAs useful as a single point in time is, one would also like to examine the entire research period, to have more robust information and a foundation for reasoning about prediction as well; for the top 10 terms, and in a cumulative manner, this can be seen in Figure 26. Computer science terms are in a substantial lead, indicating that even though the application side lies in economics, the documents have a heavy focus on computing and most likely on its improvement.\nUp until about 2003, the listed terms see slow but steady growth, indicating development and interest. From then onward there are two main periods: up to around 2018, and from then until the present day. All terms see high growth in occurrence, just at different stages in the timeline, most likely fueled by research contributions and the influence of the world we live in.\nThere are some obvious extremes, two of which coincide with bankruptcy prediction (appearing twice; these occurrences need to be taken together) and neural network (also appearing twice; likewise to be taken together). Both of these saw substantial interest around the start of the century, with occurrence skyrocketing around the year 2008 -most likely driven by the well-known financial crisis of 2007-2009 together with the increase in real estate prices and the accumulating debt of the decade [149]. Such a research trajectory could have been an indication of things to come.\nAnother such event is the enormous interest in AI and machine learning around the year 2018, as articulated before, most likely caused by huge technological advances in that particular area of computing coupled with increased interest in, and availability of, such research results in various forms. The third extreme point, linked to the second and happening at about the same time, is the rise of deep learning (which needs to be merged with the term learning) -this rise is extremely steep, especially considering that it began from a very low point, a logical consequence of deep learning being built on neural networks and as such linked to their relevance.\nEven though deep learning is a paradigm that is far from new, with beginnings stretching all the way back to 1962 [150,151], it took a long time for the approach to mature, at least in this discipline, to find its right application and a way into society -referring of course to the recent advances of deep learning in natural language, communication, image generation, programming, etc.\nAs a consequence of the events described, other areas are being researched and influenced as well, such as data mining and credit scoring, which themselves command substantial occurrence, just not as high as the other terms. This indicates that not everything is contained in the aforementioned terms; these likely play a supportive/collaborative role, and as seen in Table 13 there are other tools and areas of research as well -considering the need for data combing and scoring, this was expected.\nA continuation of the previous analysis, giving insight for particular years, is presented in Figure 27.
On the x-axis one can observe a specific year and on the y-axis a particular term; so as not to repeat what has already been said, we direct the reader to Figure 27 and the previous analyses.\nBeyond what has already been stated, a few terms deserve mention: case-based reasoning, which deals with solving new problems by employing knowledge from problems previously solved, an indication of a field coming of age; ensemble, most likely referring to ensemble learning, which uses multiple models/algorithms to solve a problem, an indication of a field substantially developed; self-organizing map, an artificial neural network used for reducing the dimensionality of data, invented in 1982 [152][153][154] and as such an indication of practical relevance in the observed discipline; discriminant analysis, a method used to find features that separate classes of objects, an indication, together with a number of other terms, of statistical relevance in the discipline; and corporate governance, a management term describing governance concerned with delivering long-term success to a company, an indication that technology is being recognized as one of the factors of importance to the subject.\nThe analysis in Figure 27 also allows for longitudinal inspection, and as seen, considering that the starting year of the analyzed corpus is 1991, it took almost a decade for terms to achieve a noteworthy frequency, a logical consequence of a young and developing field trying to crystallize and establish its foundations.\nIn regard to median frequency, the first stretch of the timeline was marked by expert systems, a simpler form of artificial intelligence based on if-then rules -small beginnings considering the state of affairs in the year 2023, but a start, with the median high point in 2008. Then we come to a period where various disparate approaches found their way into the fray: genetic algorithms, case-based reasoning, discriminant analysis, neural networks, support vector machines, etc., with the median high point being 2012 with the neural network term. This period also sees the economic aspect entering with business failure prediction, the start of a deeper interconnection, found together with case-based reasoning and most likely linked with various cases and problems in both economics and computer science.\nFurther on in the timeline, we see a proliferation and expansion of the terms of the past, with decision support systems appearing in 2015 and corporate governance in 2014 -an expansion of the knowledge and a rise to the governance level, important milestones both scientifically and practically. The high point here is in 2014 with bankruptcy prediction, a strong indication of interdisciplinarity, most likely collaboration, and perhaps other elements as well.\nFrom 2016 onward we have a strong presence of computing, economics and statistics mingling together, with ensemble methods appearing and elucidating the now substantial maturation of the discipline and its research contributions. This period ends in 2019 with bankruptcy and financial distress prediction, while its median high of 90 comes in 2017 with data mining, a value not far ahead of the 77 reached in 2019 with bankruptcy. The period started with computing and statistical terms but ended with economics and financial distress.
If previous analyses taken together with this one are of any indication, it seems that the financial waters of the future are wavy and muddy.\nThe last period of time it seems starts around 2020, and as here we most likely have a substantial data gap we can project it until 2023 and be uncertain about the interpretation of it. What can be said with some certainty is that the discipline is continuing in the same/similar direction, with deep learning becoming a strong factor and natural language processing having a role as well -these will most likely have a significant impact on the technological side, but as the history and necessity tells us their inclusion into finance and entrepreneurship will not be abstained from. Sorted according to search date, in ascending order. G and W represent total sums for Google (Table 3) and Web of Science (Table 4), respectively 1 First part of the analysis presented in the table can be seen in Table 3 2 The WoSCC search was conducted using three combinations of keywords that gave the largest number of relevant results in the Google Scholar search 3 No. of documents in the initial screening after removing duplicates 4 No. of selected relevant documents 39 \n(\"artificial intelligence\" OR \"machine learning\" OR \"deep learning\" OR \"soft computing\" OR \"neural network*\" OR \"natural language processing\") AND (\"debt*\" OR \"loan*\" OR \"venture capital*\" OR \"venture fund*\" OR \"angel*\" OR \"equit*\" OR \"bootstrap financ*\" OR \"bootstrapping\") AND (\"SME\" OR \"SMEs*\" OR enterprise* OR business* OR compan* OR firm* OR entrepreneur*) 736" }, { "figure_ref": [], "heading": "(b)", "publication_ref": [], "table_ref": [], "text": "(\"artificial intelligence\" OR \"machine learning\" OR \"deep learning\" OR \"soft computing\" OR \"neural network*\" OR \"natural language processing\") AND (\"security token offer*\" OR \"initial coin offer*\" OR crowdfund* OR kickstart* OR \"peer-to-peer lending\" OR \"peer-to-peer loan\")\n(\"artificial intelligence\" OR \"machine learning\" OR \"deep learning\" OR \"soft computing\" OR \"neural network*\" OR \"natural language processing\") AND (\"SME failure\" OR \"SMEs* failure\" OR \"enterprise* failure\" OR \"compan* failure\" OR \"business* failure\" OR \"firm* failure\" OR \"entrepreneur* failure\" OR bankruptcy)\n(\"artificial intelligence\" OR \"machine learning\" OR \"deep learning\" OR \"soft computing\" OR \"neural network*\" OR \"natural language processing\") AND (\"SME valuat*\" OR \"SMEs* valuat*\" OR \"enterprise* valuat*\" OR \"business* valuat*\" OR \"compan* valuat*\" OR \"firm* valuat*\" OR \"entrepreneur* valuat*\" OR \"SME success*\" OR \"SMEs* success*\" OR \"enterprise* success*\" OR \"business* success*\" OR \"compan* success*\" OR \"firm* success*\" OR \"entrepreneur* success*\" OR \"SME performance*\" OR \"SMEs* performance*\" OR \"enterprise* performance*\" OR \"business* performance*\" OR \"compan* performance*\" OR \"firm* performance*\" OR \"entrepreneur* performance*\")" }, { "figure_ref": [], "heading": "547", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "Sorted according to search query, in ascending order 1 Continuation of the analysis presented in the table can be seen in Table 6 2 Topic niches identified in the phase of preliminary search and initial screening of the research field presented in detail in Subsection 3.2 of the paper 3 The search was updated and finalized, with the data being exported, on May 5, 2023 4 Total number of records 40 " }, { "figure_ref": [], 
"heading": "2", "publication_ref": [], "table_ref": [], "text": "(\"artificial intelligence\" OR \"machine learning\" OR \"deep learning\" OR \"soft computing\" OR \"neural network*\" OR \"natural language processing\") AND (fintech OR \"financ* technology\") AND (\"SME\" OR \"SMEs*\" OR enterprise* OR business* OR compan* OR firm* OR entrepreneur*) 166" }, { "figure_ref": [], "heading": "(a)", "publication_ref": [], "table_ref": [], "text": "(\"artificial intelligence\" OR \"machine learning\" OR \"deep learning\" OR \"soft computing\" OR \"neural network*\" OR \"natural language processing\") AND audit* AND (\"SME\" OR \"SMEs*\" OR enterprise* OR business* OR compan* OR firm*) 375" }, { "figure_ref": [], "heading": "(a)", "publication_ref": [], "table_ref": [], "text": "(\"artificial intelligence\" OR \"machine learning\" OR \"deep learning\" OR \"soft computing\" OR \"neural network*\" OR \"natural language processing\") AND (\"accounting\" OR \"accountant\" OR \"financ* statement*\") AND (\"SME\" OR \"SMEs*\" OR enterprise* OR business* OR compan* OR firm* OR entrepreneur*) 696" }, { "figure_ref": [], "heading": "(a)", "publication_ref": [], "table_ref": [], "text": "(\"artificial intelligence\" OR \"machine learning\" OR \"deep learning\" OR \"soft computing\" OR \"neural network*\" OR \"natural language processing\") AND (\"fraud detect*\" OR \"financ* fraud\" OR \"accounting fraud\") AND (\"SME\" OR \"SMEs*\" OR enterprise* OR business* OR compan* OR firm* OR entrepreneur*) NOT \"credit card*\" 258" }, { "figure_ref": [], "heading": "(b)", "publication_ref": [], "table_ref": [ "tab_6", "tab_8", "tab_7" ], "text": "(\"artificial intelligence\" OR \"machine learning\" OR \"deep learning\" OR \"soft computing\" OR \"neural network*\" OR \"natural language processing\") AND (\"financ* management\" OR \"financ* planning\" OR \"financ* decision*\" OR \"financ* analys*\" OR \"financ* sustainability\" OR \"financ* distress\" OR \"financ* risk\" OR \"financ* constraints\") AND (\"SME\" OR \"SMEs*\" OR enterprise* OR business* OR compan* OR firm* OR entrepreneur*)\n(\"artificial intelligence\" OR \"machine learning\" OR \"deep learning\" OR \"soft computing\" OR \"neural network*\" OR \"natural language processing\") AND (\"demand predict*\" OR \"demand forecast*\" OR \"predict* demand\" OR \"forecast* demand\" OR \"price* predict*\" OR \"price* forecast*\" OR \"predict* price*\" OR \"forecast* price*\" OR \"salar* predict*\" OR \"salar* forecast*\" OR \"wage* predict*\" OR \"wage* forecast*\" OR \"predict* salar*\" OR \"forecast* salar*\" OR \"predict* wage*\" OR \"forecast* wage*\") AND (\"SME\" OR \"SMEs*\" OR enterprise* OR business* OR compan* OR firm* OR entrepreneur*) 690 6148\nSorted according to search query, in ascending order 1 First part of the analysis can be seen in Table 5, with the continuation of the analysis presented in Table 7 2 Topic niches identified in the phase of preliminary search and initial screening of the research field presented in detail in Subsection 3.2 of the paper 3 The search was updated and finalized, with the data being exported, on May 5, 2023 Sorted according to search query, in ascending order. D and B are representing duplicates and retracted papers, and number of papers for bibliometric data analysis (2694 -804), respectively. 1 Previous part of the analysis can be seen in Table 6 2 Topic niches identified in the phase of preliminary search and initial screening of the research field presented in detail in Subsection 3.2. 
The search was updated and finalized, with the data being exported, on May 5, 2023. 3 Selected Web of Science indexes, presented in Figure 1 4 Performing reading of the title, keywords and abstract 2 Given by an author in the paper. " }, { "figure_ref": [], "heading": "Year of Publication", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Number of Articles", "publication_ref": [], "table_ref": [], "text": "Fig. 2 Annual Scientific Production. The year 2023 is not relevant for discussion and interpretation as it is not over and full results are not in yet, however unlike for conferences where it can take quite some time for them to be indexed in Web of Science, journals are indexed quickly and the year 2022 is relevant, which the data itself confirms, as the peak is exactly in 2022 and follows the curve trend, only in 2023 there is an unusual drop in a number of published articles. This drop is very likely only due to incomplete data for a year in progress, and the increasing trend will probably continue, pegged on an ever-increasing explosion of application of artificial intelligence methods. " }, { "figure_ref": [], "heading": "H-index", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Affiliations", "publication_ref": [ "b131", "b136" ], "table_ref": [], "text": "Fig. 17 Affiliations Impact (index h, proposed by Hirsch in 2005 [132] for measuring not only citations but output also, the definition of which is, \"the number of papers with citation number ≥ h\"), ending in July 2023 with data obtained from Crossref [138] and standard index h calculated by Publish or Perish [137]. As affiliation names are excessively long, they are abbreviated, with a legend being given below the figure. This analysis was performed for the entire publication periodanalysis for the last 10 years, from 2013 -2023, can be seen in Figure 18. In order to make the figure more compact, the publication source was transferred to the figure legend below -with the link presented as well, so as to make the transition from the analysis to the publication as easy as possible. Entries are sorted according to the total citation sum. The average yearly citation was calculated in a conservative manner by adding one additional year to the time span of the calculation, e.g. for Kumar In order to make the figure more compact, the reference source was transferred to the figure legend below -with the link presented as well, so as to make the transition from the analysis to the reference as easy as possible. Entries are sorted according to total citation sum, with the analysis being performed on 62517 reference corpus. The average yearly citation was calculated in a conservative manner by adding one additional year to the time span of the calculation, e.g. for Altman EI, 1968 In order to ascertain main themes and achieve a more fine-grained result an analysis of co-occurrence is also presented in Figure 28, together with density analysis in Figure 29. Main themes, in line with analyses performed thus far, are machine learning, artificial intelligence, and bankruptcy prediction (which needed to be merged with bankruptcy). In itself knowing only these themes is not that useful, fortunately, via co-occurrence one can determine clusters and links as well." 
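Before examining the clusters themselves, a minimal sketch of the full-counting keyword co-occurrence construction that, as stated in the figure captions below, underlies this analysis: every pair of author keywords appearing together in a document adds one to the link weight, and terms below the occurrence threshold of 5 are dropped. The keyword lists here are hypothetical placeholders, not the actual corpus.

```python
from itertools import combinations
from collections import Counter

# hypothetical author-keyword lists, one list per document
documents = [
    ["machine learning", "bankruptcy prediction", "ensemble learning"],
    ["machine learning", "credit scoring"],
    ["artificial intelligence", "bankruptcy prediction"],
]

MIN_OCCURRENCE = 5  # the threshold used in the analysis above

occurrence = Counter(k for doc in documents for k in set(doc))
links = Counter()
for doc in documents:
    # full counting: every co-occurring pair within a document contributes 1
    for pair in combinations(sorted(set(doc)), 2):
        links[pair] += 1

# keep only terms (and links between them) that pass the threshold;
# with this toy corpus nothing survives, while the real corpus retains 231 terms
kept_terms = {t for t, n in occurrence.items() if n >= MIN_OCCURRENCE}
kept_links = {p: w for p, w in links.items() if p[0] in kept_terms and p[1] in kept_terms}
print(len(kept_terms), "terms,", len(kept_links), "links, total strength", sum(kept_links.values()))
```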
}, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "NYU", "publication_ref": [ "b132", "b133", "b132", "b133", "b132", "b133" ], "table_ref": [], "text": "Within the machine learning cluster, prominent themes are firm performance, crowdfunding, natural language processing, sentiment analysis, and decision support systems -indicating, it seems, the need on one side and the context on the other. Within artificial intelligence, prominent themes are big data, artificial neural network, blockchain, peer-to-peer lending, fin-tech, accounting, auditing, finance, and SMEs (small and medium-sized enterprises) -combining new technologies and new ways of conducting business with entrepreneurship and old questions of interest, indicating a changing economic environment and an adaptation to the new. Within bankruptcy prediction, prominent themes are ensemble learning, boosting, and imbalanced data -a cluster concerned with using the state of the art for business purposes while battling data issues.\nThere are other large themes of note, like credit scoring (whose cluster includes genetic programming, random forests, xgboost (most likely extreme gradient boosting), etc.), neural networks (which should be merged with the neural network cluster; it includes data mining, demand forecasting, classification, random forest, financial distress prediction, support vector machine, feature selection, logistic regression, etc.), artificial neural networks (mergeable with the other neural network clusters; it includes support vector machines, decision trees, genetic algorithms, rough sets, corporate failure, etc.), deep learning (whose cluster includes optimization, fraud detection, long short-term memory, predictive models, feature extraction, etc.), and genetic algorithm (whose cluster includes prediction, financial risk, supply chain finance, big data analytics, etc.).\nThis is a diverse set of clusters with a strong neural net presence (out of 13 clusters, a neural network in one form or another can be found in 7). There are clusters geared more toward computing, but also those geared perhaps more toward economics, most likely a result of the scientific contributions and author backgrounds. It is also worth noting that the clusters are generally quite intertwined, both through co-occurrence and through themes appearing in different clusters, so that a theme can be part of one cluster or of multiple clusters, while perhaps also having a cluster where it is the dominant one -indicating relevance to the cluster and the field (e.g. neural network, genetic algorithm, support vector machine, bankruptcy prediction, business failure, fraud detection, credit risk, decision support).\nOut of 1890 documents, 231 terms were extracted, with an effort made to strike a balance between quantity and quality. Considering 2490 links totaling 4258 in strength, and with term occurrence going as high as over 300, the result is a compact occurrence network that describes this discipline at the intersection of AI, finance, and entrepreneurship in a relevant manner, both generally and in depth, enabled by the extensive document corpus.\nBefore this section is concluded, we still want to determine the age overlay, presented in Figure 30. Fig. 28 Authors Keywords Co-occurrence (determined according to documents in which items co-occur).
The list of relevant terms was generated using authors' own keywords, by VOSviewer [133,134] (the logo was removed so as to make the figure content larger and easier to see), for the entire document lifespan of the data. In order for a word to appear in the figure, its occurrence needs to equal at least 5 -links between keywords were fully counted (every co-occurrence is equal in weight). The analysis extracted 231 relevant terms, in 13 clusters, with 2490 links totaling 4258 in strength. Every colored blob represents a term and its size depends on the occurrence count, while graph edges represent co-occurrence, with thickness representing how strong the link between terms is. Colors represent clusters, tightly connected subgraphs of nodes and edges, that is of terms and the links between them.\nFocusing on the last decade, as per the set parameters, themes seem to have undergone a change, from more contextualized at the beginning of the decade to more technological at its end, most likely a consequence of the popularity of artificial intelligence and related themes, so that the focus went from application in economics to an emphasis on computing. General terms like machine learning and artificial intelligence have a dominant presence, perhaps indicating substantial hype, with the emphasis on specific methods being less of a factor than one would hope.\nThe network can also be deceptive if not meticulously reasoned about; for example, it seems that the presence of neural network has diminished, but the deep learning theme is very strong, leaning heavily towards the end of the decade. Bankruptcy has lost its luster, but prediction, themes related to performance, and analytics as well are still relevant today.\nAside from deep learning, highly occurring recent themes are crowdfunding, blockchain, fin-tech, big data, and text mining. This is still a mix of computing, economics, and statistics, an indication of the discipline's direction but also of development and a changing environment. When looked at as a whole, it seems of late that there is an overly strong emphasis on computer science and statistics, with too little time spent on the problem domain, yet AI is not its own end goal, or at least it should not be. There is also a possibility that certain themes need maturation, or that collaboration between theory and practice, or between computing and economics experts, is not at the needed level. If recent themes are an indication, then a new kind of economics is in development, both financial and social.\nFig. 29 Authors Keywords Co-occurrence Density (determined according to documents in which items co-occur). The list of relevant terms was generated using authors' own keywords, by VOSviewer [133,134] (the logo was removed so as to make the figure content larger and easier to see), for the entire document lifespan of the data. In order for a word to appear in the figure, its occurrence needs to equal at least 5 -links between keywords were fully counted (every co-occurrence is equal in weight). The analysis extracted 231 relevant terms, in 13 clusters, with 2490 links totaling 4258 in strength. A density map represents a heat map where areas shifted toward blue are cold and weak in occurrence, while areas shifted toward red are hot and strong in occurrence.\nFig. 30 Authors Keywords Co-occurrence (determined according to the number of documents in which items co-occur) Overlay.
The list of relevant terms was generated using authors' own keywords, by VOSviewer [133,134] (the logo was removed so as to make the figure content larger and easier to see), for the entire document lifespan of the data, with the overlay focusing on the last decade. In order for a word to appear in the figure, its occurrence needs to equal at least 5 -links between keywords were fully counted (every co-occurrence is equal in weight). The analysis extracted 231 relevant terms, in 13 clusters, with 2490 links totaling 4258 in strength. The overlay map represents a graph timeline in which nodes and edges shifted toward purple are old and part of the history, while areas shifted toward red are new/novel and part of the present." }, { "figure_ref": [ "fig_9", "fig_0", "fig_9", "fig_0", "fig_0", "fig_0", "fig_0", "fig_0", "fig_0" ], "heading": "Conceptual Data Analysis", "publication_ref": [ "b153", "b154", "b155", "b154", "b155", "b132", "b133", "b123" ], "table_ref": [ "tab_17", "tab_17" ], "text": "An analysis of authors' own keywords has been performed; it is now time to do the same for abstracts, and this can be seen in Figures 31 and 32, representing a longitudinal theme analysis. As we are dealing here with over 30 years of history, the analysis is divided into two parts, the first spanning from 1991 until 2014 and the second continuing until the end, that is until 2023.\nThe first period is 1991-1999, with themes concerned with neural networks, performance, prediction, and expert systems, in line with the previous analyses, and with a substantial prevalence of expert systems during the beginnings of the corpus data, while neural networks and financial performance are the most dominant, likely a consequence of later research outcomes.\nAt the beginning of the 21st century, we can observe a very strong presence of work dealing with financial data in tandem with artificial intelligence, primarily machine learning and neural networks -with prediction staying its course and, after years of research, case-based reasoning becoming a factor.\nThen, from 2005-2008, neural networks became quite the powerhouse, with nothing else coming even close, and this state would become a trend from then onward. It also seems that in this period research was concerned with breadth, as the number of themes is higher and more disparate than in previous periods, and we see input variables as substantial factors, supply chain, a continuation of some statistical themes, the appearance of the genetic algorithm, etc.\nThe last period analyzed in detail in Figure 31 is 2009-2014. Here we observe a consolidation of themes along with the mentioned neural network trend. For the first time support vector machine became one of the dominant themes, with the self-organizing map, a type of neural network, being noticeable as well. The period 2015-2023, analyzed in detail in Figure 32, is what comes next, and from the looks of it three postulates are awaiting: machine learning, artificial intelligence, and financial risk, thus building on the aforementioned.\nIn Figure 32 we begin by looking back to a period marked by neural networks, financial predictions and, it seems, the support vector machine coming out of hiding, an indication of how important it is to analyze data at different levels of abstraction.\nThe first period analyzed in detail, meaning over a shorter time span, is 2015-2018, and this period sees past themes receiving new life; the research is continuing, and improvements are being made.
There are, however, two additions of note: language processing and the hybrid approach. With all the new developments, hybridization is naturally something of interest, and the research outcomes, taken together with the need, have most likely made language processing a sought-after addition.\nBefore a conclusion can be made, there is still the period 2019-2020, with quite a large number of themes, a logical consequence of a period for which the data is perhaps not yet complete and in which an explosion of AI developments has taken place. As such, the general theme of AI is enormously prevalent, not that useful in terms of finding specifics, but an indication of the situation. The research is continuing, with information technology, SMEs (small and medium-sized enterprises), and short-term memory (most likely long short-term memory) indicating the role of information, innovation, technology, and entrepreneurship in the period, and probably in the years to come.\nThe last period of the entire analysis is 2021-2023, a period of significant data gaps; it is, however, not expected that the picture painted will be substantially different. Artificial intelligence is the norm, and specifically neural networks it would seem, a trend that started during the last part of the last decade of the 20th century and is still going strong; maturation, innovation and the environment have made AI a force to be reckoned with, a state most likely entrenched for some time. Supply chain is unusually strong, likely a result of the COVID-19 pandemic and its starting point in Wuhan, China [155]. Machine learning is still a factor, but it seems that the statistical approach is waning. Fig. 31 Thematic Evolution of Concepts -from 1991 until 2014. Time periods were selected for the reason of being one of the periods of interest from the analysis in Figure 3. As we are interested in concepts, authors' abstracts were used for the analysis, since by choosing bi-grams we were able to achieve improved descriptiveness of the clusters compared to keywords, with the number of words being 1000 and the minimum cluster frequency set at 20, so as to have high confidence in the results and pick those terms that are prevalent. The minimum weight index was set to the lowest value of 0.02, so as to explore in depth, while the clustering algorithm used was Walktrap, a random walk-based algorithm known for its ability to capture the community structure of a network with quality (based on the idea that short random walks tend to stay within the same community). [156,157] Colored rectangles represent clusters observed during the specified periods; the period is noted above the clusters, with the terms representing an overarching theme of the cluster -the size of the cluster represents its significance during the period: the bigger the cluster, the more important it is for the period. Clusters are typically linked to other clusters, either in the vicinity, that is right next to them, or to ones farther apart, jumping across periods, depending on which clusters the influence was exerted on. The thickness of an edge represents how strong the influence is: the thicker the edge, the stronger the influence. For the most highly cited documents per period, with data on the specific peaks, one should consult Appendix B. A word of caution: the analysis in, for example, Figure 28 was conducted with authors' own keywords, while the thematic evolution of concepts was performed with abstracts; therefore one should not compare terms in such a situation, but themes and ideas.\nBefore we present the intellectual structure data analysis there is one more thing to present here, namely artificial intelligence method occurrence by topic niches, in Table 14.
With this analysis, we have some sort of a lower bound on the methods used, as some uses might be hidden in the text itself. Artificial intelligence has been mentioned an enormous number of times, with specific methods mentioned far less often, perhaps an indication of authors projecting AI onto an area without dealing concretely with any method, and thus producing research and applications at a level substantially below the occurrence of the AI term.\nFig. 32 Thematic Evolution of Concepts -from 2015 until 2023, with the last time slice, 2021 until 2023, representing the future to come, for which the data is not yet complete. Time periods were selected for the reason of being one of the periods of interest from the analysis in Figure 3. As we are interested in concepts, authors' abstracts were used for the analysis, since by choosing bi-grams we were able to achieve improved descriptiveness of the clusters compared to keywords, with the number of words being 1000 and the minimum cluster frequency set at 20, so as to have high confidence in the results and pick those terms that are prevalent. The minimum weight index was set to the lowest value of 0.02, so as to explore in depth, while the clustering algorithm used was Walktrap, a random walk-based algorithm known for its ability to capture the community structure of a network with quality (based on the idea that short random walks tend to stay within the same community). [156,157] Colored rectangles represent clusters observed during the specified periods; the period is noted above the clusters, with the terms representing an overarching theme of the cluster -the size of the cluster represents its significance during the period: the bigger the cluster, the more important it is for the period. Clusters are typically linked to other clusters, either in the vicinity, that is right next to them, or to ones farther apart, jumping across periods, depending on which clusters the influence was exerted on. The thickness of an edge represents how strong the influence is: the thicker the edge, the stronger the influence. For the most highly cited documents per period, with data on the specific peaks, one should consult Appendix B. A word of caution: the analysis in, for example, Figure 28 was conducted with authors' own keywords, while the thematic evolution of concepts was performed with abstracts; therefore one should not compare terms in such a situation, but themes and ideas.\nIf one reads the bibliometric literature with documents trying to ascertain what methods are being published, one finds that such analyses are only partially useful, as they include terms such as artificial intelligence, machine learning, etc., broad terms describing either all of AI or large subdivisions of it, thus hiding the specific information that is sought. In this paper, we have taken a different approach by avoiding altogether such terms that are overly broad and in the end not that useful, keeping in mind that it is sometimes a struggle to differentiate between method, technique, algorithm, and even discipline subdivision, since bibliometrics deals with general data analysis; so as to resolve the issue, not miss certain uses, and avoid presenting an unpopulated analysis, anything that is close to a process/activity is mentioned in the analysis in Table 14.\nThe focus was on methods, techniques, and algorithms that are not from statistics, that is, on those items that are from computing, or computer science, e.g.
randomized algorithms, neural nets, etc., as such items seem the most relevant today, as seen in Figure 32, and will most likely be dominant in the near future as well -keeping in mind that these methods also rely to a certain degree on statistics. If an item's occurrence is below the minimum keyword occurrence threshold of 3, it is not included in the analysis, as such an item is at the level of rumor.\nAs confirmed by the analyses performed thus far, the most prominent branch is 1(c), which deals with valuation and prediction of performance, and which also contains the largest corpus of documents. If the number of documents is taken into consideration, then branch 3(b), financial planning, is also prominent in the use of AI methods, a branch linked to 1(c) -performance, finance, and planning are items of the highest importance to every private company, and it is therefore expected that companies would be most interested in leveraging new technologies in these areas first, with branch 3(a) following and 1(b) also in the mix.\nAs far as methods are concerned, neural networks and items linked to them, together with the support vector machine, are extremely dominant, with neural networks far ahead of everything else -these two are most likely driven by their success in achieving results, and likely by the hype as well. Nevertheless, the number of items in the analysis is substantial, 20 approaches in total, indicating an effort to tackle issues from different angles. These are the methods, using the term methods to encompass the items sought after: predominantly neural networks, together with statistical approaches, heuristics and meta-heuristics, all the way to programming paradigms. A number of approaches have a small footprint, especially if one looks at individual branches, yet these might be the first sparks, or perhaps an indication of the need to try the method in a different area.\nThe lack of application, e.g. in 1(a) and 2 (topic niches that have started to develop only recently), might also be an indication that collaboration between experts from computer science and economics is needed, as without that cooperation the modernization of economics, and specifically entrepreneurship, will be difficult, while some methods are naturally more in line with certain problems. There are potentially many reasons why an item is present, and present to a certain degree, or why it is not, with this analysis presenting the state of the art while at the same time being a potential enabler of future developments. The severe lack of application, aside from a few deep learning attempts, in FinTech (in the context of entrepreneurship) most definitely presents a point of interest and future research.\nBoth neural networks and the support vector machine have the most prominence in branches 1(c) and 3(b), making these a natural combination of branches in terms of financial interest and of methods that are propulsive and, it seems, provide the best results. Alongside these there is also the matter of ensemble learning and the genetic algorithm: on the one side combining multiple approaches in order to solve a problem, and on the other using a metaheuristic to optimize and to search, borrowing from nature so as to solve a problem brought about by nature.
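A small sketch of one way the lower-bound occurrence counts behind Table 14 could be assembled once each document carries a topic-niche label and an author-keyword list; the niche labels, keyword aliases, and records below are illustrative placeholders, not the actual extraction performed with VOSviewer.

```python
from collections import defaultdict

# illustrative mapping from method name to keyword variants counted as that method
METHOD_ALIASES = {
    "neural network": {"neural network", "artificial neural network", "deep learning"},
    "support vector machine": {"support vector machine", "svm"},
    "genetic algorithm": {"genetic algorithm"},
}

# hypothetical records: (topic niche label, author keywords of one document)
records = [
    ("1(c) valuation/performance", ["Neural Network", "bankruptcy prediction"]),
    ("1(c) valuation/performance", ["SVM", "credit scoring"]),
    ("3(b) financial planning",    ["deep learning", "financial distress"]),
]

counts = defaultdict(lambda: defaultdict(int))
for niche, keywords in records:
    kw = {k.lower() for k in keywords}
    for method, aliases in METHOD_ALIASES.items():
        if kw & aliases:  # the method is named at least once in this document's keywords
            counts[niche][method] += 1

# in the real analysis, occurrences below 3 are dropped as anecdotal
for niche, per_method in counts.items():
    print(niche, dict(per_method))
```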
AI methods, techniques and algorithms were extracted from documents, categorized according to aforementioned branches, by VOSviewer [133,134] via co-occurrence (determined according to the number of documents in which items co-occur) analysis for author keywords (so as to descriptively capture the content of the documents, as keywords plus are less comprehensive in terms of the actual content [125]) with the minimum number of keyword occurrence set to 3, so as to leverage precision and obscurity. Such analysis has produced clusters of term co-occurrences from which specific AI methods were then manually extracted and occurrence obtained. As a last few words, explainable AI (XAI) can be brought into the foreground, as it is one thing to obtain a quality result, while it is another to know how the algorithm got there, and sometimes that is important, reasoning from cause to effect. This is especially evident when one works in a situation with potentially devastating consequences, or in an area of high uncertainty, e.g. one might try to optimize business processes so as to achieve greater financial gain, yet without knowing why is the result the way it is, it is problematic to be sure in the result of a method and also more difficult to predict what will real consequences be." }, { "figure_ref": [], "heading": "ANN -Artificial", "publication_ref": [], "table_ref": [], "text": "On the algorithmic side, it also might be possible to design improved algorithms or get an idea of how to make something improved, or even design an algorithm from the beginning when one has more information about the problem being tackled, if one can understand how AI agent went about an issue and why it made such a move." }, { "figure_ref": [ "fig_0", "fig_0", "fig_1", "fig_0", "fig_0", "fig_1", "fig_2", "fig_0", "fig_0", "fig_2", "fig_1", "fig_0", "fig_0", "fig_0", "fig_0", "fig_0", "fig_1", "fig_0", "fig_2", "fig_1", "fig_1", "fig_1" ], "heading": "Intellectual Structure Data Analysis", "publication_ref": [ "b98", "b98", "b78", "b154", "b155", "b159", "b159", "b132", "b133", "b106", "b121", "b132", "b133", "b106", "b121", "b45", "b78", "b132", "b133", "b106", "b121", "b132", "b133", "b106", "b121" ], "table_ref": [ "tab_18", "tab_19", "tab_18", "tab_17", "tab_17", "tab_17", "tab_17", "tab_17", "tab_19", "tab_17", "tab_17", "tab_17", "tab_17", "tab_20", "tab_17", "tab_20", "tab_19", "tab_20", "tab_18", "tab_18", "tab_19" ], "text": "In order to ascertain the authors' contribution to the field and determine which references (both references and documents of the research corpus) are of foundational knowledge, a references co-citation network was performed, and presented in Figure 33. At first glance, one can observe that Altman (1968), Ohlson (1980), and Beaver (1966) are of the highest relevance, there is however a caveat here.\nIndeed, \"through the analysis of reference co-citation, the most frequently cited\" references \"are Altman (1968), Ohlson (1980) and Beaver (1966).\" [99] Yet \"none of their work is based on an artificial intelligence approach,\" but \"due to the fact that all aforementioned works are pioneer studies in the bankruptcy prediction field, the posterior authors tend to cite them in their papers with high frequency.\" [99] It is therefore of interest to present a broader outlook, and with that in mind analyses in Tables 15 and16 are conducted also. 
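The co-occurrence extraction described above can be made concrete with a minimal sketch. It is only an illustration under stated assumptions: the document keyword lists are hypothetical, and the actual counting in this paper was done by VOSviewer on the Web of Science export; the sketch merely reproduces the two ingredients mentioned, full counting of author-keyword co-occurrences and a minimum occurrence threshold of 3.

```python
from collections import Counter
from itertools import combinations

# Hypothetical input: one list of author keywords per document
# (in the actual analysis these come from the Web of Science export
#  and the counting is done by VOSviewer).
documents = [
    ["neural network", "bankruptcy prediction", "support vector machine"],
    ["neural network", "credit scoring", "bankruptcy prediction"],
    ["genetic algorithm", "bankruptcy prediction", "neural network"],
]

# Count in how many documents each keyword occurs.
occurrences = Counter(kw for doc in documents for kw in set(doc))

# Keep only keywords occurring in at least 3 documents,
# mirroring the minimum occurrence threshold used above.
kept = {kw for kw, n in occurrences.items() if n >= 3}

# Full counting of co-occurrences: every pair of kept keywords
# appearing together in a document adds 1 to that pair's weight.
cooccurrence = Counter()
for doc in documents:
    for a, b in combinations(sorted(set(doc) & kept), 2):
        cooccurrence[(a, b)] += 1

for (a, b), weight in cooccurrence.most_common():
    print(f"{a} -- {b}: {weight}")
```

From the resulting co-occurrence clusters, specific AI methods can then be picked out manually, exactly as done for Table 14.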
As Bibliometrix is inferior to VOSviewer in terms of large network visualization, both tools were used, first we present analysis from Bibliometrix in Figure 33, and afterwards from VOSviewer in Figure 34.\nAnalysis in Figure 33 presents two clusters, the first blue, with Altman (1968), Ohlson (1980), and Beaver (1966), and the second red -indicating two subdivisions within the observed discipline at the intersection of AI, finance, and entrepreneurship. In the blue cluster, other references of note are presented in Table 15, and as seen an overall analysis shows the dominance of the blue cluster. All the references are of an older type, and the journals published correspond to analyses performed thus far.\nHalf of the references are not a part of the research corpus, but are present in a reference form only, indicating relevance of both the old and the new, but also the relevance of a broader knowledge. Of the references part of the research corpus, in every instance branch VP (1(c) in Table 14) is present, thus indicating why the blue cluster has such dominance overall, as described when interpreting Table 14. With FP (3(b) in Table 14) and AIF (3(a) in Table 14) also being a factor and contributing to the relevance of the blue cluster, which was also of note in the methods analysis performed in Table 14.\nAs in the overall analysis, all the most relevant items are in the blue cluster, a special analysis of the red cluster is warranted and is thus presented in Table 16. The top three authors here are Tsai (2008), Nanni (2009), and West (2005). In the red cluster references are of a more recent nature, with one reference from 2017, indicating a discipline subdivision that is more of a continuation of the roots -confirmation of which is a list of journals, where Expert Systems with Application is a dominant force, in alignment with previous analyses. The prevalent theme here is also VP (1(c) in Table 14), with the absence of FP (3(b) in Table 14) and AIF (3(a) in Table 14), instead SEF (1(b) in Table 14) is present, a strong branch, but not to par with other prominent ones, thus such a cluster is not as relevant as the blue one is. The red cluster is sitting more on the recent achievements, both via year of publication and via branch of economics, and is therefore a cluster less relevant than the blue cluster.\nFig. 33 References Co-citation Network (determined according to the number of times references have been co-cited, i.e. cited together in a third item). As in Bibliometrix, it is difficult to ascertain relationships on large graphs, the network above was generated on 50 nodes, with isolated nodes removed and the minimum number of edges set to 1, in order to take into account the entire subgraph but without islands which are not being part of the greater field. The clustering algorithm used was Walktrap, a random walk-based algorithm known for its ability to capture with quality community structure of a network (based on the idea that short random walks belong to the same community). [156,157] Nodes are colored according to the cluster they belong to, while labels denote a reference (author, year, source). 
Reference size is determined according to the number of citations, with a higher number of citations represented by a bigger node, coupled with an edge whose size is determined as per co-citation; a thicker edge means there is a greater degree of co-citation.
(Table 15 notes: sorted according to reference relevance in terms of PageRank (approximating importance, assuming more important nodes have more links to them from other sources while taking into account that not every link has the same weight [161]) calculated by Bibliometrix, in decreasing order. 1 Reference label from Figure 33. 2 Clusters from Figure 33 - 2 (blue), 1 (red). 3 Subject branches as defined in Subsection 3.2 - if stated as 'ref', the reference is not part of the document corpus, but rather only a reference, and is thus not clustered into any branch. VP - Valuation of an entrepreneurial venture/Prediction of performance and/or bankruptcy; AIF - AI and accounting, auditing and detecting financial frauds; FP - Financial planning and other aspects of financial management.)
To ascertain a broader image, an analysis of the entire dataset was performed, together with the density analysis, and is presented in Figures 34 and 35. Considering a different tool was used here, some specific results are different, however, a general direction and conclusion is the same - confirmed as well by Table 17.
The top three references overall are still Altman (1968), Ohlson (1980), and Beaver (1966), as per the mentioned circumstances, no surprise here, an expected result. This time though, we are analyzing the entire dataset, which has resulted in five clusters.
Considering that the network has 353 nodes (that is, references), the number of links (that is, edges) is substantial, with the link strength also indicating a compact network. The reason for this lies in the fact that four out of five clusters predominantly cover one thematic niche - VP (1(c) in Table 14). In the yellow cluster are references related to authors who generally made a significant contribution to the development of bankruptcy prediction models, while the green and blue clusters cover newer references with the application of AI in the VP domain. The red cluster includes references with credit scoring topics, also dominantly leaning on the VP niche. Finally, the purple cluster (cluster with identification 5) is weakly linked to the rest of the network and in fact within itself as well (some links are not seen as per the defined variable of the VOSviewer to draw 1000 lines). It is a cluster with references covering newer topics in the intersection of AI-entrepreneurship-finance, such as for example crowdfunding, and the application of text analysis methods in finance.
The previous conclusions are confirmed by the References Co-citation Density Map in Figure 35. The lower left quadrant stands out the most and coincides with the yellow cluster in Figure 34. As said, these are influential references related to the development of bankruptcy prediction models throughout history.
Out of all references in Table 17 only one is not from the blue cluster in Figure 33, Tsai CF (2008), a reference that is most relevant in the red cluster, as seen from Table 16. Confirming the high relevance of the blue cluster in the entire network, foundational for the entire network, with the additional knowledge so to speak slowly making its place - also indicating how difficult it is to compete with the classics.
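As a rough illustration of how such a co-citation network and the PageRank ranking behind Tables 15-17 can be computed, the sketch below builds a small undirected, weighted co-citation graph with python-igraph and ranks its nodes. The reference labels and co-citation counts are hypothetical; the published figures and tables were produced with Bibliometrix and VOSviewer, not with this code.

```python
import igraph as ig

# Hypothetical co-cited reference pairs with co-citation counts
# (the real pairs come from the Web of Science reference lists).
pairs = [
    ("Altman 1968", "Ohlson 1980", 40),
    ("Altman 1968", "Beaver 1966", 35),
    ("Ohlson 1980", "Beaver 1966", 28),
    ("Tsai 2008", "Nanni 2009", 12),
    ("Tsai 2008", "West 2005", 10),
    ("Altman 1968", "Tsai 2008", 8),
]

# Build an undirected, weighted co-citation network.
g = ig.Graph.TupleList(pairs, directed=False, edge_attrs=["weight"])

# Weighted PageRank approximates reference importance, taking into
# account that not every co-citation link carries the same weight.
scores = g.pagerank(weights=g.es["weight"])

for name, score in sorted(zip(g.vs["name"], scores), key=lambda x: -x[1]):
    print(f"{name}: {score:.3f}")
```

On such a toy network the classic bankruptcy-prediction references naturally dominate the ranking, mirroring the pattern observed in the real data.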
In terms of year of publication and reference source, the situation in Table 17 closely resembles information found in Table 15, with conclusions being matched, aside from one mentioned exception.
(Table 16 notes: sorted according to reference relevance in terms of PageRank (approximating importance, assuming more important nodes have more links to them from other sources while taking into account that not every link has the same weight [161]) calculated by Bibliometrix, in decreasing order. 1 Reference label from Figure 33. 2 Clusters from Figure 33 - 2 (blue), 1 (red). 3 Subject branches as defined in Subsection 3.2 - if stated as 'ref', the reference is not part of the document corpus, but rather only a reference, and is thus not clustered into any branch. VP - Valuation of an entrepreneurial venture/Prediction of performance and/or bankruptcy; SEF - Sources of entrepreneurial finance.)
When results from Tables 15, 16, and 17 are combined, the trend from economics, prediction, etc. to the same in the strong (depending on the branch) context of computing is clearly seen; there is a transformation ongoing, the result of which is still in expectancy it would seem. Although not a focus of this research, every transformation brings its challenges, especially if digital technology is involved, and with such an emphasis in mind, an obvious lack of security and privacy content (as per authors' own keywords) is present, thus being a warranted research area in the context of individual, societal, social, and entrepreneurial aspects.
References are classified in different clusters, as per co-citation criteria and tool settings of course, with the obvious absence of the purple cluster 5. The dominance of cluster 4, a cluster we could say corresponds to the blue cluster in Figure 33, is obvious, with 6/10 of a presence. Outside of references in this cluster, a number of references were in this larger network dispersed among clusters 2, 3, and 1 - identifying a subdivision and dispersion as per co-citation. However, as per branches of interest in Subsection 3.2, and as defined for the research, the prevalence of branches 1(c) and 3(b), with the highest document count as well, is obvious, especially so for 1(c) with its almost unanimous presence - confirming conclusions from the analysis in Figure 33. Half of the references are not a part of the research corpus, but are present in a reference form only, indicating relevance of both the old and the new, but also the relevance of a broader knowledge.
Fig. 34 References Co-citation Network (determined according to the number of times references have been co-cited, i.e. cited together in a third item). The figure was generated by VOSviewer [133,134] (logo was removed so as to make the figure content larger and easier to see) with the citation data ending in July 2023 - this analysis has a different ending date, as because of the problems with Bibliometrix [107,123] we had to again export the same data from Web of Science into a textual file (for the VOSviewer [133,134], since the analysis was conducted in BibTeX, but the VOSviewer does not support that file format). Compared to citations output by Bibliometrix [107,123], and within the top 10 references, there is almost no difference in reference citation count, only one citation difference here and there, with one position exchange for neighboring references which in Bibliometrix had the same citation count - results from both tools are almost identical.
Sub-network colors denote clusters, while node size depends on the number of citations, the bigger the citation count the bigger the node itself, with edges representing co-citations, with thicker links representing stronger co-citation. Node labels, naturally, denote a reference (author, year, source). Co-citations are fully counted (every co-occurrence is equal in weight), and as we are now plotting the full network, the minimum number of citations was set to 20, as outliers are of no interest here - the result of which was 353 focus references. The resulting network has 5 clusters, 37069 links, and a total link strength of 146241.
Fig. 35 References Co-citation Density Map (determined according to the number of times references have been co-cited, i.e. cited together in a third item). The figure was generated by VOSviewer [133,134] (logo was removed so as to make the figure content larger and easier to see) with the citation data ending in July 2023 - this analysis has a different ending date, as because of the problems with Bibliometrix [107,123] we had to again export the same data from Web of Science into a textual file (for the VOSviewer [133,134], since the analysis was conducted in BibTeX, but the VOSviewer does not support that file format). Compared to citations output by Bibliometrix [107,123], and within the top 10 references, there is almost no difference in reference citation count, only one citation difference here and there, with one position exchange for neighboring references which in Bibliometrix had the same citation count - results from both tools are almost identical. Labels, naturally, denote a reference (author, year, source). Co-citations are fully counted (every co-occurrence is equal in weight), and as now plotting the full network, the minimum number of citations was set to 20, as outliers are of no interest here - the result of which was 353 focus references. The resulting network has 5 clusters, 37069 links, and a total link strength of 146241. A density map represents a heat map where areas shifted to the blue are cold and weak in citations, while areas shifted to the red are hot and strong in citations.
In order to present the most relevant direct citations and permeating topics, a Historiograph analysis was performed, as seen in Figure 36. In spite of various different subdivisions of research, there is only one topic, as per the Historiograph analysis, in the research document corpus.
It crystallized sometime around 1993, with Coats (1993), and two significant documents in 1994, Wilson, and Altman. From then onward it jumped through a number of more prominent periods, with the most relevant documents being Dimitras (1996), Zhang (1999), Min (2005), Shin (2005), Kumar (2007), Tsai (2008), and Barboza (2017).
In order to obtain central keywords and themes for the definition of the topic, it is useful to sort historiograph documents by date, in both orders, to ascertain the beginning and what is happening at the cutting edge - alongside which citation count in descending order is also useful, so as not to miss anything in the middle.
Thus by analyzing metadata of the top 10 in every group of the three, it is possible to get the gist of the matter.
By doing the aforementioned one will observe the following aspects of note: financial distress, neural networks, bankruptcy prediction, financial diagnosis, business failure, industrial application, probabilistic approach, deep learning, machine learning, corporate governance, bench-marking, credit scoring, data mining, experimental approach, ensemble approach, statistical and intelligent techniques, support vector machine, etc.
Therefore a sentence-like definition of the topic might be: Artificial intelligence in the service of entrepreneurial finance with application; almost a fitting title for this research article, and quite the resemblance with the journal Expert Systems with Applications, which published so many of the documents in the corpus and in the Historiograph analysis (16/30).
(Table 17 notes: 1 Reference label from Figure 34, with 353 references and 5 clusters. 2 Clusters from Figure 34 - 5 (purple), 4 (yellow), 3 (blue), 2 (green), 1 (red).)
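The sorting just described - by publication date in both directions and by citation count in descending order - can be sketched in a few lines. The metadata table below is hypothetical (the real one comes from the Bibliometrix historiograph together with the Web of Science records, and the citation numbers here are placeholders); the sketch only shows how the three top-10 groups whose metadata was inspected are obtained.

```python
import pandas as pd

# Hypothetical historiograph metadata (doc label, year, citations).
docs = pd.DataFrame([
    {"doc": "Coats 1993",    "year": 1993, "citations": 180},
    {"doc": "Altman 1994",   "year": 1994, "citations": 260},
    {"doc": "Dimitras 1996", "year": 1996, "citations": 300},
    {"doc": "Tsai 2008",     "year": 2008, "citations": 410},
    {"doc": "Barboza 2017",  "year": 2017, "citations": 350},
])

top = 10  # top 10 in every group of the three

oldest = docs.sort_values("year").head(top)                        # the beginning
newest = docs.sort_values("year", ascending=False).head(top)       # the cutting edge
cited = docs.sort_values("citations", ascending=False).head(top)   # not missing the middle

for label, frame in [("oldest", oldest), ("newest", newest), ("most cited", cited)]:
    print(label, frame["doc"].tolist())
```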
These edges are directed, and therefore more appropriately called arcs, since they need to depict chronological relationship, and as such directed from right to left. The chosen number of nodes was 30, so as to have an appropriate sample for the core knowledge of the topic, with labels being in a short identification (first author, year of publication) format.\nlink strength, aside from document output, potentially factors like national, religious, possibly social, or perhaps state or project factors, etc.\nThere is also a question of the bridges between clusters, widening a collaboration further. The bridges themselves can represent a strong collaboration, e.g. Chinese Academy of Sciences and University of the Chinese Academy of Sciences (university under control of the Chinese Academy of Sciences), or Nankai University and Asia University -indicating that for bridges of such strength perhaps a factor beyond a scientific one is present, which could possibly have a negative influence on science and the advancement thereof. It is also evident that a large number of institutions are Chinese, or with a Chinese aspect, with the bridges also being dominated by such universities, indicating Chinese relevance to collaboration networks.\nIf we rank institutions by PageRank, taking into account number but also prestige, approximating importance, the situation is somewhat different and is presented in Table 18. With such important calculation, City University of Hong Kong is at the top, with Zhejiang Normal University and Rutgers State University following, indicating the same issue as with the analysis of document output, multiple factors need to be taken into consideration if one is to ascertain the situation, and here it seems that some institutions are in a more prestigious, so to say, collaboration than others. When clusters are observed 4 (purple in Figure 37) and 1 (red in Figure 37) are both of frequency 3, together being 6/10 of a presence, making those clusters in demand in the PageRank eyes -while cluster 3 (green in Figure 37) is following with frequency of 2.\nBy looking more closely into bridges and shortest paths, that is betweenness of which more is presented in Table 19, the top three institutions are Zhejiang Normal University, Asia University, and City University of Hong Kong, with 7/10 universities being Chinese one, confirming previous analysis of high relevance of Chinese universities, it seems that these universities are on collaboration points of influence. While Asia University, Dongguk University, and Rutgers State University are making an international presence so to speak. On the cluster side, clusters 4 (purple in Figure 37) and 3 (green in Figure 37) are almost completely dominant, with 4 having a frequency of 6, and 3 having a frequency of 3, indicating that these clusters are holding collaboration network afloat, and if one looks at Figure 37 this is correct.\nIt seems that Zhejiang Normal University, Asia University, City University of Hong Kong, Dongguk University, and Rutgers State University are those that are of most relevance generally, both in terms of collaboration influence and in terms of being at the crossroads of collaboration. 
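For readers who want to reproduce this kind of ranking on their own data, the sketch below shows the two measures side by side on a toy co-authorship network, with Walktrap used for clustering as in the figures. The institution names and link weights are hypothetical, and the actual tables in this paper were computed with Bibliometrix; this is only an illustrative sketch of PageRank, betweenness, and Walktrap as such.

```python
import igraph as ig

# Hypothetical co-authorship links between institutions, weighted by the
# number of co-authored documents (the real network is the one in Figure 37).
links = [
    ("Zhejiang Normal Univ", "City Univ of Hong Kong", 5),
    ("Zhejiang Normal Univ", "Asia Univ", 3),
    ("City Univ of Hong Kong", "Rutgers State Univ", 2),
    ("Asia Univ", "Nankai Univ", 4),
    ("Rutgers State Univ", "Dongguk Univ", 1),
]

g = ig.Graph.TupleList(links, directed=False, edge_attrs=["weight"])

# Walktrap clustering (short random walks tend to stay inside a community),
# the same algorithm family used for the clusters in the collaboration figures.
clusters = g.community_walktrap(weights=g.es["weight"]).as_clustering()

# PageRank approximates importance/prestige of a node; betweenness counts
# how often a node sits on shortest paths between others, i.e. its bridge role.
pagerank = g.pagerank(weights=g.es["weight"])
betweenness = g.betweenness()

for v in g.vs:
    print(f"{v['name']}: cluster={clusters.membership[v.index]}, "
          f"pagerank={pagerank[v.index]:.3f}, betweenness={betweenness[v.index]:.1f}")
```

The same two measures then carry directly over to the country-level networks discussed below.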
In terms of the intersection of clusters, it is the purple (4 in the tables) cluster that is a constant, and it seems the most prominent of the three forces, as this is the cluster without which the collaboration network would be most disturbed - it is a cluster comprised of a mix of international universities with a strong Chinese presence, while the green cluster (3 in the tables) is tightly following.
The aforementioned is significant because if other institutions that are on the margins of collaboration in Figure 37 want to improve on that particular point, they can either expand their own cluster on an individual basis, or link with an institution/cluster of dominance and more quickly expand their reach, thus making those clusters/institutions that are of highest relevance desirable and influential.
(Table 18 notes: sorted according to PageRank (approximating importance, assuming more important nodes have more links to them from other sources while taking into account that not every link has the same weight [161]), in decreasing order. 1 Paired inversely (decreasing → increasing) with PageRank calculated by Bibliometrix, so as to enable ease of use and reduce complexity. 2 Clusters from Figure 37 - the network consists of a total of 14 clusters, paired through the institution's name.)
Fig. 37 Collaboration Network by Institution. With this analysis, one determines clusters and strengths of collaboration - relatedness of items is calculated by the number of co-authored documents. The network was created for 100 nodes, so as to delve beyond just core clusters, and still make the figure useful, while removing isolated nodes and with the minimum number of edges being 1 - this will ensure not to focus on those that are of no interest on the one side, and to consider all those that are collaborating. The clustering algorithm used was Walktrap, a random walk based algorithm known for its ability to capture with quality community structure of a network (based on the idea that short random walks belong to the same community). [156,157] As for the visual matter, the colored blob represents the institution with a label denoting the institution's shortened name. The color of the blobs defines cluster, while the size of the blob informs about the institution's document output, the bigger the blob, the more documents an institution has produced. Institutions are collaborating, and this has been represented by graph edges, the thicker the edge is, the more co-authorships there are, and the stronger the link is.
The next analysis is by country, presented in Figure 38. This analysis was performed on a smaller number of countries, 50 to be exact, as smaller analyses can give insight that a larger picture cannot. By looking only at those most prominent, one can ascertain the relationships between those elements only, without other items interfering. Therefore by comparing smaller and larger networks, we can determine how strongly a country is in a particular cluster, and what are the countries' leanings, while also determining the most influential players with a small network, and general collaboration with a large network.
By observing Figure 38 it is obvious that there are two big collaboration clusters, green and blue, and that there are countries around which it would seem much of the collaboration revolves, mainly China and the United States of America.
Therefore on this level of detail, in a situation where these countries are involved, collaborations will most likely behave in accord with the presented manner.\nIt is also clear that collaboration between the green and blue clusters is substantial, with many paths for jumping from one cluster to the other, there is no country without which collaboration would be severely diminished, as one might go the other route and establish collaboration that way. Indeed there is an elephant in the room that can't be ignored, namely collaboration between China and the USA, which is so strong that it dwarfs every other collaboration in comparison -indicating an enormous amount of cooperation between these two countries, regardless of the involvement of other countries. Sorted according to betweenness (measuring the number of times an institution is on the shortest path in between other institutions, showing network bridges [162]), in decreasing order 1 Paired inversely (decreasing → increasing) with betweenness calculated by Bibliometrix, so as to enable ease of use and reduce complexity 2 Clusters from Figure 37 -the network consists of a total of 14 clusters, paired through the institution's name\nIf one looks at the clusters and countries' geographical location, and perhaps even geopolitical aspects, it seems that the network resembles the divide between East and the West (there is a significant presence of the West in the green cluster also), with Russia surprisingly being in the blue cluster -which might change over time? Is it possible that collaboration network by country has geopolitical significance? Could such an analysis perhaps be a tool in a prediction of future events, or is a scientific collaboration only a result of the events of the past, and what influence does such an element have on science?\nIn addition to China's and the USA's clusters, we have the issue of Lithuania, whose collaboration was not of a nature where the country would belong to a larger cluster, and therefore sits alone with collaboration to Denmark and Sweden, indicating a geographical link, and collaboration of a lower intensity. The last cluster of the network is the red one, with four countries quite close to one another in terms of geography, with Slovakia interestingly enough being the center of the cluster. This cluster is linked mostly to the blue cluster, with a number of connections to the green one also -an extension of sorts it seems to the blue cluster trying to make collaborations with others as well.\nBy turning to PageRank, taking into account not only the number of links but the relevance of the sources as well, presented in Table 20, aligning situation follows, with China and the USA, followed by the United Kingdom, being in the top three, respectively -with the United Kingdom having the high number of collaborations within and without its own blue cluster, an indication perhaps of the history and the Commonwealth. These are the countries that are as per PageRank involved in most so to say prestigious collaborations, and are thus a desirable collaboration unit of interest with potentially other benefits.\nThe number one cluster is 3 (green in Figure 38), with the next two positions belonging to cluster 2 (blue in Figure 38), however afterward there is a series of cluster 3 appearances starting with India following immediately after United Kingdom, with cluster 3 making 6/10 of a presence. 
It is therefore cluster 3 that is generally it would seem more influential, yet cluster 2 has 2/3 of a presence in the top three, an indication of the strength of both clusters, the influence of which is perhaps derived through disparate methods.\nBefore we present the global network of countries, a look at betweenness is warranted, presented in Table 21, and especially so as here it is more difficult to ascertain link-bridges via network analysis. Unsurprisingly United Kingdom is the top country here, positioning itself between the greats, likely a consequence of a Commonwealth and present environment. With China and the USA following, Italy and Spain not making the list, while the Netherlands and United Arab Emirates are significant it would seem in terms of presenting themselves as a bridge for collaboration -some are therefore in a good company but not as central, while others are not as influential but are central for collaboration fostering.\nThe overview now more clearly shows the dominant cluster, and it is the green one, number 3, with 6/10 of a presence, yet in PageRank analysis of Table 20 cluster 2 is strong in rank, with placements 1 (United Kingdom) and 3 (USA) being of its set of countries, while cluster 3 is by betweenness more at the back than before. It seems that cluster 3 has strength in numbers so to speak, while cluster 2 has strength in top positions, and so they are collaborating together, which seems logical for the reason of science, necessity, competition, etc.\nWestern countries are dominant, as in Table 20 there are 7 of them and in Table 21 there are 6 of them. This presence can also be seen in Figure 38 by observing the whole picture, an indication of past success and present circumstances, and no doubt a result of collaboration positioning. The question however is, what the future holds, is leading of the West still forthcoming, or is the trend changing, partially or more fully? What we can gather from this research is that future collaboration will be of a more multilateral nature.\nThe following analysis is the final analysis of the paper, presented in Figure 39, Table 22 and Table 23, and builds upon the previous analysis with a broader context in mind -here we analyze all the countries, 92 countries in total, and the entire collaboration network, with the results being especially interesting when compared to the analysis conducted on 50 nodes.\nThere are still two overly dominant clusters of collaboration, blue and red, with no other cluster being larger than three, indicating a generally large amount of collaboration divided primarily into two camps -and if there are other factors to this division it is difficult to state, yet geographical, and most like geopolitical, factors are obvious.\nComparing those two most prominent clusters, the red cluster with USA and China is larger, with 37 countries in total, while the blue cluster with India has 31 countries in total, at least in part a consequence of countries repositioning, from the previous analysis those would primarily be China, USA, and the United Kingdom -with substantial collaboration between these three clearly seen. With this analysis, one determines clusters and strength of collaboration -relatedness of items is calculated by the number of co-authored documents. 
The network was created for 50 nodes, so as to firstly determine the network in a situation where the choice of belonging to a cluster is more binary, at the same time removing isolated nodes and with the minimum number of edges being 1 - this will ensure not to focus on those that are of no interest on the one side, and to consider all those that are collaborating. The clustering algorithm used was Walktrap, a random walk based algorithm known for its ability to capture with quality community structure of a network (based on the idea that short random walks belong to the same community). [156,157] Similarly to the collaboration network for institutions, a colored blob represents a country with a label denoting the name. The color of the blob defines cluster, while the size of the blob informs about country document output, the bigger the blob, the more documents a country has produced. Countries are collaborating, and this has been represented by graph edges, the thicker the edge is, the more co-authorships there are, and the stronger the link is.
The global collaboration differs as compared to the local one, that is as compared only to those countries of the highest weight, presented in Figure 38. By trying to map clusters, the blue cluster from the previous analysis would correspond to the red, while the green cluster would correspond to the blue one here, and now by analyzing the changes we are coming to some striking realizations, with some being more easily explainable than others.
For example, China has made a move, from the blue (green in the previous analysis) cluster to the red (blue in the previous analysis) cluster, and is now in the same cluster as the USA, a move to the cluster of the USA and the United Kingdom, which would indicate a difference in local policy as compared to global policy. The same change was made by Korea and Japan as well, where the change could be perhaps explained by geopolitical reasons. There are however changes where additional information would be needed, as in the example of Norway, although it could be argued that Norway, together with some other countries, was at the edge, and so when additional nodes entered the calculation, the country was more closely aligned in terms of collaboration with the other cluster.
Cooperation between these two largest clusters is strong, especially with China, the USA, and the United Kingdom on the one side, and with France, India, and the United Arab Emirates on the other, etc. Aside from the cluster of Moldova and Romania, the collaboration network is, it seems, strongly connected, with collaboration being easily accessible.
(Table 20 notes: sorted according to PageRank (approximating importance, assuming more important nodes have more links to them from other sources while taking into account that not every link has the same weight [161]), in decreasing order. 1 Paired inversely (decreasing → increasing) with PageRank calculated by Bibliometrix, so as to enable ease of use and reduce complexity. 2 Clusters from Figure 38 - the network consists in total of 4 clusters, paired through the country name (USA denoting the United States of America).)
Both clusters have a number of co-clusters on their side of the network, with the leaning being towards the red cluster, while Hungary is for example even more dislocated than in the previous analysis, standing alone.
There is also an obvious extreme of Israel, standing behind the red cluster and linked to it by the collaboration with the USA, the only link this country has in this discipline and corpus of knowledge -with the overtone of using such analysis for ascertaining current state of affairs and possibly future events in the air, as it seems that science which should be independent in the search for the truth is showing signs of other influences, which need not be necessarily of a negative nature, yet there could be a negative influence.\nThe importance of collaborating countries ascertained by PageRank, as presented in Table 22, reveals almost identical results as in the previous analysis of Table 20, with Australia's somewhat diminished importance as per widening the scope, while Spain and Saudi Arabia have climbed the ladder. On the cluster side of things, both clusters, speaking of those largest ones naturally, are equal in presence, as China has changed the collaboration cluster, with cluster 1 (red in Figure 39) holding the top three positions -an equilibrium of sorts, with the red cluster holding the high ground, while the blue cluster (with identification 2) following in a series.\nBy bringing link-bridges into the foreground with betweenness calculation of Table 23, the situation is similar, but with the Netherlands and Canada not making the list this time, at least in the top 10, while Spain and Italy have shown themselves as highly relevant in terms of centrality -as compared to Table 21. The top three countries are the same, with this time all belonging to the cluster of the United Kingdom, red cluster in Figure 39 with identification 1 in the Table 23, thus further improving the collaboration strength of the blue cluster from the previous analysis.\nThe frequency of clusters is the same for both previous and current analyses, with a change happening in the positioning of clusters, and this time United Kingdom's cluster holds the front and back of the queue, while India's cluster has taken command of the middle. As difficult as it is to make a conclusion about the more relevant countries and clusters, the data shows that the United Kingdom's cluster always commands top positions in terms of both PageRank relevance and centrality, while the other hand, while cluster second to that one is strong in terms of collaboration, yet it is always around the middle of the list of top 10 -consequently indicating that whether close or far, the blue cluster in Figure 38 which approximately maps to the red cluster in Figure 39 represents a cluster that is more relevant as per document output, importance, and centrality to the collaboration network -in essence, there is a reason why this is so, collaboration is a result of activities preceding, with which we have dealt with in other sections of the paper." }, { "figure_ref": [ "fig_0", "fig_12" ], "heading": "From Bibliometrics to Showing Intelligent Behavior", "publication_ref": [ "b161", "b154", "b155", "b11", "b11", "b11", "b162", "b162", "b163", "b164", "b165", "b166", "b167", "b168", "b168", "b168" ], "table_ref": [], "text": "Can machines present intelligent behavior? A thesis postulated by Turing in the 1950s, as opposed to whether there is an actual intelligence, real mind, a consciousness [163] which are questions for philosophy and theology, are extremely difficult to determine, Fig. 39 Collaboration Network by Country -all countries. 
With this analysis, one determines clusters and strength of collaboration -relatedness of items is calculated by the number of co-authored documents. A network was created for all the nodes with the following setup, so as to generate a general picture, removing isolated nodes and with the minimum number of edges being 1, this will ensure no focus on those that are of no interest on the one side, and to consider all those that are collaborating (92 countries in total as per Bibliometrix calculation). The clustering algorithm used was Walktrap, a random walk based algorithm known for its ability to capture with quality community structure of a network (based on the idea that short random walks belong to the same community). [156,157] Colored blob represents a country with a label denoting the name. The color of the blob defines cluster, while the size of the blob informs about country document output, the bigger the blob, the more documents a country has produced. Countries are collaborating, and this has been represented by graph edges, the thicker the edge is, the more co-authorships there are, and the stronger the link is.\nand impossible to prove. However if one is dealing only with inputs and outputs, and has a focus on the result, the matter at hand is much more lighter, although difficult nonetheless.\nBy looking into AI and economics, the situation found is somewhat dire, although not unexpected, as the interdisciplinary link between new and innovative technology, and a field not accustomed to such rapid change, is difficult to make. The literature is filled with AI terms, and depending on the sub-field, algorithms implementing various techniques and methods, there is nonetheless a lack of empirical research into these, so as to critically appraise how well these financial technologies compare in terms of expected output. [12] The Most pressing issues are pitfalls of Machine Learning and AI in the process of prediction for the reason of biases -happening \"mostly in the areas of insurance, credit scoring and mortgages.\" [12] There is therefore a need for a turnaround, or a rethink, of how to employ these new technologies that are reshaping finance in a dramatic way. [12] Technology has actually always rapidly advanced in innovation from the dawn of digital computing in the 1940s [164], and it seems that the speed of that innovation/knowledge discovery is increasing -as either new technology or new knowledge discovered, often by that technology, is almost ubiquitous. Computer experts are accustomed to that, but rarely does a field of research embroil in these conditions. It is sometimes said that computing has not yet matured, as it constantly evolves at such speed, however after over 80 years of progress, whether in software or hardware, that argument is more difficult to make, as it seems that it is a feature of a field being uplifted not only by the influence from inside the field itself, but often from the outside as well -with the latest great leap being quantum computing, brought on the map by interference of quantum mechanics. [164][165][166] The foundational paradigm of AI is randomization, with its weights, probabilities, and outputs. On that foundation other ideas are grafted, thus making the advance in algorithms, techniques, and methods. 
Whether evolutionary computation, machine learning, or some other sub-field of AI, randomization is hard to avoid, as it is that factor that brings dynamic behavior and makes learning, pattern recognition, and greater adaptability possible. Beyond AI, randomization made possible two most widely known breakthroughs in quantum computing, Grover's search algorithm [167] and Shor's factorization algorithm [168], only bringing randomized algorithms to the center stage more than ever. If quantum computers are to be a reality, it seems that randomized algorithms will be a force to wield.\nTherefore in order to tackle the aforementioned problems of AI and finance, with the benefit not only for the wider economic field, but also for all interested parties, whether a computer expert, or someone entirely from another field, to bridge that gap, and further application of artificial intelligence in entrepreneurship as well, here we will review and establish this foundational paradigm in a concrete way. The most appropriate algorithm then to focus on is a Monte Carlo randomized algorithm. This type of algorithm corresponds well to the uncertain nature of difficult problems, with the added benefit of securing confidence in the solution.\nMonte Carlo algorithm represents a series of steps, and in a full sense an algorithm, that can in polynomial time, and with arbitrary probability, find an optimal solution for a given problem -such probability never reaches 1, but as we get ever closer to one, the confidence that optimal solution has been found grows. [169] By generalizing on the algorithm, we are coming to a paradigm of thought, a method for the creation of the Monte Carlo algorithm, whose implementation will depend on the problem being solved, and the steps are as follows.\nI An algorithm needs to work on the data, and that data needs to be organized, therefore we need to choose an appropriate data structure so as to tackle the problem. II Decide on the distribution through which you will obtain the desired solution and arbitrary confidence thereof. This distribution is used for generating random numbers, with which one makes decisions. The distribution typically used is uniform, as it often conforms to the problem well. III As per the chosen distribution, calculate the probability that a non-optimal solution has been achieved if an algorithm has been run only once. IV As per probability calculated for one run of the algorithm, calculate confidence that the optimal solution was output by an algorithm for n runs.\nThe question is, how would one go about implementing such a procedure, and the best way to demonstrate this is to show an example of the algorithm in existence. There is a \"quintessential problem of algorithmics and, more generally, of computer science\", namely Minimum Feedback Arc Set (MFAS). [170] The problem has many applications, in hardware design, machine learning, deadlock prevention, and cell apoptosis, just to name a few, and in spite of the importance and many attempts to find efficient and always optimal algorithm, this was not achieved. [170] The problem is NP-complete, NP-hard, and APX-hard, with the definition of the problem as follows: \"for a given directed graph G = (V, A), find the smallest subset A ′ ⊂ A such that G ′ = (V, A⧹A ′ ) is acyclic.\" [170] Let us observe Figure 40 and illustrate the issue of MFAS. In this figure, we have three people: John, Ellen and Tim. They are friends, and one day, they met, cordially greeted each other and struck up a conversation. 
After a while, when they exchanged enough information, it was decided that when they got home, they would use a landline to continue this communication, however, this communication would not proceed at will. John will communicate to Ellen three messages, but only when Tim sends him two. Ellen will communicate to Tim one message, but only when John sends her three. Tim will communicate to John two messages, but only when Ellen sends him one message. They agreed all was well, and then they said goodbye to each other. When they came home they were ready to continue communication, but nothing was happening. Soon they all realized that a mistake had been made in designing the protocol for communication, if things stay as they are, there will be no communication at all, never. In order to transmit their messages, every one of them first needs to receive information, but that will never happen, as they are in a deadlock, waiting for each other. As communication would not be happening, they all went out of their homes, to the previous meeting place. When they met, they again greeted each other, with a bit of a laugh, and a question, what now? Tim said, nothing, this protocol is of no use, we need something that we will use. So they were thinking, discussing, and suggesting new ways of communicating, but no good idea struck. After a while, John said, wait, the old protocol is not that bad, and the problem of starting our communication can be resolved -Ellen and Tim were all ears -to which John continued, all we need to do is for Tim to start our communication, and everything else will fall into place. Ellen was curious and asked why was that John. Because in such a way we will retain our communication and make the least amount of upset to our communication network, John replied. Tim and Ellen were amazed at John's solution to the problem, they all agreed, yes, this is it, let's go home, and after goodbyes, they did." }, { "figure_ref": [], "heading": "John Tim", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_12", "fig_15", "fig_15", "fig_1" ], "heading": "Ellen data flow", "publication_ref": [ "b169", "b170", "b174", "b169", "b175", "b170", "b176", "b177", "b178", "b179", "b11", "b11", "b8", "b8", "b110", "b18", "b180", "b181", "b182", "b183", "b184", "b185", "b1", "b1", "b18", "b26", "b98", "b26", "b98", "b26", "b93", "b104", "b26", "b93", "b104", "b26", "b93", "b104", "b9" ], "table_ref": [], "text": "Operator One Operator Two Operator Three\nIf we again look at Figure 40 we can clearly observe that John's solution to the problem is the right way to solve the conundrum. As there is only one message that will be missed in the data flow at the beginning of their communication, any other solution would lose substantially more, and the communication channel would suffer to a greater extent. So this is the optimal solution to their problem, with the least amount of upset to the network. This same situation can be observed in many areas of science and the world we live in, which is strongly interconnected.\nWhile dealing with complex systems it is clearly not an option to look at the network and determine manually what would be the best solution, nor would it be a good option to try to aimlessly guess what to do. Fortunately, this problem can be solved algorithmically [171] and then executed on a computing machine -the process can be visually observed in Figure 41. 
In order for the Monte Carlo algorithm for MFAS to work, the input for the algorithm needs to be a multi-graph, and it will soon be clear why that is. On that multigraph the algorithm is breaking arcs by choosing them as per the uniform distribution, which means that the probability of breaking a set of arcs between any pair of nodes, A′_ij ⊆ A′ ⊂ A, i ≠ j, is inversely proportional to the probability that an arc from that set will be chosen. If one looks at step 2 of Figure 41, the arc set {α, β} has the highest probability of being chosen; however, we are choosing arcs uniformly, therefore in the next iteration, step 3, an arc from the set {γ, α} will be chosen, upon which we will again choose an arc, a_ij ∈ A′_ij, i ≠ j, and this time it will be from the set {β, γ}. So what would happen if we were not only choosing these arcs but also breaking them, or reversing them? As can be seen from the figure, in step 4, by making a cut (β, γ) we have broken the arc set, in this instance with only one arc, without which our graph has become acyclic, and this is exactly what we wanted. Even though this arc is in a minority when we consider other sets, and this is an idealized case, nevertheless because we are breaking arcs uniformly, every individual arc has the same probability of being broken, and therefore the probability of breaking a cycle is the greatest at places where the number of arcs is minimal, just at the right spot.
When the algorithm arrives at that final cut and has broken all cycles, it would be of high use if one knew how good a solution the Monte Carlo algorithm has output - and fortunately, this is one of the strengths of Monte Carlo algorithms. This truly can be done, and it is done by calculating probabilities. In this particular instance, dealing with the Monte Carlo algorithm for Minimum Feedback Arc Set, the probability of the algorithm producing a suboptimal solution is P_f^t < 2^(-t) [172], where t represents the number of times the algorithm has been run. Therefore, after some t runs, the probability that at least one of the solutions output by the algorithm is an optimal one, classified as success, is P_s^t ≥ 1 - 2^(-t). For a visual representation of how the probability of success quickly grows as the algorithm is repeatedly run one can consult Figure 42.
In a real-world situation, convergence towards an optimal solution might be a bit more difficult, if for no other reason than because of the quality of the random number generator and the dynamic nature of finding a solution; however, an optimal or close to optimal solution will be found, it's a matter of probability. This is pure Monte Carlo, and for pseudocode one can look into [176], with an easier-to-grasp version being in [171].
Yet the algorithm can be modified, and even improved, in its convergence towards an optimal solution. By borrowing the idea from Ant Colony Optimization [177], and implementing a learning mechanism, it is possible to navigate in a more efficient manner towards a solution, and by applying the probability calculation vertically (on the first run in every series of runs), instead of horizontally (throughout the series of runs), it is possible to retain the confidence-in-a-solution measure. [172] Which brings us to all the marvels of AI - a different approach, but in a sense the same, an extension of the same idea: how to make algorithms navigate themselves.
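To make the randomized paradigm tangible, a minimal Python sketch of the pure Monte Carlo idea is given below. It is not the exact algorithm analyzed in [171, 172, 176]: it simply removes uniformly chosen arcs from a directed multigraph until it becomes acyclic, keeps the smallest feedback arc set found over t independent runs, and prints the 1 - 2^(-t) expression quoted above purely as an illustration of how confidence accumulates over repeated runs. The node names and arc multiplicities reproduce the John-Ellen-Tim example; all function names and parameters are a hypothetical sketch, not the authors' implementation.

```python
import random

def has_cycle(nodes, arcs):
    """Cycle test for a directed multigraph via Kahn's topological sort."""
    indeg = {v: 0 for v in nodes}
    out = {v: [] for v in nodes}
    for u, v in arcs:
        out[u].append(v)
        indeg[v] += 1
    queue = [v for v in nodes if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for v in out[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen < len(nodes)

def one_run(nodes, arcs):
    """One pass: remove uniformly chosen arcs until the graph is acyclic."""
    remaining = list(range(len(arcs)))
    removed = []
    while has_cycle(nodes, [arcs[i] for i in remaining]):
        i = random.choice(remaining)   # uniform choice over the remaining arcs
        remaining.remove(i)
        removed.append(arcs[i])
    return removed

def monte_carlo_mfas(nodes, arcs, t=20):
    """Keep the smallest feedback arc set found over t independent runs."""
    best = None
    for _ in range(t):
        candidate = one_run(nodes, arcs)
        if best is None or len(candidate) < len(best):
            best = candidate
    confidence = 1 - 2 ** (-t)   # the bound quoted above, shown for illustration
    return best, confidence

# The John-Ellen-Tim example: one directed cycle, parallel arcs as messages.
nodes = ["John", "Ellen", "Tim"]
arcs = [("John", "Ellen")] * 3 + [("Ellen", "Tim")] + [("Tim", "John")] * 2
best, confidence = monte_carlo_mfas(nodes, arcs)
print(f"removed {len(best)} arc(s): {best}, confidence >= {confidence:.6f}")
```

Run repeatedly, the sketch will usually end up removing exactly one arc, the single Ellen-to-Tim message, which matches the cut John proposes in the story: the groups with more parallel arcs are harder to destroy completely under uniform removal, so the thinnest part of the cycle tends to be the one that gets cut.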
Fig. 42 Probability of success growth, and inverse proportionality thereof - Monte Carlo algorithm probability parabola. The cyan colored bar graph shows when the algorithm produced the optimal solution; as the probability on an individual run is 1/2, the expected number of optimal solutions cumulatively needs to be around that value, as is the case here - in this way it is possible to verify whether the algorithm functions as designed, for inputs with known solutions. If the bar is raised, an optimum is found, no optimum otherwise.
AI takes this to a new level, with algorithm structure resembling nature, and with far more dynamic computation, various kernels, and ever-changing paths towards the desired goal - an approximation algorithm in its essence. [178][179][180][181] As the conducted bibliometric analysis shows, AI is entering in, Finance and Entrepreneurship are being changed, and from the looks of it, this trend will continue to an even greater degree. The question of how, and where, to implement these technologies is not an easy one, especially as society is not just numbers, it's a very complex environment, and if experts do not find a way of aligning with the essence of society and life, the merging of AI and other areas will not work, as AI needs to be controlled, in the service of, and not being in control. On top of that, this technology is not simple, and it takes substantial effort to achieve a system that satisfies algorithmic, data, interface, user, societal, and legal requirements. In order to put a brick in that wall, and repair the breach exposed in [12], this short review is given, and with a valiant effort, via the collaboration of experts from disparate fields, the results should be achieved.
The remaining question is in which direction future research endeavors could head. Aside from the research perspective, there is also a question of application, entrepreneurship, and investing. The issue of application and entrepreneurship is clear, at least in regard to dominant branches and methods, with already possible avenues for those less prominent areas determined by other relevant methods and branches of economics.
Answers to the questions investors would ask are somewhat more elusive, primarily in regard to whether one is seeking a long-term or short-term investment, and then in terms of either investing in a branch where the computing landscape is more known or in a branch that leans more to uncharted territory - with the primary focus of the investment possibly being technological, or perhaps some sort of a diversification. When the problems presented in [12] are brought to the fore as well, there seem to be three medium-to-high-gain paths. The first is focusing on a correct implementation in an economic context with a compelling product, while the second is heading beyond the current state to new technologies or new branches and the first bricks that will build the future. The third option leverages the old and gradual innovation, either by combining innovation with the most prominent branches, or less innovation with branches not so heavily saturated with computing.
The first and the third paths are likely to be of the most appeal to investors, if for nothing else then for the time span, which should here be shorter, and for the project price that comes with that fact.

Following the research results, and in accordance with the objectives of the research, in the continuation of the article we provide the following implications for future field development:

• Stronger development of entrepreneurship research in the FinTech sector and further expansion of AI in the segment of alternative sources of financing for entrepreneurs (crowdfunding, peer-to-peer lending, robo-advisors). One example of interesting future research is examining how robo-advisors can help (1) business angels in making investment decisions regarding the financing of entrepreneurial ventures, and (2) portfolio entrepreneurs in making decisions about business expansion and new investment opportunities [9]. Our research shows that there is a smaller number of works in this topic niche and that the field started developing only recently.

• Progress of research on AI as a support for preventing and detecting financial fraud. In the context of entrepreneurship, frauds are becoming more and more common when making financial payments in business-to-business operations (for example, attackers supplying false information about the bank account number of a business partner). Likewise, by integrating AI, blockchain and smart contracts, it is perhaps possible to overcome the shortcomings of auditing and financial reporting and to act positively in the direction of preventing financial frauds related to auditing, while keeping in mind privacy, security and societal concerns. [9,111]

• The emergence of the field of blockchain in entrepreneurship. It is a new area of research, since the application of AI methods in this area has yet to be explored (for example, the application of blockchain in business scaling and automation to improve the performance of an entrepreneurial venture). In the future, blockchain can become one of the key resources of entrepreneurs because it can contribute to the rationalization of business (through savings in production) but also to financial accounting, compliance requirements, and auditing [19]. On the other hand, this technology has had its own share of frauds, privacy concerns, security issues, etc. that need to be dealt with, and the blockchain market is also quite volatile. [182][183][184][185][186][187]

• The more common application of ensemble methods in finance and entrepreneurship. Despite the advantages of deep learning, classic machine learning approaches (such as decision trees, Random Forest, SVM, k-NN, and Bayesian models) are still widely used. The use of the ensemble approach in finance and entrepreneurship is still in its infancy, although according to the first findings, such models show good performance [2].

• Stronger development of predictive analytics in business planning for entrepreneurs (for example, firm-level price forecasting). There are few works in this area, since most of the existing papers are related to macroeconomic and microeconomic forecasting (for example, forecasting the price of oil or electricity, and forecasting stock prices on the capital market). Knowledge related to the performance of individual AI methods from the macroeconomic domain should be transferred to the area of business planning.
According to Nazareth and Reddy (2023), LSTM models show outstanding performance in predicting financial time series on stock markets [2]; the possibilities of implementing these models in the field of business planning of entrepreneurs have yet to be explored.

• Expansion of the application of AI to improve communication strategies and impression management in obtaining financial resources for an entrepreneurial venture. Various NLP techniques play a key role in this area, and neuroscience in combination with AI also has potential. Research examples refer to the development of interpretive models that explain the reactions of the human brain during exposure to a communication or presentation directed by an entrepreneur towards a potential investor, and the examination of how these reactions affect the final financing decision. Investors can use this knowledge to rationalize funding decisions, and entrepreneurs to adjust their behavior and ensure success in financing [19]. Our research shows fewer works in this topic niche and a growing interest in the area.

There are a number of issues that puzzle the mind, which are both potential future hurdles and topics for future research. When we were gathering documents for the research, it was evident that there was a lack of cooperation between computing experts and economics experts, which in turn resulted in research of lesser quality, as economics authors lack the computing expertise that is not so easily acquired. [27,99] Such multidisciplinary author cooperation would use state-of-the-art artificial intelligence technologies and methods to test and build new entrepreneurial theories in a rigorous, relevant, and impactful way; an effort must therefore be made to achieve cooperation between computing and economics researchers, or the research will continue to suffer. [27,99] An evident lack of a necessary policy framework for the application of AI is also an issue. The question of how to apply these technologies in a morally acceptable way when dealing with entrepreneurship and finance is not completely solved, nor is it entirely clear yet how that should look; this is a logical consequence of revolutionary technology, and a path for further discussion and research. [27,94,105] Aside from the necessary laws, there is also a need for education, as people need to become acquainted with the new state of affairs and then adapt to it; a transition period is a must. [27,94,105] One of the areas where this sensitive issue is particularly clear, both in research and practice, is the use of AI for facial expression identification and emotion detection in entrepreneurial finance, as possible misconduct with such a technology, and manipulation, could be extraordinarily negative. [27,94,105] It is a necessity to take a constructive and critical stance and to take into consideration the constraints, risks, and implications of AI usage in entrepreneurial finance: AI has biases, makes mistakes, and so on, and researchers and practitioners need to be aware of that, so the issues can be tackled with transparency. Otherwise we are doomed to blindly trust the algorithms, which may eventually lead us the wrong way; an algorithm's result needs to be checked, and the final decision must rest with a human being, because if we lose control, we will be controlled. [10]

This research was conducted in an encompassing, methodological, rigorous, and in-depth way. There are, however, a number of constraints; some are a result of the reach Bibliometrics can take, while others are technological or generally scientific.
These need to be taken into consideration when one thinks about the research results.

1. Bibliometrix, it seems, has a number of bugs. When one uses a BibTeX file, some science categories are not recognized, while author counting does not work properly or precisely.
2. The database of the research is Web of Science. Other databases, such as Scopus, are not included in the analysis (we were initially planning to include Scopus as well; there were, however, a number of issues that, for reasons of time and the complexity of the entire research, we were not able to resolve, although Elsevier was very quick to respond and quite cooperative. If Scopus were also included, the research would be even larger, so for the time being including WoS only is a proper step in the iterative nature of science).
3. Documents included in the Bibliometrics analysis are in the English language and are journal articles; conferences and books are not included, as the focus was only on the most encompassing, impactful and relevant documents, which, generally speaking, are journal articles. Books are not the primary driving force of research and peer review and are not typical avenues for Bibliometrics research.
4. Data for the last two to three years is not yet complete, with a number of documents also missing the final date of publication; this is therefore restrictive for the interpretation and projection of future events.
5. As the database included in the research is WoS, it is difficult to generalize; this constraint nevertheless results in gathering the most impactful documents, which in turn suggests that results from Scopus should generally align.

Aside from various suggestions, interpretations, and methodological improvements made to Bibliometrics, there is also a specific improvement that should be specially mentioned, namely the amortized h-index. With this measure, we hope that the age shortcoming of the h-index can be inhibited, and thus a more realistic result obtained about the item being measured. With the amortized h-index we can therefore ascertain how an item fares as years, or decades, pass, as compared to other items that span a different period, and with that get information on the successes or failures of the young, and the successes or failures of the old; for the details of the amortized h-index please consult Appendix A.

Entrepreneurial finance and the Fin-tech sector have seen, and are yet to see, substantial advances in terms of AI, especially through the application of various, even disparate, approaches to a field accustomed to doing things the old way. The change, it seems, is inevitable; it is up to us (society at large) to determine what the result of that change will be, with one part of that process being research and discussion, the part in which Academia is proficient and with which this transformation should be backed.

[Notes to Table A1:] The data is sorted in ascending order according to year, which is not a prerequisite for amortized h-index calculation. 1 Year of an item, e.g. journal, author, etc.; years can have any time span between them and they can be the same as well, e.g. 2010, 2014, 2015, 2015,..., 2023.
The current year does not need to be an ongoing real-world year, it is up to the expert to choose the appropriate year for the instance analyzed, although an ongoing real-world year would be the typical case.\n2 Calculated as the difference of the last and current year, with the addition of one more year, so as to include the time passed in an incomplete year and make a conservative estimate.\n3 Calculated as a quotient of the highest citable years and current citable years, e.g. for 1993 calculation would be 10 8 = 1.25. 4 As per maximum value of pondering scalar set -thus making the last observed year a fixed point for comparative analysis. 5 Calculated as a product of normalized pondering scalar and h-index, e.g. for the year 1997 calculation would be 0.25 × 21 = 5.25. position. It is also possible to give preference to items that are more recent, as per the idea behind amortized h-index, and rank items with that in mind, while other items do not change position. One can also additionally calculate an average between the amortized measure and the one not amortized, and try to resolve the issue that way.\nIf an extreme year is a part of the data then it might be preferable to conduct analysis with and without an extreme data point (an instance where the current year would not be fixed, e.g. one might want to ascertain the historical state), so as to ascertain the impact of the extreme year, and produce an objective and relevant result in terms of items relation. There are a number of constraints that also need to be taken into consideration when one performs and interprets results with amortized h-index, these are as follows. 1. History of science can be substantially different as compared to today, in terms of the number of authors, journals, citation practices, institutional practices, working environment, etc., all these and other factors can influence the metric observed. 2. A high influence is given to recent items, for which time will show far less in terms of relevance. 3. A low influence is given to old items, for which time is not a friend in an environment of substantial positive quantitative change. " }, { "figure_ref": [], "heading": "", "publication_ref": [ "b15", "b17", "b18", "b22", "b106", "b121", "b132", "b133", "b136" ], "table_ref": [], "text": "Acknowledgments. For the research the following was also used: Latex 15 , TexLive 16 , MikTeX 17 , Draw.io 18 , GIMP 19 , Texstudio 20 , LibreOffice 21 , JabRef 22 , Rstudio 23 , Bibliometrix [107,123], VOSviewer [133,134], Publish or Perish [137], Linux Mint 24 ." }, { "figure_ref": [], "heading": "Discussion and Research Implications", "publication_ref": [ "b186" ], "table_ref": [], "text": "As a conclusion of the paper, here we will discuss the most important results, and link those results to articulated implications for parties of interest. Additionally to the aforementioned, research constraints will also be revealed and commented upon, while the section will be concluded with recommendations for improving Bibliometrics, and directions for future research.\nWith this bibliometric analysis we have given insight into the literature on entrepreneurship, finance and artificial intelligence, and this has been done through many facets. As to the question of what is of high quality and what is not, a question as difficult as that can not be answered by bibliometric analysis. 
Such a question can only be answered by reading through, and inferring how much significance the text has for the subject of life and for the subject of science. If it bears no significance for these, it is of no use. Yet, if insight into the field of AI-entrepreneurship-finance is the question, the job has been done, the information is here, and the conclusions are as follows, itemized as per the research objectives and cross-referenced with the relevant parts of the paper which deal in part or entirely with the specific research objective, and also presented throughout this section of the paper.

1. To determine the publication productivity and evolution of scientific knowledge on the intersection of AI-entrepreneurship-finance, achieved in Subsection 4.
4. To determine the AI methods (method, algorithm, technique) used in the study of certain topics at the intersection of AI-entrepreneurship-finance, so as to determine the current state and project future possibilities, achieved in Subsection 4.8.
5. To reach a more profound insight into the research field, and to reflect on the emerging research directions and the promising AI methods for future applications in entrepreneurial finance, with implications for the scientific community, computer experts, entrepreneurs, and investors in entrepreneurship, achieved in Subsection 4.6, Subsection 4.8, Subsection 4.9, Subsection 4.10, and Section 5.
6. To give recommendations for future improvement of bibliometric methodology, achieved in Appendix A as well as in the application of the suggested amortized h-index in the research, as seen in the presented analyses of the paper.

Considering the methods and branches in Table 14, the data there, perhaps more than any other part of the paper, speaks about the current state of research, gaps, and potential future directions. One can clearly see the most prominent branches and the entirety of computing methods that were used throughout these sub-divisions of economics. It is thus more obvious where the current successes are and where future research endeavors should or could head. During various Bibliometrics analyses, one often deals with counting documents, or with citations, and it would be useful if the tools had an option for filtering surveys, reviews, Bibliometrics, and meta-analysis articles; this option would allow for a fine-grained look into contributions, and contributions about contributions, making interpretations easier and likely more precise. Such an option would be useful in other analyses as well, e.g. conceptual clustering, and would improve the entire process in a substantial way.

The rise of AI, coupled with big data, has given birth to the Fin-tech sector, comprising digital technology innovations with new business models for finance backed by that technology. [188] The following list therefore presents possible research projects to take up and make a contribution to the field, preferably through the collaboration of computing and economics experts and their complementary knowledge. Entrepreneurs are tasked with multiple and various jobs that require hours upon hours to complete. Perhaps one of the greatest benefits of AI is its ability to allow small business owners to greatly reduce the time needed to complete tasks, especially those that often feel burdensome. Since the days of the industrial revolution, people have been concerned that machines would take their jobs and humans would become redundant.
However, many jobs machines perform relieve humans of mundane tasks so they can focus their efforts on situations that require human intelligence and caring. For example, humans can operate help lines, but many workers would argue that such mind-numbing jobs do not take advantage of human skills and intelligence. AI systems can interact with customers at base levels so that humans can focus on those who truly need more personalized and interactive assistance." }, { "figure_ref": [], "heading": "Declarations", "publication_ref": [], "table_ref": [], "text": "The authors declare no conflict of interest." }, { "figure_ref": [], "heading": "Appendix A Amortized h-index", "publication_ref": [], "table_ref": [], "text": "Here we will describe how one can calculate amortized h-index, or in general terms a measure for calculating influence through time if the relationship between variables is exponentially inversely proportional (having in mind the neighboring sequence of years stretching to the current year, e.g. 2010, 2011, 2012,..., 2023) -presented in Table A1.\nAs the year 2000 is a fixed point (the current year for example calculation) the h-index is unchanged, that is the value for the amortized h-index is the same, while all the other values are amortized accordingly. The amortized h-index for the year 1991 is 5.1, as opposed to 51 originally -year by year, the original value is very close to the fixed point of 5, which is therefore taken into account with amortized value. The year 1999 is half a point higher from the fixed point, while the year by year also being very close -however this instance is of a far younger nature than 1991, and is therefore more influential; with similar reasoning for the year 1998 being valid, where amortized value is somewhat lower, a consequence of lower original value and passing of more time, as per fixed point.\nThere is a possibility that a number of or all of amortized h-indexes turn equal, as a consequence of original values, time spans and pondering distribution presenting such a case. If there is a strong need for prioritization, such a turn of events can be dealt with. Every h-index needs to be decreased by one point until all amortized h-indexes are unique -this kind of estimate then presents a potential recent state with which one can then rank items. This kind of move can also be performed just on equivalent items, with the ranking then valid only between those, other items do not change As far as pondering distribution is concerned, it is possible that there are instances where some other distribution would be more appropriate, and should therefore replace the distribution suggested in this paper. Thus with an amortized h-index we can more precisely ascertain the relation between various items where the observed metric depends on time -amortization can also be generalized with time replaced by another characteristic." }, { "figure_ref": [], "heading": "Appendix B Thematic Evolution of Concepts Most Highly Cited Documents", "publication_ref": [], "table_ref": [], "text": "As an upgrade to the thematic evolution of concepts, we have additionally performed an analysis of the most highly cited documents per period of interest from Figures 3, 31, and 32, including analysis for peak years. Thus the reader can get an insight into the most relevant documents together with meta-data, which is useful in itself. Performed analyses can be seen in Tables B2, B3, B4, B5, B6, B7, B8. 
Aside from relevance, information about the influence on the specific cluster, that is the theme, is presented as well. In this way it is possible to, in a concrete way, link the evolution of themes and those who have built that significance via multiple perspectives enabled by document metadata." }, { "figure_ref": [], "heading": "Appendix C Artificial Intelligence Method Occurrence Heat Map", "publication_ref": [], "table_ref": [], "text": "As an addition to the analysis in Table 14 heat map of the data was made, presented in Figure C1. With this helper figure one can more easily ascertain the relevance of every branch, as per document corpus, and also the relevance of every method, as per use in a specific branch -with totals also being revealing. 14. The color of the map is tied to the numeric value and represents the intensity of occurrence. Bright yellow denotes no value, while if the color is shifted to yellow, occurrence is low, and if the color is shifted to red, occurrence is high.\n107 " }, { "figure_ref": [], "heading": "Appendix D Historiograph Topic Documents", "publication_ref": [], "table_ref": [], "text": "An addition to the analysis in Figure 36, here we present historiograph documents as well as their metadata, in Tables D9 andD10. With these, the reader can in more detail ascertain both the documents and the defined topic of the entire research corpus.\nAs the historiograph includes 30 items the presented data is divided into two parts and sorted as per local citation count." } ]
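As a small companion to the amortized h-index described in Appendix A and exemplified in Table A1, the following sketch reproduces the calculation implied by the table notes: citable years counted conservatively, the pondering scalar as the quotient of the highest and the item's citable years, normalization against the maximum scalar, and the product with the original h-index. The function and variable names are illustrative and not part of the paper.

```python
def amortized_h_index(records, current_year):
    """records: list of (year, h_index) pairs with distinct years.

    Returns (year, amortized h-index) pairs following the Table A1 notes:
    citable years = current_year - year + 1 (conservative count),
    pondering scalar = max citable years / item's citable years,
    normalized scalar = scalar / max scalar,
    amortized h-index = normalized scalar * original h-index.
    """
    citable = {year: current_year - year + 1 for year, _ in records}
    max_citable = max(citable.values())
    scalars = {year: max_citable / years for year, years in citable.items()}
    max_scalar = max(scalars.values())
    return [(year, round(scalars[year] / max_scalar * h, 4)) for year, h in records]

# Reproducing a few rows of Table A1 (current year fixed at 2000):
rows = [(1991, 51), (1993, 42), (1997, 21), (1999, 11), (2000, 5)]
for year, value in amortized_h_index(rows, current_year=2000):
    print(year, value)
# Expected, as in the table: 1991 -> 5.1, 1993 -> 5.25, 1997 -> 5.25,
# 1999 -> 5.5, 2000 -> 5.0
```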
While the application of Artificial Intelligence in Finance has a long tradition, its potential in Entrepreneurship has been intensively explored only recently. In this context, Entrepreneurial Finance is a particularly fertile ground for future Artificial Intelligence proliferation. To support the latter, the study provides a bibliometric review of Artificial Intelligence applications in (1) the entrepreneurial finance literature, and (2) the corporate finance literature with implications for Entrepreneurship. Rigorous search and screening procedures applied to the scientific database Web of Science Core Collection resulted in the identification of 1890 relevant journal articles subjected to analysis. The bibliometric analysis gives a rich insight into the knowledge field's conceptual, intellectual, and social structure, indicating nascent and underdeveloped research directions. As far as we were able to identify, this is the first study to map and bibliometrically analyze the academic field concerning the relationship between Artificial Intelligence, Entrepreneurship, and Finance, and the first review that deals with Artificial Intelligence methods in Entrepreneurship. According to the results, Artificial Neural Network, Deep Neural Network and Support Vector Machine are highly represented in almost all identified topic niches. At the same time, the application of Topic Modeling, Fuzzy Neural Network and Growing Hierarchical Self-organizing Map is quite rare. As an element of the research, and before the final remarks, the article also discusses certain gaps in the relationship between Computer Science and Economics. These gaps represent problems in the application of Artificial Intelligence in Economic Science. As a way to remedy this situation at least in part, the foundational paradigm and a bespoke demonstration of the Monte Carlo randomized algorithm are presented.
Artificial Intelligence in the Service of Entrepreneurial Finance: Knowledge Structure and the Foundational Algorithmic Paradigm
[ { "figure_caption": "Fig. 33Fig.3Average Citation Per Elapsed Years (mean citation per article for a particular year divided by the number of citable years, e.g. for 2021 average citation per article is 8.5 and there are 3 citable years, therefore one has 8.5 3 = 2.83). Average Citations started very slowly, then there was an explosion that was an indicator of future events. The last three years are irrelevant for interpretation, as citations need cca. at least two years in order to accumulate to substantial amount, which is general knowledge in the scientific community, and especially known among journal editors, with some fields needing more time than others. Until 2007 the situation was somewhat erratic, afterward, it seemed as if a calm came, and the field had matured. However, this is deceptive, as the influence of average citation clearly reveals (as citations close to publication date are exceedingly more difficult to accumulate than those that will arrive later on -calculation was performed as per amortization described in Appendix A, with the result scaled by a factor of 7 so as to improve readability and try to reveal points of interest).", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 44Fig. 4 Sankey Diagram for Countries, Keywords and Affiliations.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 55Fig. 5 Sankey Diagram for Sources, Keywords and Source Citations.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 Fig. 767Fig. 6 Sankey Diagram for Keywords Plus, Keywords and References.", "figure_data": "", "figure_id": "fig_3", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Fig. 2323Fig.23 Most Local (within our own data) Cited Documents. In order to make the figure more compact, the publication source was transferred to the figure legend below -with the link presented as well, so as to make the transition from the analysis to the publication as easy as possible. Entries are sorted according to the total citation sum. The average yearly citation was calculated in a conservative manner by adding one additional year to the time span of the calculation, e.g. for Kumar PR, 2007 equation is", "figure_data": "", "figure_id": "fig_5", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Fig. 2424Fig.24 Most Local (within our own data) Cited References. In order to make the figure more compact, the reference source was transferred to the figure legend below -with the link presented as well, so as to make the transition from the analysis to the reference as easy as possible. Entries are sorted according to total citation sum, with the analysis being performed on 62517 reference corpus. The average yearly citation was calculated in a conservative manner by adding one additional year to the time span of the calculation, e.g. for Altman EI, 1968 equation is", "figure_data": "", "figure_id": "fig_6", "figure_label": "24", "figure_type": "figure" }, { "figure_caption": "Fig. 2626Fig.26Terms Cumulative Frequency Over Time -calculated over the authors' own keywords.", "figure_data": "", "figure_id": "fig_7", "figure_label": "26", "figure_type": "figure" }, { "figure_caption": "Fig. 2727Fig.27Trending Terms over Time. 
The list was generated using authors' own keywords for the entire data document lifespan, with the resulting data starting in 1999 through the term, expert systems. In order for a word to appear in the figure, occurrence needs to equal at least 5, so as to collect terms most relevant, with the maximum number of words per year being 4, as a high number of words results in nonrelevant terms or terms already appearing in the analysis -furthermore, a high number of terms makes the analysis overly crowded with terms, and difficult to analyze and perceive what are the results indicating. The blue blob represents the frequency of occurrence for a particular term, while the vertical light blue line represents the period of occurrence (start, focus point, end; Q1, median, Q3, respectively).", "figure_data": "", "figure_id": "fig_8", "figure_label": "27", "figure_type": "figure" }, { "figure_caption": "Fig. 3131Fig.31Thematic Evolution of Concepts -from 1991 until 2014, with the last time slice, 2015 until 2023, representing the future to come. Time periods were selected for the reason of being one of the periods of interest from analysis in Figure3. As we are interested in concepts, authors' abstracts were used for the analysis, since by choosing bi-grams we were able to achieve improved descriptiveness of the clusters, compared to keywords, with the number of words being 1000 and minimum cluster frequency set at 20, so as to have high confidence in the results and pick those terms that are prevalent. The minimum weight index was set to the lowest value of 0.02, wanting to explore indepth, while the clustering algorithm used was Walktrap, a random walk-based algorithm known for its ability to capture with quality community structure of a network (based on the idea that short random walks belong to the same community).[156,157] Colored rectangles represent clusters that are observed during specified periods, period is noted above the clusters, with terms representing an overarching theme of the cluster -size of the cluster represents significance during the period, the bigger the cluster, the more importance for the period the cluster has. Clusters are typically linked to other clusters, either in the vicinity, that is right next to, or to the ones farther apart, jumping the periods, depending on which clusters the influence was exhorted. The thickness of the edge represents how strong the influence is, with thicker the edge, the stronger the influence. For most highly cited documents per period with data on the specific peaks one should consult Appendix B. 
A word of caution, analysis in for example Figure28has been conducted with authors' own keywords, while thematic evolution of concepts was performed with abstracts, therefore one is not to compare terms in such a situation, but themes and ideas.", "figure_data": "", "figure_id": "fig_9", "figure_label": "31", "figure_type": "figure" }, { "figure_caption": "Neural Network; DNN -Deep Neural Network; BPNN -Back Propagation Neural Network; DT -Decision Tree; EL -Ensemble Learning; GA -Genetic Algorithm; RNN -Recurrent Neural Network; RF -Random Forest; GB -Gradient Boost; TM -Topic Modeling; SVM -Support Vector Machine; CNN -Convolutional Neural Network; SOM -Self-organizing Map; CBR -Case-based Reasoning; PSO -Particle Swarm Optimization; GP -Genetic Programming; LSTM -Long Short-term Memory; FNN -Feed-forward Neural Network; FZNN -Fuzzy Neural Network; GHSOM -Growing Hierarchical Self-organizing Map 1 Ordinal number identifying the particular branch.", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 3838Fig.38Collaboration Network by Country -a subset of countries. With this analysis, one determines clusters and strength of collaboration -relatedness of items is calculated by the number of co-authored documents. The network was created for 50 nodes, so as to firstly determine the network in a situation where the choice of belonging to a cluster is more binary, at the same time removing isolated nodes and with the minimum number of edges being 1 -this will ensure not to focus on those that are of no interest on the one side, and to consider all those that are collaborating. The clustering algorithm used was Walktrap, a random walk based algorithm known for its ability to capture with quality community structure of a network (based on the idea that short random walks belong to the same community).[156,157] Similarly to a collaboration network for institutions, a colored blob represents a country with a label denoting the name. The color of the blob defines cluster, while the size of the blob informs about country document output, the bigger the blob, the more documents a country has produced. Countries are collaborating, and this has been represented by graph edges, the thicker the edge is, the more co-authorships there are, and the stronger the link is.", "figure_data": "", "figure_id": "fig_11", "figure_label": "38", "figure_type": "figure" }, { "figure_caption": "Fig. 4040Fig. 40 Illustrative Example of the Minimum Feedback Arc Set problem -communication between three parties. Each person prepares his information based on the information received. The question however is, how to start communication by causing a minimal amount of upset in the data flow.", "figure_data": "", "figure_id": "fig_12", "figure_label": "40", "figure_type": "figure" }, { "figure_caption": "Fig. 4141Fig.41 Monte Carlo for the Minimum Feedback Arc Set -one algorithm run for illustrative example in Figure40. The input for the algorithm is the multi-graph. 
The algorithm chooses and breaks arcs in a uniform fashion until arcs have been broken and the graph is made acyclic.", "figure_data": "", "figure_id": "fig_15", "figure_label": "41", "figure_type": "figure" }, { "figure_caption": "107/ 2121.40 MOLL J, 2019, BRIT ACCOUNT REV THE ROLE OF INTERNET-RELATED TECHNOLOGIES IN SHAP-ING THE WORK OF ACCOUNTANTS: NEW DIRECTIONS FOR ACCOUNTING RESEARCH Sorted according to total citation in descending order, rounded in two decimal places Of the clusters for the accompanying period, SALINAS D, 2020, INT J FORECAST highest contribution was to demand forecasting, short-term memory and data analytics clusters; WONG LW, 2020, INT J INF MANAGE highest contribution was to supply chain and enterprises smes; DUBEY R, 2020, INT J PROD ECON highest contribution was to artificial intelligence and data analytics clusters; RAUT RD, 2019, J CLEAN PROD very highly contributed to data analytics cluster; ZHU Y, 2019, INT J PROD ECON highest contribution was to machine learning cluster; MOLL J, 2019, BRIT ACCOUNT REV highest contribution was to future research and artificial intelligence clusters 1 Average citation per year 2 Abbreviated document metadata and title 3 Link of a Digital Object Identifier 4 Even tough the whole period consists of two consecutive years, both of them are high peak years, and are therefore of interest for separate analysis 5 SALINAS D, 2020, INT J FORECAST, WONG LW, 2020, INT J INF MANAGE and DUBEY R, 2020, INT J PROD ECON are the first three for 2020, respectively113", "figure_data": "", "figure_id": "fig_16", "figure_label": "21", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Comparison of the present study against related bibliometric work -Part I 1 Continuation of the analysis presented in the table can be seen in", "figure_data": "Intersection AI-FinanceTopic ScopeMethodsDatabaseSubject AreaSample 2 Review 3do Prado et al. (2016) [97]CreditMultivariateWoSCCNo limit393Credit Risk andRisk andData AnalysisBankruptcyBankruptcyTechniquesPrediction multi-Predictionvariate techniqueslist and timelineShi and Li (2019) [99]CorporateIntelligentWoSCCNo limit413Only BankruptcyBankruptcytechniquesPrediction intelli-Predictiongent methods listGoodell et al. (2021) [9]FinanceArtificial Intel-ScopusOnly Social283Only AI methodsligenceSciences, nolist, in FinanceComputer(mentioning someSciencelimitations)Ahmed et al. (2022) [102]FinanceArtificial Intel-ScopusOnly Eco-348Noneligencenomics andFinance, noComputer Sci-enceChaklader et al. 
(2023) [104]FinTech com-Artificial Intel-ScopusNo limit302NonepaniesligenceNazareth and Reddy (2023) [2]FinanceMachineScienceDirect No limit126Machine LearningLearningin FinanceChen et al. (2023) [105]FinanceExplainableWoSCCOnly Busi-2733NoneArtificial Intel-ness, BusinessligenceFinance andEconomics, noComputer Sci-enceData has been grouped by publication year, in ascending order1", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "No. of documents", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison of the present study against related bibliometric work -Part II 1 First part of the analysis presented in the table can be seen in Table1", "figure_data": "Intersection AI-EntrepreneurshipTopic ScopeMethodsDatabaseSubject AreaSample 2 Review 3Li et al. (2022) [14]EntrepreneurialArtificial Intel-WoSCCNo limit123NonemanagementligenceBlanco-González-Tejero et al. (2023) [93]EntrepreneurshipArtificial Intel-Dimensions.ai No data520NoneligenceGupta et al. (2023) [94]SustainableArtificial Intel-ScopusNo limit482NoneentrepreneurshipligenceIntersection AI-Entrepreneurship-FinanceThe Present StudyEntrepreneurialArtificial Intel-WoSCCNo limit1890Artificialfinance and corpo-ligenceIntelligence inrate finance withentrepreneurialimplications forand corporateentrepreneurshipfinanceData has been grouped by publication year, in ascending order1", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Keywords and Search Results in the Phase of Preliminary Search and Initial Screening of the Research Field -Part I 1 Continuation of the analysis presented in the table can be seen in", "figure_data": "Google ScholarNo.Search DateSearch KeywordsDocuments 2Selected 313/16/2023artificial intelligence; finance; entrepreneur1273523/17/2023artificial intelligence; finance; venture911933/22/2023artificial intelligence; finance; business1133643/22/2023artificial intelligence; funding; entrepreneur771253/23/2023artificial intelligence; funding; venture67563/25/2023artificial intelligence; funding; business74373/25/2023machine learning; finance; entrepreneur961983/27/2023machine learning; finance; venture871593/27/2023machine learning; finance; business10141103/28/2023machine learning; funding; entrepreneur7223113/28/2023machine learning; funding; venture619123/29/2023machine learning; funding; business8761053223Sorted according to search date, in ascending order1", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "No. of documents in the initial screening after removing duplicates 3 No. 
of selected relevant documents Flowchart detailing the bibliometric methodology steps and procedures in the present study (WoSCC denotes Web of Science Core Collection; SCI-EXPANDED denotes Science Citation Index Expanded; SSCI denotes Social Sciences Citation Index; A&HCI denotes Arts and Humanities Citation Index; ESCI denotes Emerging Sources Citation Index)", "figure_data": "Phase 1 Defining the Objective and Scope of the Study1 2Defining the general objective of the study Defining the scope of the studyTo explore and review the conceptual, intellectual and social structure of scientific knowledge on the intersections of AI-entrepreneurship-finance3Defining keywords for the preliminary searchartificial intelligence OR machine learning entrepreneur OR venture OR business AND finance OR funding AND4Conducting the preliminary search of Google Scholar and WoSCC1,472 results of preliminary search after conference papers, theses, reports) removing duplicates (journal articles, books,Phase 2 Preliminary Search and Initial Screening of the Research Field5Initial screening of 1,472 keywords, and abstract) documents (reading the title,Selection of 266 relevant documentsRe-reading abstracts andDetermination of the research field as6keywords of 266 relevant documents and inspecting the fullpropulsive and comprehensive enough for bibliometric analysis; identifying severaltext if necessaryintertwined topics7Defining keywords and search queries for Phase 3Identification of 105 keywords and 11 search queries8Searching WoSCC by using criteria TopicFinding a total of 6,148 documentsApplying search filters in WoSCC:Filter by document type:4,792 documents remaininga) Document type: article, reviewPhase 3 Search, Collecting, and Screening the Data for9article, early access b) Language: EnglishFilter by language: 4,699 documents remainingBibliometric Analysisc) WoS Index: SCI-EXPANDED,Filter by WoS Index:SSCI, A&HCI, ESCI4,644 documents remainingScreening of 4,644 documents10(reading the title, keywords,Selection of 2,694 relevant documentsand abstract)11Removing duplicates and retracted papersThe final collection of 1,890 documents for bibliometric analysisBibliometric Analysis and Phase 4 Running the12Performing the bibliometric analysis in Bibliometrix (R package) and VOSviewerObtaining results of performance analysis, creating visual solutions science mapping, network analysis; identification of clusters;Reporting the Findings13Reporting findings and interpreting resultsIdentification of emerging directions, problems, and implications in the research fieldFig. 
1", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Keywords and Search Results in the Phase of Preliminary Search and Initial Screening of the Research Field -Part II 1 Web of Science Core Collection 2", "figure_data": "No.Search DateSearch KeywordsDocuments 3Selected 413/29/2023artificial intelligence AND finance AND192entrepreneur23/29/2023artificial intelligence AND finance AND20020business33/29/2023machine learning AND finance AND busi-20021ness41943G +W1472266", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Search Queries and Number of Records in Procedures of Searching, Collecting, and Screening the Data for Bibliometric Analysis -Part I 1", "figure_data": "Topic 2Search Query 3", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Search Queries and Number of Records in Procedures of Searching, Collecting, and Screening the Data for Bibliometric Analysis -Part II 1", "figure_data": "Topic 2Search Query 3", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Search Queries and Number of Records in Procedures of Searching, Collecting, and Screening the Data for Bibliometric Analysis -Part III 1", "figure_data": "Filtering ProcedureTopic 2Document Type Language English WoS Index 3Screening 41 (a)3853773691211 (b)5635565522771 (b)1461451431201 (c)10219979867401 (c)4554504452242129124121393 (a)3012952881623 (a)5405245193703 (a)160159158673 (b)6176066024423 (b)4754664611324792469946442694D804B1890", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Data Coverage (quality of population per variable)", "figure_data": "Key 1DescriptionMissing (no.)Missing (%)Sample Quality 2AUAuthor00.00ExcellentCRCited References00.00ExcellentDTDocument Type00.00ExcellentSOJournal00.00ExcellentLALanguage00.00ExcellentNRNo. of Cited References00.00ExcellentTITitle00.00ExcellentTCTotal Citation00.00ExcellentABAbstract10.05Good", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Descriptive Statistics on Authors and Collaborations", "figure_data": "AuthorsCollaborationsNo.Single doc.One author (per doc.)Coauthors (per doc.)International (%)38792252662.9125.08", "figure_id": "tab_10", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Core Sources by Bradford's Law -a well-known law in science that approximates exponentially diminishing returns in a subsequent action for substantially larger corpus (frequently stated as 1 : n : n 2 )[129,130], sometimes called Pareto Distribution[131]. Below we are giving the first zone for a Bradford's Law calculation in regard to the number of articles for sources of those articles.", "figure_data": "No. of Articles (NA) Source", "figure_id": "tab_13", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "The list of countries is sorted according to the total number of documents for each country, that is the sum of single-country publications and multiple-country publications. Most Cited Countries, total citation sum. So as to make the figure compact, the United Kingdom was abbreviated as the UK and the United States of America was abbreviated as USA. Most Cited Countries by Average Article Citation. 
So as to make the figure compact, the United States of America was abbreviated as USA.", "figure_data": "Portugal Turkey9 8 105947.1%Single Country PublicationsBrazil Germany10 9 113247.4%Multiple Country Publications2,000Finland Italy36.4% 1252 8Number of DocumentsNew York University Georgia State University City University of Hong Kong Sogang University Renmin University of China University of Sydney Zhejiang Normal University Chinese Culture University Korea Advanced Institute of Science & Technology Oriental Institute of Technology 45 50 55 60 54 47 47 46 45 43 H-index New York University GSU CUHK SU RUC US ZNU CCU KAIST OIT 40 38 37 35 35 NYU NYU ZNU CUHK RUC GSU US SU KAIST DU IDRBT ZNU Zhejiang Normal University CUHK City University of Hong Kong RUC Renmin University of China GSU Georgia State University US University of Sydney SU Sogang University KAIST Korea Advanced Institute of Science & Technology DU Dongguk University IDRBT Institute for Development and Research in Banking Technology 100 200 300 400 China USA Korea India Spain UK Italy France Iran Australia Turkey Germany Greece Poland CZ Romania Canada 18.6% 24.3% 12.9% 22.7% 22.2% 59.2% 30.6% 33.3% 7.3% 18.2% 15.2% 20.7% 25.0% 21.4% 20.0% 4.0% 31.8% 162 74 58 56 29 34 28 38 27 28 23 21 22 20 24 123 52 11 42 1 Number of Documents Countries 1991 1993 1995 1997 1999 2001 2003 2005 2007 2009 2011 2013 2015 2017 2019 Publication Years United States of America United Kingdom Turkey Spain Korea Italy Iran India France China 0.2 0.4 0.6 0.8 1 Korea Malaysia Germany Norway Qatar Cyprus Turkey France USA Spain 42.7 39 35.4 33.7 33.4 32.1 31.9 31.7 30.6 Countries 20 40 60 80 100 120 140 160 180 Kumar PR, 2007 Affiliations 0 200 400 600 800 1,000 1,200 Min JH, 2005 187 62 500 2021 50.1 160 11 8.42 Number of Citations Fig. 21 32 34 36 38 40 42 44 46 48 50 52 54 538 2023 143 Shin KS, 2005 7.52 •10 4 Number of Citations 131 Zhang GQ, 1999 5.24 1.2 China USA Korea 12584 6791 4262 Wilson RL, 1994 Tsai CF, 2008 124 120 4.13 7.5 Documents 114 Altman EI, 1994 3.8 1,400 1,600 1,800 India Spain UK France 2270 2205 1654 1340 111 Barboza F, 2017 Dimitras AI, 1996 102 15.85 3.64 Countries Total Citation Sum 3.18 Jo HK, 1997 86 Average Yearly CitationAverage Article Citation58 Fig. 
22 62 59", "figure_id": "tab_14", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Artificial Intelligence Method Occurrence by Topic Niches ON 1 ND 2 Topic Niches 3", "figure_data": "ANNDNNBPNNDTELGARNNRFGBTMSVMCNNSOMCBRPSOGPLSTMFNNFZNNGHSOM", "figure_id": "tab_17", "figure_label": "14", "figure_type": "table" }, { "figure_caption": "Most Relevant References Overall in Co-citation Network from Figure33", "figure_data": "Reference 1Cluster 2Branch 3Altman EI, 1968, J FINANC2refOhlson JA, 1980, J ACCOUNTING RES2refBeaver WH, 1966, J ACCOUNTING RES2refKumar PR, 2007, EUR J OPER RES2VP, FPMin JH, 2005, EXPERT SYST APPL2VPShin KS 2005, EXPERT SYST APPL2VPZmijewski ME, 1984, J ACCOUNTING RES 2refTam KY, 1992, MANAGE SCI2refWilson RL, 1994, DECIS SUPPORT SYST2VP, AIFZhang GQ, 1999, EUR J OPER RES2VP", "figure_id": "tab_18", "figure_label": "15", "figure_type": "table" }, { "figure_caption": "Most Relevant References for Cluster Red (identification 1) in Co-citation Network from Figure33", "figure_data": "Reference 1Cluster 2Branch 3Tsai CF, 2008, EXPERT SYST APPL1VPNanni L, 2009, EXPERT SYST APPL1VPWest D, 2005, COMPUT OPER RES1VPWest D, 2000, COMPUT OPER RES1refAlfaro E, 2008, DECIS SUPPORT SYST1VPHuang Z, 2004, DECIS SUPPORT SYST1refBreiman L, 2001, MACHINE LEARNING 1refHuang CL, 2007, EXPERT SYST APPL1refBarboza F, 2017, EXPERT SYST APPL1SEF, VPKim MJ, 2010, EXPERT SYST APPL1VP", "figure_id": "tab_19", "figure_label": "16", "figure_type": "table" }, { "figure_caption": "Most Relevant References Overall in Co-citation Network from Figure34Sorted according to reference relevance in terms of Total Link Strength (full counting method used for links) calculated by VOSviewer, in decreasing order. As VOSviewer depicts more information about the references these are included, as it complements previous data and it makes the reference more easily identifiable. 
Data from the table are retrieved from analysis in Figure", "figure_data": "Reference 1Cluster 2Branch 3Altman EI, 1968, J FINANC, V234refOhlson JA, 1980, J ACCOUNTING RES, V184refBeaver WH, 1966, J ACCOUNTING RES, V44refKumar PR, 2007, EUR J OPER RES, V1803VP, FPMin JH, 2005, EXPERT SYST APPL, V282VPShin KS, 2005, EXPERT SYST APPL, V282VPZmijewski ME, 1984, J ACCOUNTING RES, V22 4refZhang GQ, 1999, EUR J OPER RES, V1164VPTsai CF, 2008, EXPERT SYST APPL, V341VPTam KY, 1992, MANAGE SCI, V384ref", "figure_id": "tab_20", "figure_label": "17", "figure_type": "table" }, { "figure_caption": "Subject branches as defined in Subsection 3.2 -if stated as 'ref', the reference is not part of document corpus, but rather only a reference, and is thus not clustered into any branch V P Valuation of an entrepreneurial venture/Prediction of performance and/or bankruptcy F P Financial planning and other aspects of financial management", "figure_data": "", "figure_id": "tab_21", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Most Relevant Institutions by CollaborationNetwork PageRank (collaboration network in Figure37)", "figure_data": "Rank 1InstitutionCluster 21city univ hong kong42zhejiang normal univ33rutgers state univ44asia univ15dongguk univ36southwestern univ finance and econ 47natl cent univ18kyung hee univ69tianjin univ technol110", "figure_id": "tab_22", "figure_label": "18", "figure_type": "table" }, { "figure_caption": "Most Relevant Institutions by CollaborationNetwork Betweenness (collaboration network in Figure37)", "figure_data": "Rank 1InstitutionCluster 21zhejiang normal univ32asia univ13city univ hong kong44univ chinese acad sci45dongguk univ36rutgers state univ47renmin univ china48chinese acad sci39southwestern univ finance and econ 410hefei univ technol4", "figure_id": "tab_23", "figure_label": "19", "figure_type": "table" }, { "figure_caption": "Most Relevant Countries by CollaborationNetwork PageRank (collaboration network in Figure38)", "figure_data": "Rank 1CountryCluster 21china32usa23united kingdom24india35france36australia37spain38saudi arabia39italy210canada2", "figure_id": "tab_24", "figure_label": "20", "figure_type": "table" }, { "figure_caption": "Most Relevant Countries by CollaborationNetwork Betweenness (collaboration network in Figure38)", "figure_data": "Rank 1CountryCluster 21united kingdom22china33usa24france35india36australia37netherlands28u arab emirates39canada210saudi arabia3Sorted according to betweenness (measuring the number oftimes an institution is on the shortest path in between otherinstitutions, showing network bridges [162]), in decreasingorder1 Paired inversely (decreasing → increasing) with betweennesscalculated by Bibliometrix, so as to enable ease of use andreduce complexity2 Clusters from Figure 38 -the network consists in total of 4clusters, paired through country name (USA denoting UnitedStates of America, and U ARAB EMIRATES denoting UnitedArab Emirates)", "figure_id": "tab_25", "figure_label": "21", "figure_type": "table" }, { "figure_caption": "Most Relevant Countries by Collaboration Network PageRank (collaboration network in Figure 39, all countries included)", "figure_data": "Rank 1CountryCluster 21china12usa13united kingdom14india25france26spain27saudi arabia28australia29italy110canada1Sorted according to PageRank (approximating importance,assuming more important nodes have more links to them fromother sources while taking into account that not every link hasthe same weight [161]), in decreasing order1 Paired inversely 
(decreasing → increasing) with PageRank calculated by Bibliometrix, so as to enable ease of use and reduce complexity. Clusters from Figure 39 — the network consists of a total of 19 clusters, paired through country name (USA denoting the United States of America). (Table 22: Most Relevant Countries by Collaboration Network PageRank, collaboration network in Figure 39, all countries included.)

Table 23 — Most Relevant Countries by Collaboration Network Betweenness (collaboration network in Figure 39, all countries included). Sorted according to betweenness (measuring the number of times an institution is on the shortest path in between other institutions, showing network bridges [162]), in decreasing order, and paired inversely (decreasing → increasing) with betweenness calculated by Bibliometrix, so as to enable ease of use and reduce complexity. Clusters from Figure 39 — the network consists of a total of 19 clusters, paired through country name (the USA denoting the United States of America, and U ARAB EMIRATES denoting the United Arab Emirates).

Rank  Country            Cluster
 1    United Kingdom        1
 2    China                 1
 3    USA                   1
 4    India                 2
 5    Spain                 2
 6    France                2
 7    Saudi Arabia          2
 8    Australia             2
 9    U Arab Emirates       2
10    Italy                 1

Table A1 — Amortized h-index Example Calculation: for each year from 1991 to 2000, the table lists the h-index, the citable years, the pondering scalar (PS), the normalized PS, and the resulting amortized h-index.

Tables B3 and B4 — Most Highly Cited Documents Per Period of Interest from Figures 3, 31 and 32, covering the periods 2000–2004 and 2005–2008 (top three documents each) and the peak years 2000, 2005, and 2007. Sorted according to total citation in descending order, with average citations per year rounded to two decimal places, and annotated with each document's contribution to the clusters of the accompanying period (neural networks, rough sets, financial data, financial statements, and genetic algorithm).

Table B5 — Most Highly Cited Documents Per Period of Interest from Figures 3, 31 and 32, covering the period 2009–2014 (top three documents; no extreme peaks in this period). Sorted according to total citation in descending order and annotated with cluster contributions (neural network, network model, and support vector).

Tables B6 and B7 — Most Highly Cited Documents Per Period of Interest from Figures 3, 31 and 32, covering the periods 2015–2018 and 2019–2020 (top three documents each) and the peak years 2015, 2017, 2019, and 2020. Sorted according to total citation in descending order and annotated with cluster contributions (artificial intelligence, neural network, predictive performance, data mining, and curve AUC).

Table B8 — Most Highly Cited Documents Per Period of Interest from Figures 3, 31 and 32, covering the period 2021–2023 (top three documents). This is the last period of the research, relevant for ascertaining a potential future trend, but substantially lacking consistency and completeness of data to be relevant for peak-year analysis; the listed documents contributed exclusively or very highly to the artificial intelligence cluster.

Tables D9 and D10 — Historiograph (Figure 36) Documents, Parts I and II (the first 15 and the last 15 historiograph documents, respectively). Sorted according to local citation in descending order, where local citations are those from the research data and global citations are those in the entire Web of Science database (included in the data exported from Web of Science); each entry gives the local/global citation counts, abbreviated document metadata and title, and a DOI link.
Robert Kudelić; Tamara Šmaguc; Sherry Robinson
[ { "authors": "A T G Tapeh; M Z Naser", "journal": "Archives of Computational Methods in Engineering", "ref_id": "b0", "title": "Artificial intelligence, machine learning, and deep learning in structural engineering: A scientometrics review of trends and best practices", "year": "2022" }, { "authors": "N Nazareth; Y V R Reddy", "journal": "Expert Systems with Applications", "ref_id": "b1", "title": "Financial applications of machine learning: A literature review", "year": "2023" }, { "authors": "A Bahrammirzaee", "journal": "Neural Computing and Applications", "ref_id": "b2", "title": "A comparative survey of artificial intelligence applications in finance: artificial neural networks, expert system and hybrid intelligent systems", "year": "2010" }, { "authors": "P Hamet; J Tremblay", "journal": "Metabolism", "ref_id": "b3", "title": "Artificial intelligence in medicine", "year": "2017" }, { "authors": "G N Kouziokas", "journal": "Transportation Research Procedia", "ref_id": "b4", "title": "The application of artificial intelligence in public administration for forecasting high crime risk transportation areas in urban environment", "year": "2017" }, { "authors": "Z Ye", "journal": "Science of The Total Environment", "ref_id": "b5", "title": "Tackling environmental challenges in pollution controls using artificial intelligence: A review", "year": "2020" }, { "authors": "J F Arinez; Q Chang; R X Gao; C Xu; J Zhang", "journal": "Journal of Manufacturing Science and Engineering", "ref_id": "b6", "title": "Artificial intelligence in advanced manufacturing: Current status and future outlook", "year": "2020" }, { "authors": "C Dirican", "journal": "Procedia -Social and Behavioral Sciences", "ref_id": "b7", "title": "The impacts of robotics, artificial intelligence on business and economics", "year": "2015" }, { "authors": "J W Goodell; S Kumar; W M Lim; D Pattnaik", "journal": "Journal of Behavioral and Experimental Finance", "ref_id": "b8", "title": "Artificial intelligence and machine learning in finance: Identifying foundations, themes, and research clusters from bibliometric analysis", "year": "2021" }, { "authors": "M Obschonka; D B Audretsch", "journal": "Small Business Economics", "ref_id": "b9", "title": "Artificial intelligence and big data in entrepreneurship: a new era has begun", "year": "2020" }, { "authors": "T S Perry", "journal": "IEEE Spectrum", "ref_id": "b10", "title": "Eight graphs that explain software engineering salaries in 2023 tech professionals' pay by programming skills, job functions", "year": "2023" }, { "authors": "A K V N Biju; A S Thomas; J Thasneem", "journal": "Quality & Quantity: International Journal of Methodology", "ref_id": "b11", "title": "Examining the research taxonomy of artificial intelligence, deep learning & machine learning in the financial sphere-a bibliometric analysis", "year": "2023" }, { "authors": "W L Gordon; J R Key", "journal": "Journal of Systems Management", "ref_id": "b12", "title": "Artificial intelligence in support of small business information needs", "year": "1987" }, { "authors": "X Li; Y Long; M Fan; Y Chen", "journal": "Systems Research and Behavioral Science", "ref_id": "b13", "title": "Drilling down artificial intelligence in entrepreneurial management: A bibliometric perspective", "year": "2022" }, { "authors": "R A Friedenberg; R L Hensler", "journal": "AI Magazine", "ref_id": "b14", "title": "Strategy and business planning for artificial intelligence companies: A guide for entrepreneurs", "year": "1986" }, { "authors": "D 
Chalmers; N G Mackenzie; S Carter", "journal": "Entrepreneurship Theory and Practice", "ref_id": "b15", "title": "Artificial intelligence and entrepreneurship: Implications for venture creation in the fourth industrial revolution", "year": "2020" }, { "authors": "S Nambisan", "journal": "Entrepreneurship Theory and Practice", "ref_id": "b16", "title": "Digital entrepreneurship: Toward a digital technology perspective of entrepreneurship", "year": "2017" }, { "authors": "D Iurchenko; J S Petty; S Jain", "journal": "Journal of Small Business Management", "ref_id": "b17", "title": "Collective entrepreneurship makes strange bedfellows: Examining framing activity in construction of the equity crowdfunding market", "year": "2023" }, { "authors": "G Giuggioli; M M Pellegrini", "journal": "International Journal of Entrepreneurial Behavior & Research", "ref_id": "b18", "title": "Artificial intelligence as an enabler for entrepreneurs: a systematic literature review and an agenda for future research", "year": "2022" }, { "authors": "J C Kaminski; C Hopp", "journal": "Small Business Economics", "ref_id": "b19", "title": "Predicting outcomes in crowdfunding campaigns with textual, visual, and linguistic signals", "year": "2020" }, { "authors": "P P Oo; L Jiang; A Sahaym; A Parhankangas; R Chan", "journal": "Journal of Business Venturing", "ref_id": "b20", "title": "Actions in words: How entrepreneurs use diversified and changing speech acts to achieve funding success", "year": "2023" }, { "authors": "A Zemankova", "journal": "IEEE", "ref_id": "b21", "title": "Artificial Intelligence in Audit and Accounting: Development, Current Trends, Opportunities and Threats -Literature Review", "year": "2019" }, { "authors": "A Krishna; A Agrawal; A Choudhary", "journal": "IEEE", "ref_id": "b22", "title": "Predicting the Outcome of Startups: Less Failure, More Success", "year": "2016" }, { "authors": "P Koumbarakis; T Volery", "journal": "Journal of Small Business Management", "ref_id": "b23", "title": "Predicting new venture gestation outcomes with machine learning methods", "year": "2022" }, { "authors": "R Zhang; Z Tian; K J Mccarthy; X Wang; K Zhang", "journal": "Journal of Forecasting", "ref_id": "b24", "title": "Application of machine learning techniques to predict entrepreneurial firm valuation", "year": "2022" }, { "authors": "N Syam; A Sharma", "journal": "Industrial Marketing Management", "ref_id": "b25", "title": "Waiting for a sales renaissance in the fourth industrial revolution: Machine learning and artificial intelligence in sales research and practice", "year": "2018" }, { "authors": "M Lévesque; M Obschonka; S Nambisan", "journal": "Entrepreneurship Theory and Practice", "ref_id": "b26", "title": "Pursuing impactful entrepreneurship research using artificial intelligence", "year": "2020" }, { "authors": "B K Wong; J A Monaco", "journal": "Information & Management", "ref_id": "b27", "title": "Expert system applications in business: A review and analysis of the literature (1977-1993)", "year": "1995" }, { "authors": "T Tamai; M Fujita", "journal": "International Journal of Computer Applications in Technology", "ref_id": "b28", "title": "Development of an expert system for credit card application assessment", "year": "1989" }, { "authors": "P J Elmer; D M Borowski", "journal": "Financial Management", "ref_id": "b29", "title": "An expert system approach to financial analysis: The case of s&l bankruptcy", "year": "1988" }, { "authors": "M J Shaw; J A Gentry", "journal": "Financial Management", 
"ref_id": "b30", "title": "Using an expert system with inductive learning to evaluate business loans", "year": "1988" }, { "authors": "M Klein", "journal": "Engineering Costs and Production Economics", "ref_id": "b31", "title": "Finsim expert; a kb/dss for financial analysis and planning", "year": "1989" }, { "authors": "V Srinivasan; Y H Kim", "journal": "Financial Management", "ref_id": "b32", "title": "Designing expert financial systems: A case study of corporate credit management", "year": "1988" }, { "authors": "R K Chhikara", "journal": "American Journal of Agricultural Economics", "ref_id": "b33", "title": "The state of the art in credit evaluation", "year": "1989" }, { "authors": "T Yamaguchi", "journal": "Future Generation Computer Systems", "ref_id": "b34", "title": "A technical analysis expert system in the stock market", "year": "1989" }, { "authors": "J K Lee; H S Kim; S C Chu", "journal": "Expert Systems", "ref_id": "b35", "title": "Intelligent stock portfolio management system", "year": "1989" }, { "authors": "S L Loo", "journal": "Expert Systems", "ref_id": "b36", "title": "Collating real-time market indicators through a temporal paradigm", "year": "1989" }, { "authors": "H Braun; J S Chandler", "journal": "Decision Sciences", "ref_id": "b37", "title": "Predicting stock market behavior through rule induction: An application of the learning-from-example approach", "year": "1987" }, { "authors": "S Dutta; S Shekhar", "journal": "Neural Networks", "ref_id": "b38", "title": "Using neural networks for generalization problems", "year": "1988" }, { "authors": "J K Shim; J S Rice", "journal": "Journal of Systems Management", "ref_id": "b39", "title": "Expert systems applications to managerial accounting", "year": "1988" }, { "authors": "R Avi; F ; R S ", "journal": "Modelling, Measurement and Control C", "ref_id": "b40", "title": "Auditing computerized accounting information: Expert systems and artificial intelligence for integrated test facilities and statistical sampling", "year": "1985" }, { "authors": "C W Dungan; J S Chandler", "journal": "Expert Systems", "ref_id": "b41", "title": "Auditor: a microcomputer-based expert system to support auditors in the field", "year": "1985" }, { "authors": "M A Vasarhelyi", "journal": "Markus Wiener Pub", "ref_id": "b42", "title": "Artificial intelligence in accounting and auditing: The use of expert systems", "year": "1989" }, { "authors": "M D Akers; G L Porter; E J Blocher; W G Mister", "journal": "Management Accounting", "ref_id": "b43", "title": "Expert systems for management accountants", "year": "1986" }, { "authors": "J F Dillard; J F Mutchler", "journal": "Expert Systems", "ref_id": "b44", "title": "Expertise in assessing solvency problems", "year": "1987" }, { "authors": "G Zhang; M Y Hu; B E Patuwo; D C Indro", "journal": "European Journal of Operational Research", "ref_id": "b45", "title": "Artificial neural networks in bankruptcy prediction: General framework and cross-validation analysis", "year": "1999" }, { "authors": "R Pacheco; A Martins; R M Barcia; S Khator", "journal": "IEEE", "ref_id": "b46", "title": "A hybrid intelligent system applied to financial statement analysis", "year": "1996" }, { "authors": "H Jo; I Han", "journal": "Expert Systems with Applications", "ref_id": "b47", "title": "Integration of case-based forecasting, neural network, and discriminant analysis for bankruptcy prediction", "year": "1996" }, { "authors": "R Lacher; P K Coats; S C Sharma; L Fant", "journal": "European Journal of Operational 
Research", "ref_id": "b48", "title": "A neural network for classifying the financial health of a firm", "year": "1995" }, { "authors": "E I Altman; G Marco; F Varetto", "journal": "Journal of Banking & Finance", "ref_id": "b49", "title": "Corporate distress diagnosis: Comparisons using linear discriminant analysis and neural networks (the italian experience)", "year": "1994" }, { "authors": "R L Wilson; R Sharda", "journal": "Decision Support Systems", "ref_id": "b50", "title": "Bankruptcy prediction using neural networks", "year": "1994" }, { "authors": "S.-M Huang; C.-F Tsai; D C Yen; Y.-L Cheng", "journal": "Expert Systems with Applications", "ref_id": "b51", "title": "A hybrid financial analysis model for business failure prediction", "year": "2008" }, { "authors": "K C Lee; I Han; Y Kwon", "journal": "Decision Support Systems", "ref_id": "b52", "title": "Hybrid neural network models for bankruptcy predictions", "year": "1996" }, { "authors": "F Varetto", "journal": "Journal of Banking & Finance", "ref_id": "b53", "title": "Genetic algorithms applications in the analysis of insolvency risk", "year": "1998" }, { "authors": "M J Cerullo; V Cerullo", "journal": "Computer Fraud & Security", "ref_id": "b54", "title": "Using neural networks to predict financial reporting fraud: Part 1", "year": "1999" }, { "authors": "R Wheeler; S Aitken", "journal": "Springer", "ref_id": "b55", "title": "Multiple Algorithms for Fraud Detection", "year": "2000" }, { "authors": "T E Mckee", "journal": "Journal of Emerging Technologies in Accounting", "ref_id": "b56", "title": "A meta-learning approach to predicting financial statement fraud", "year": "2009" }, { "authors": "M.-J Kim; T.-S Kim", "journal": "Springer", "ref_id": "b57", "title": "A Neural Classifier with Fraud Density Map for Effective Credit Card Fraud Detection", "year": "2002" }, { "authors": "Y Kou; C.-T Lu; S Sirwongwattana; Y.-P Huang", "journal": "IEEE", "ref_id": "b58", "title": "Survey of fraud detection techniques", "year": "2004" }, { "authors": "J T Quah; M Sriganesh", "journal": "Expert Systems with Applications", "ref_id": "b59", "title": "Real-time credit card fraud detection using computational intelligence", "year": "2008" }, { "authors": "W.-F Yu; N Wang", "journal": "IEEE", "ref_id": "b60", "title": "Research on Credit Card Fraud Detection Model Based on Distance Sum", "year": "2009" }, { "authors": "S R Das; M Y Chen", "journal": "Management Science", "ref_id": "b61", "title": "Yahoo! 
for amazon: Sentiment extraction from small talk on the web", "year": "2007" }, { "authors": "W.-T Pan", "journal": "Knowledge-Based Systems", "ref_id": "b62", "title": "A new fruit fly optimization algorithm: Taking the financial distress model as an example", "year": "2012" }, { "authors": "D Belanche; L V Casaló; C Flavián", "journal": "Industrial Management & Data Systems", "ref_id": "b63", "title": "Artificial intelligence in FinTech: understanding robo-advisors adoption among customers", "year": "2019" }, { "authors": "M Palmié; J Wincent; V Parida; U Caglar", "journal": "Technological Forecasting and Social Change", "ref_id": "b64", "title": "The evolution of the financial technology ecosystem: An introduction and agenda for future research on disruptive innovations in ecosystems", "year": "2020" }, { "authors": "N R Mosteanu; A Faccia", "journal": "Quality -Access to Success", "ref_id": "b65", "title": "Digital systems and new challenges of financial management -fintech, xbrl, blockchain and cryptocurrencies", "year": "2020" }, { "authors": "A Ashta; H Herrmann", "journal": "Strategic Change", "ref_id": "b66", "title": "Artificial intelligence and fintech: An overview of opportunities and risks for banking, investments, and microfinance", "year": "2021" }, { "authors": "T Hendershott; X M Zhang; J L Zhao; Z E Zheng", "journal": "Information Systems Research", "ref_id": "b67", "title": "FinTech as a game changer: Overview of research frontiers", "year": "2021" }, { "authors": "H Yuan; R Y Lau; W Xu", "journal": "Decision Support Systems", "ref_id": "b68", "title": "The determinants of crowdfunding success: A semantic text analytics approach", "year": "2016" }, { "authors": "J.-Y Yeh; C.-H Chen", "journal": "Journal of Enterprise Information Management", "ref_id": "b69", "title": "A machine learning approach to predict the success of crowdfunding fintech project", "year": "2020" }, { "authors": "W Wang; L He; Y J Wu; M Goh", "journal": "Computers in Human Behavior", "ref_id": "b70", "title": "Signaling persuasion in crowdfunding entrepreneurial narratives: The subjectivity vs objectivity debate", "year": "2021" }, { "authors": "P J Wilson", "journal": "", "ref_id": "b71", "title": "Expert systems in business", "year": "1987" }, { "authors": "S Dutta; S Shekhar", "journal": "IEEE", "ref_id": "b72", "title": "Bond rating: a nonconservative application of neural networks", "year": "1988" }, { "authors": "A J Surkan; J C Singleton", "journal": "IEEE", "ref_id": "b73", "title": "Neural networks for bond rating improved by multiple hidden layers", "year": "1990" }, { "authors": "G Kumar; S Jain; U P Singh", "journal": "Archives of Computational Methods in Engineering", "ref_id": "b74", "title": "Stock market forecasting using computational intelligence: A survey", "year": "2021" }, { "authors": "F E Tay; L Cao", "journal": "Omega", "ref_id": "b75", "title": "Application of support vector machines in financial time series forecasting", "year": "2001" }, { "authors": "B K Wong; Y Selvi", "journal": "Information & Management", "ref_id": "b76", "title": "Neural network applications in finance: A review and analysis of literature (1990-1996", "year": "1998" }, { "authors": "F Ecer", "journal": "Economic Research-Ekonomska Istraživanja", "ref_id": "b77", "title": "Comparing the bank failure prediction performance of neural networks and support vector machines: The turkish case", "year": "2013" }, { "authors": "C.-F Tsai", "journal": "Expert Systems", "ref_id": "b78", "title": "Financial decision 
support using neural networks and support vector machines", "year": "2008" }, { "authors": "W Huang; Y Nakamori; S.-Y Wang", "journal": "Computers & Operations Research", "ref_id": "b79", "title": "Forecasting stock market movement direction with support vector machine", "year": "2005" }, { "authors": "L Cao; F E Tay", "journal": "Neural Computing & Applications", "ref_id": "b80", "title": "Financial forecasting using support vector machines", "year": "2001" }, { "authors": "K Kim", "journal": "Neurocomputing", "ref_id": "b81", "title": "Financial time series forecasting using support vector machines", "year": "2003" }, { "authors": "S P Das; S Padhy", "journal": "International Journal of Computer Applications", "ref_id": "b82", "title": "Support vector machines for prediction of futures prices in indian stock market", "year": "2012" }, { "authors": "Z Huang; H Chen; C.-J Hsu; W.-H Chen; S Wu", "journal": "Decision Support Systems", "ref_id": "b83", "title": "Credit rating analysis with support vector machines and neural networks: a market comparative study", "year": "2004" }, { "authors": "S.-T Li; W Shiue; M.-H Huang", "journal": "Expert Systems with Applications", "ref_id": "b84", "title": "The evaluation of consumer loans using support vector machines", "year": "2006" }, { "authors": "X.-F Hui; J Sun", "journal": "Springer", "ref_id": "b85", "title": "An Application of Support Vector Machine to Companies", "year": "2006" }, { "authors": "E Osuna; R Freund; F Girosi", "journal": "IEEE", "ref_id": "b86", "title": "An improved training algorithm for support vector machines", "year": "1997" }, { "authors": "F Z Xing; E Cambria; R E Welsch", "journal": "Artificial Intelligence Review", "ref_id": "b87", "title": "Natural language based financial forecasting: a survey", "year": "2017" }, { "authors": "I E Fisher; M R Garnsey; M E Hughes", "journal": "Intelligent Systems in Accounting, Finance and Management", "ref_id": "b88", "title": "Natural language processing in accounting, auditing and finance: A synthesis of the literature with a roadmap for future research", "year": "2016" }, { "authors": "S R Das", "journal": "Foundations and Trends® in Finance", "ref_id": "b89", "title": "Text and context: Language analytics in finance", "year": "2014" }, { "authors": "L Purda; D Skillicorn", "journal": "Contemporary Accounting Research", "ref_id": "b90", "title": "Accounting variables, deception, and a bag of words: Assessing the tools of fraud detection", "year": "2014" }, { "authors": "K Bochkay; S V Brown; A J Leone; J W Tucker", "journal": "", "ref_id": "b91", "title": "Textual analysis in accounting: What's next? 
Contemporary Accounting Research", "year": "2022" }, { "authors": "C Blanco-González-Tejero; B Ribeiro-Navarrete; E Cano-Marin; W C Mcdowell", "journal": "International Journal on Semantic Web and Information Systems", "ref_id": "b92", "title": "A systematic literature review on the role of artificial intelligence in entrepreneurial activity", "year": "2023" }, { "authors": "B B Gupta; A Gaurav; P K Panigrahi; V Arya", "journal": "123 Technological Forecasting and Social Change", "ref_id": "b93", "title": "Analysis of artificial intelligence-based technologies and approaches on sustainable entrepreneurship", "year": "2023" }, { "authors": "M D Fethi; F Pasiouras", "journal": "European Journal of Operational Research", "ref_id": "b94", "title": "Assessing bank efficiency and performance with operational research and artificial intelligence techniques: A survey", "year": "2010" }, { "authors": "K Omoteso", "journal": "Expert Systems with Applications", "ref_id": "b95", "title": "The application of artificial intelligence in auditing: Looking back to the future", "year": "2012" }, { "authors": "J W Do Prado", "journal": "Scientometrics", "ref_id": "b96", "title": "Multivariate analysis of credit risk and bankruptcy research data: a bibliometric study involving different knowledge fields (1968-2014)", "year": "2016" }, { "authors": "H A Alaka", "journal": "Expert Systems with Applications", "ref_id": "b97", "title": "Systematic review of bankruptcy prediction models: Towards a framework for tool selection", "year": "2018" }, { "authors": "Y Shi; X Li", "journal": "Heliyon", "ref_id": "b98", "title": "A bibliometric study on intelligent techniques of bankruptcy prediction for corporate firms", "year": "2019" }, { "authors": "F Königstorfer; S Thalmann", "journal": "Journal of Behavioral and Experimental Finance", "ref_id": "b99", "title": "Applications of artificial intelligence in commercial banks -a research agenda for behavioral finance", "year": "2020" }, { "authors": "A Thakkar; K Chaudhari", "journal": "Archives of Computational Methods in Engineering", "ref_id": "b100", "title": "A comprehensive survey on portfolio optimization, stock price and trend prediction using particle swarm optimization", "year": "2020" }, { "authors": "S Ahmed; M M Alshater; A E Ammari; H Hammami", "journal": "Research in International Business and Finance", "ref_id": "b101", "title": "Artificial intelligence and machine learning in finance: A bibliometric review", "year": "2022" }, { "authors": "A Gómez", "journal": "Archives of Computational Methods in Engineering", "ref_id": "b102", "title": "A survey on quantum computational finance for derivatives pricing and VaR", "year": "2022" }, { "authors": "B Chaklader; B B Gupta; P K Panigrahi", "journal": "Journal of Business Research", "ref_id": "b103", "title": "Analyzing the progress of FINTECH-companies and their integration with new technologies for innovation and entrepreneurship", "year": "2023" }, { "authors": "X.-Q Chen", "journal": "Finance Research Letters", "ref_id": "b104", "title": "Explainable artificial intelligence in finance: A bibliometric review", "year": "2023" }, { "authors": "N Donthu; S Kumar; D Mukherjee; N Pandey; W M Lim", "journal": "Journal of Business Research", "ref_id": "b105", "title": "How to conduct a bibliometric analysis: An overview and guidelines", "year": "2021" }, { "authors": "M Aria; C Cuccurullo", "journal": "Journal of Informetrics", "ref_id": "b106", "title": "bibliometrix : An r-tool for comprehensive science mapping 
analysis", "year": "2017" }, { "authors": "M Gorman; W A Sahlman", "journal": "Journal of Business Venturing", "ref_id": "b107", "title": "What do venture capitalists do", "year": "1989" }, { "authors": "L Alemany; J J Andreoli", "journal": "Cambridge University Press", "ref_id": "b108", "title": "Entrepreneurial Finance: The Art and Science of Growing Ventures", "year": "2018" }, { "authors": "J Y Abor", "journal": "Springer International Publishing AG", "ref_id": "b109", "title": "Entrepreneurial Finance for MSMEs: A Managerial Approach for Developing Markets", "year": "2016" }, { "authors": "S Kumar; W M Lim; U Sivarajah; J Kaur", "journal": "Information Systems Frontiers", "ref_id": "b110", "title": "Artificial intelligence and blockchain integration in business: Trends from a bibliometric-content analysis", "year": "2022" }, { "authors": "S Kumar; N Pandey; W M Lim; A N Chatterjee; N Pandey", "journal": "Journal of Business Research", "ref_id": "b111", "title": "What do we know about transfer pricing? insights from bibliometric analysis", "year": "2021" }, { "authors": "S Kumar; D Sharma; S Rao; W M Lim; S K Mangla; Past", "journal": "Annals of Operations Research", "ref_id": "b112", "title": "present, and future of sustainable finance: insights from big data analytics through machine learning of scholarly research", "year": "2022" }, { "authors": "F Osareh", "journal": "Libri", "ref_id": "b113", "title": "Bibliometrics, citation analysis and co-citation analysis: A review of literature 1", "year": "1996" }, { "authors": "A Pritchard", "journal": "Journal of documentation", "ref_id": "b114", "title": "Statistical bibliography or bibliometrics", "year": "1969" }, { "authors": "R N Broadus", "journal": "Scientometrics", "ref_id": "b115", "title": "Toward a definition of \"bibliometrics", "year": "1987" }, { "authors": "O Ellegaard; J A Wallin", "journal": "Scientometrics", "ref_id": "b116", "title": "The bibliometric analysis of scholarly production: How great is the impact?", "year": "2015" }, { "authors": "S Hire; S Sandbhor; K Ruikar", "journal": "Archives of Computational Methods in Engineering", "ref_id": "b117", "title": "Bibliometric survey for adoption of building information modeling (BIM) in construction industry-a safety perspective", "year": "2021" }, { "authors": "", "journal": "Clarivate", "ref_id": "b118", "title": "Web of Science Core Collection", "year": "2023" }, { "authors": "J Kumari; E Kumar; D Kumar", "journal": "Archives of Computational Methods in Engineering", "ref_id": "b119", "title": "A structured analysis to study the role of machine learning and deep learning in the healthcare sector with big data analytics", "year": "2021" }, { "authors": "Y Guo; Z Hao; S Zhao; J Gong; F Yang", "journal": "Journal of Medical Internet Research", "ref_id": "b120", "title": "Artificial intelligence in health care: Bibliometric analysis", "year": "2020" }, { "authors": "M Aria; C Cuccurullo", "journal": "", "ref_id": "b121", "title": "bibliometrix: An R-tool for comprehensive science mapping analysis", "year": "2023" }, { "authors": "E Garfield; I H Sher", "journal": "Journal of the American Society for Information Science", "ref_id": "b122", "title": "KeyWords plus™-algorithmic derivative indexing", "year": "1993" }, { "authors": "J Zhang", "journal": "Journal of the Association for Information Science and Technology", "ref_id": "b123", "title": "Comparing keywords plus of WOS and author keywords: A case study of patient adherence research", "year": "2015" }, { "authors": " 
Openai", "journal": "", "ref_id": "b124", "title": "Improving language understanding with unsupervised learning", "year": "2018" }, { "authors": "A Vaswani", "journal": "", "ref_id": "b125", "title": "Attention is all you need", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b126", "title": "", "year": "2017" }, { "authors": "D Bahdanau; K Cho; Y Bengio", "journal": "", "ref_id": "b127", "title": "Neural machine translation by jointly learning to align and translate", "year": "2014" }, { "authors": "S C Bradford", "journal": "Engineering", "ref_id": "b128", "title": "Sources of information on specific subjects", "year": "1934" }, { "authors": "B C Brookes", "journal": "Journal of Information Science", "ref_id": "b129", "title": "sources of information on specific subjects\" by s.c. bradford", "year": "1985" }, { "authors": "B C Arnold", "journal": "Springer", "ref_id": "b130", "title": "Pareto and Generalized Pareto Distributions", "year": "2008" }, { "authors": "J E Hirsch", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b131", "title": "An index to quantify an individual's scientific research output", "year": "2005" }, { "authors": "N Van Eck; L Waltman; J Van Den Berg; U Kaymak", "journal": "IEEE Computational Intelligence Magazine", "ref_id": "b132", "title": "Visualizing the computational intelligence field [application notes", "year": "2006" }, { "authors": "A Perianes-Rodriguez; L Waltman; N J Van Eck", "journal": "Journal of Informetrics", "ref_id": "b133", "title": "Constructing bibliometric networks: A comparison between full and fractional counting", "year": "2016" }, { "authors": "A Viana-Lora; M G N Lo Andreu", "journal": "Humanities and Social Sciences Communications", "ref_id": "b134", "title": "Bibliometric analysis of trends in COVID-19 and tourism", "year": "2022" }, { "authors": "A J Lotka", "journal": "Journal of the Washington Academy of Sciences", "ref_id": "b135", "title": "The frequency distribution of scientific productivity", "year": "1926" }, { "authors": "A.-W Harzing; Publish; Perish", "journal": "", "ref_id": "b136", "title": "", "year": "1999" }, { "authors": "A Ikarashi", "journal": "Nature", "ref_id": "b137", "title": "Japanese research is no longer world class -here's why", "year": "2023" }, { "authors": "J W Lewis", "journal": "Encyclopedia Britannica", "ref_id": "b138", "title": "", "year": "2023" }, { "authors": "A Gopnik", "journal": "Encyclopedia Britannica", "ref_id": "b139", "title": "United states", "year": "2023" }, { "authors": "W Yu; -I; C Lee; Y I Lew; H.-B Im; B.-H. South Hahn; Korea", "journal": "Encyclopedia Britannica", "ref_id": "b140", "title": "", "year": "2023" }, { "authors": "S A Wolpert", "journal": "Encyclopedia Britannica", "ref_id": "b141", "title": "", "year": "2023" }, { "authors": "M J Viguera", "journal": "Encyclopedia Britannica", "ref_id": "b142", "title": "", "year": "2023" }, { "authors": "R Price; Lii", "journal": "S. 
Philosophical Transactions of the Royal Society of London", "ref_id": "b143", "title": "A demonstration of the second rule in the essay towards the solution of a problem in the doctrine of chances", "year": "1764" }, { "authors": "S B Mcgrayne", "journal": "Yale University Press", "ref_id": "b144", "title": "The Theory That Would Not Die: How Bayes' Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy", "year": "2011" }, { "authors": "D R Bellhouse", "journal": "Statistical Science", "ref_id": "b145", "title": "The reverend thomas bayes, FRS: A biography to celebrate the tercentenary of his birth", "year": "2004" }, { "authors": "T Bayes; Lii", "journal": "S. Philosophical Transactions of the Royal Society of London", "ref_id": "b146", "title": "An essay towards solving a problem in the doctrine of chances. By the late Rev. Mr. Bayes, F. R. S. communicated by Mr. Price, in a letter to John Canton", "year": "1763" }, { "authors": "V V Acharya; M Richardson", "journal": "Critical Review", "ref_id": "b147", "title": "CAUSES OF THE FINANCIAL CRISIS", "year": "2009" }, { "authors": "J J Hopfield", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b148", "title": "Neural networks and physical systems with emergent collective computational abilities", "year": "1982" }, { "authors": "C C Tappert", "journal": "IEEE", "ref_id": "b149", "title": "Who Is the Father of Deep Learning?", "year": "2019" }, { "authors": "T Kohonen", "journal": "", "ref_id": "b150", "title": "Automatic formation of topological maps of patterns in a selforganizing system", "year": "1981" }, { "authors": "T Kohonen", "journal": "Biological Cybernetics", "ref_id": "b151", "title": "Self-organized formation of topologically correct feature maps", "year": "1982" }, { "authors": "T Kohonen", "journal": "", "ref_id": "b152", "title": "The self-organizing map", "year": "1990" }, { "authors": "Z Wu; J M Mcgoogan", "journal": "JAMA", "ref_id": "b153", "title": "Characteristics of and important lessons from the coronavirus disease 2019 (COVID-19) outbreak in china", "year": "2020" }, { "authors": "P Pons; M Latapy", "journal": "", "ref_id": "b154", "title": "Computing communities in large networks using random walks", "year": "2005" }, { "authors": "Z Yang; R Algesheimer; C J Tessone", "journal": "Scientific Reports", "ref_id": "b155", "title": "A comparative analysis of community detection algorithms on artificial networks", "year": "2016" }, { "authors": "D Dhall; R Kaur; M Juneja", "journal": "Springer International Publishing", "ref_id": "b156", "title": "Machine Learning: A Review of the Algorithms and Its Applications", "year": "2019" }, { "authors": "A Das; P Rad", "journal": "", "ref_id": "b157", "title": "Opportunities and challenges in explainable artificial intelligence (xai): A survey", "year": "2020" }, { "authors": "C Zhang; Y Lu", "journal": "Journal of Industrial Information Integration", "ref_id": "b158", "title": "Study on artificial intelligence: The state of the art and future prospects", "year": "2021" }, { "authors": "M Bianchini; M Gori; F Scarselli", "journal": "ACM Transactions on Internet Technology", "ref_id": "b159", "title": "Inside PageRank", "year": "2005" }, { "authors": "D R White; S P Borgatti", "journal": "Social Networks", "ref_id": "b160", "title": "Betweenness centrality measures for directed graphs", "year": "1994" }, { "authors": "A M I Turing", "journal": "Mind LIX", "ref_id": "b161", "title": "Computing 
Machinery and Intelligence", "year": "1950" }, { "authors": "G Dyson", "journal": "Nature", "ref_id": "b162", "title": "The dawn of computing", "year": "2012" }, { "authors": "E S Brunette; R C Flemmer; C L Flemmer", "journal": "IEEE", "ref_id": "b163", "title": "A review of artificial intelligence", "year": "2009" }, { "authors": "L Gyongyosi; S Imre", "journal": "Computer Science Review", "ref_id": "b164", "title": "A survey on quantum computing technology", "year": "2019" }, { "authors": "K.-A Brickman", "journal": "Physical Review A", "ref_id": "b165", "title": "Implementation of grover's quantum search algorithm in a scalable system", "year": "2005" }, { "authors": "T Monz", "journal": "Science", "ref_id": "b166", "title": "Realization of a scalable shor algorithm", "year": "2016" }, { "authors": "R Kudelić; N Ivković; T Šmaguc", "journal": "Springer", "ref_id": "b167", "title": "A Brief Overview of Randomized Algorithms", "year": "2023" }, { "authors": "R Kudelić", "journal": "Springer International Publishing", "ref_id": "b168", "title": "Feedback Arc Set: A History of the Problem and Algorithms", "year": "2022" }, { "authors": "R Kudelić", "journal": "Springer Nature", "ref_id": "b169", "title": "A Short Sketch of Solid Algorithms for Feedback Arc Set", "year": "2023" }, { "authors": "R Kudelić; N Ivković", "journal": "Expert Systems with Applications", "ref_id": "b170", "title": "Ant inspired monte carlo algorithm for minimum feedback arc set", "year": "2019" }, { "authors": "R Kudelić", "journal": "Springer International Publishing", "ref_id": "b171", "title": "Feedback Arc Set", "year": "2022" }, { "authors": "R Kudelić", "journal": "Springer International Publishing", "ref_id": "b172", "title": "Papers and Algorithms", "year": "2022" }, { "authors": "R Kudelić", "journal": "Springer International Publishing", "ref_id": "b173", "title": "Having the Right Tool", "year": "2022" }, { "authors": "R Kudelić", "journal": "Applied Soft Computing", "ref_id": "b174", "title": "Monte-Carlo randomized algorithm for minimal feedback arc set problem", "year": "2016" }, { "authors": "M Dorigo; M Birattari; T Stutzle", "journal": "IEEE Computational Intelligence Magazine", "ref_id": "b175", "title": "Ant colony optimization", "year": "2006" }, { "authors": "S.-C Wang", "journal": "Springer US", "ref_id": "b176", "title": "Artificial Neural Network", "year": "2003" }, { "authors": "P G Asteris; V G Mokos", "journal": "Neural Computing and Applications", "ref_id": "b177", "title": "Concrete compressive strength using artificial neural networks", "year": "2019" }, { "authors": "Q Zhang; H Yu; M Barbiero; B Wang; M Gu", "journal": "Light: Science & Applications", "ref_id": "b178", "title": "Artificial neural networks enabled by nanophotonics", "year": "2019" }, { "authors": "S Bubeck", "journal": "", "ref_id": "b179", "title": "Sparks of artificial general intelligence: Early experiments with gpt-4", "year": "2023" }, { "authors": "L Phan; S Li; K Mentzer", "journal": "Computer Information Systems Journal Articles (Issues in Information Systems)", "ref_id": "b180", "title": "Blockchain Technology and The Current Discussion on Fraud", "year": "2019" }, { "authors": "M R Islam", "journal": "IEEE", "ref_id": "b181", "title": "A Review on Blockchain Security Issues and Challenges", "year": "2021" }, { "authors": "Q Feng; D He; S Zeadally; M K Khan; N Kumar", "journal": "Journal of Network and Computer Applications", "ref_id": "b182", "title": "A survey on privacy protection in blockchain system", 
"year": "2019" }, { "authors": "J Liu; A Serletis", "journal": "Open Economies Review", "ref_id": "b183", "title": "Volatility in the Cryptocurrency Market", "year": "2019" }, { "authors": "E Bouri; C K M Lau; B Lucey; D Roubaud", "journal": "Finance Research Letters", "ref_id": "b184", "title": "Trading volume and the predictability of return and volatility in the cryptocurrency market", "year": "2019" }, { "authors": "P Katsiampa", "journal": "Research in International Business and Finance", "ref_id": "b185", "title": "An empirical investigation of volatility dynamics in the cryptocurrency market", "year": "2019" }, { "authors": "M F Dixon; I Halperin; P Bilokon", "journal": "Springer International Publishing", "ref_id": "b186", "title": "Machine Learning in Finance: From Theory to Practice", "year": "2020" } ]
[ { "formula_coordinates": [ 25, 100.07, 266.98, 77.21, 13.47 ], "formula_id": "formula_0", "formula_text": "10 × 1 3 2 ≈ 1) [136]" } ]
2023-11-22
[ { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Introduction", "publication_ref": [ "b58", "b79", "b105", "b110", "b79", "b58", "b79", "b105", "b110", "b105", "b50", "b110", "b69", "b62", "b15", "b32", "b1", "b40", "b92", "b36", "b60", "b92", "b36", "b60", "b69", "b0", "b108", "b105", "b79", "b74" ], "table_ref": [], "text": "Text is the written form of human language and is considered one of the oldest and most powerful creations of human civilization. Text is the most effective and reliable means of communication, collaboration, and documentation. Our surroundings are rich with textual information that assists in interpreting the world around us. Automatic natural scene text detection and recognition have become an active research problem due to numerous reallife practical applications (Lin et al., 2020;Park et al., 2010;Wojna et al., 2017;Yi and Tian, 2012). A common source of natural scene text is the signboard which contains crucial information such as the name of the business organization and its current address. However, retrieving such information automatically from the signboard is a challenging problem due to the complex background with interference, high variation and diversity in natural scene text, and imperfect lighting during imaging (Park et al., 2010). With the introduction of sophisticated deep learning-based techniques, we are witnessing a dramatic improvement in the field of computer vision and natural language processing (NLP). In this research work, we aim to develop deep learning-based models for efficiently detecting, recognizing, and parsing address information from Bangla signboard.\nWith the recent advent in the field of computer vision using deep learning-based techniques, natural scene text detection and recognition have become an active research problem due to numerous reallife practical applications (Lin et al., 2020;Park et al., 2010;Wojna et al., 2017;Yi and Tian, 2012).\nBusiness organization manually annotates on mapping platforms like Google Maps 1 or Open-StreetMap 2 so that the customer easily locate these organizations on mapping platforms. However, a significant number of business organizations in developing countries like Bangladesh remain unannotated in mapping platforms due to a lack of technical knowledge of the business owner. A popular service of Google is Google Street View 3 which provides interactively panoramas along the street of the metropolitan area. From the street imagery, the signboard of the different business organizations can be detected. An effective text detector and extractor from signboards facilitate automatic annotation of mapping platforms from street imagery (Wojna et al., 2017).\nAccording to a recent report by the World Health Organization (WHO), around 2.2 billion people are suffering from near or distance vision impairment where 1 billion people are facing moderate to severe blindness 4 . Different AI-driven technologies have been introduced to alleviate the day-to-day problems faced by visually impaired people (Kuriakose et al., 2022). Developing assistive navigation applications is an active research area among practitioners (Khan et al., 2021a). Extracting text information from signboards facilitates to development of camera-based navigation applications for visually impaired people (Yi and Tian, 2012). 
We can develop such a navigation application for smartphones or embedded devices such as smart glass.\nMapping platform users usually search for a location or organization by providing a raw input address text which needs to be efficiently processed to find the point of interest on the map. The raw input address text needs to be parsed to find out the segment of the address to effectively search on a mapping platform. Only rule-based techniques fail to parse all the variations of the user address input. Therefore, automatic parsing of raw input address text using natural language processing facilitates efficient searching on map applications.\nDuring the registration on an information system, a user provides their present or permanent address in a raw text field. However, before inserting the address text into the database, the address text needs to be parsed to find different address components. An address parsing using natural language processing facilitates automatic insertion on a database from raw input address text (Mokhtari et al., 2019).\nFigure 1 shows an overview of our research problem. We develop an end-to-end system for detecting, recognizing, and parsing the address information from the signboard. First, we have to detect the signboard from the raw natural scene image. The next step is to detect the address text region from the signboard. After detecting the address text region, the address text is recognized from the cropped address text. To minimize the recognition error, we need to conduct necessary post correction on the recognized address text. Finally, the corrected address text is parsed to identify the different segments of the address. (Long et al., 2021;Khan et al., 2021b;Chen et al., 2021). The Yolo-based model is the state-of-the-art object detection model for text detection (Haifeng and Siqi, 2020). In literature, different segmentation-based (Ahmed et al., 2019;Isthiaq and Saif, 2020) and segmentationfree techniques (Shi et al., 2016;He et al., 2016;Liu et al., 2018) have been deployed for text recognition from images. Due to the challenges involved in natural scene text, segmentation-free techniques like Connectionist Temporal Classification (CTC) based (Shi et al., 2016;He et al., 2016) or Encoder-Decoder (Liu et al., 2018) models outperform segmentation-based techniques for text recognition. However, segmentation-free techniques for text recognition remain nearly unexplored for lowresource languages like Bangla. Moreover, no correction model has been found to improve the performance of the text recognition model by post correction. Different deep learning-based sequenceto-sequence models (Mokhtari et al., 2019;Abid et al., 2018;Yassine et al., 2021) with RNN, LSTM, and Bi-LSTM units have been proposed for address parsing. However, the state-of-the-art transformerbased pre-trained models have not been explored for the address parsing problem. Though a significant amount of effort has been devoted to extracting address information from signboards for resourceful languages like English (Wojna et al., 2017), Korean (Park et al., 2010), or Japanese (NOMURA et al., 2014), little has been done for low-resource languages like Bangla.\nIn this research work, we develop deep learningbased models for efficiently detecting, recognizing, and parsing address information from Bangla signboards. 
The main objectives of our study are as follows:\n• To conduct an in-depth analysis of different segmentation-free CTC-based and Encoder-Decoder model architectures for Bangla address text recognition.\n• To design a novel address text correction model using a sequence-to-sequence transformer network to improve the performance of Bangla address text recognition by postcorrection.\n• To develop a Bangla address text parser using the state-of-the-art transformer-based pretrained language model.\nTo develop an end-to-end system for extracting and parsing address information from the signboard, we divide the whole system into different sub-problems: a signboard detection model for detecting signboard from the natural scene image; an address text detection model for detecting address region from a signboard; a custom Bangla address text recognition model to extract address text from the cropped address region; an address text correction model to improve the output of the address text recognition model by post-correction; and finally an address text parser model to classify each field of an address. Figure 2 shows an overview of our proposed end-to-end system. After detecting the address text portion from the signboard, the next step is to recognize the address text from the cropped address text portion image. We have proposed two different frameworks for Bangla address text recognition -the CTC-based framework and the Encoder-Decoder framework.\nWe have designed the CTC-based model which consists of three major components: convolutional layers for feature extraction; recurrent layers for sequence modeling; transcription layer with CTC loss for sequence prediction. Moreover, we have proposed an Encoder-Decoder model architecture consisting of three key components: a deep-stacked convolution neural network for feature extraction, an encoder network with a recurrent layer for sequence modeling, and finally a decoder network with an attention mechanism for transcription.\nWe have created a synthetically generated Bangla address text recognition dataset to train and evaluate different CTC-based and Encoder-Decoder model architectures. We have conducted a comprehensive evaluation of various CTC-based and Encoder-Decoder models for Bangla address text recognition to determine the most effective model architecture.\nTo improve the performance of the Bangla address text recognizer by post-correction, we propose an address text correction model using a transformer-based sequence-to-sequence model. To train and evaluate the address text correction model, we create a synthetically generated address correction dataset from the raw address corpus.\nFinally, we propose a state-of-the-art transformer-based pre-trained language model for Bangla address text parsing. We develop a novel Bangla address parsing dataset to train and evaluate the proposed Bangla address text parser model. 
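To make the CTC-based recognition framework described above concrete — convolutional layers for feature extraction, recurrent layers for sequence modeling, and a transcription layer trained with CTC loss — the following is a minimal sketch in PyTorch. The layer sizes, the assumed character-set size of 100 symbols, and the dummy batch are illustrative assumptions rather than the exact configuration evaluated in this work.

```python
# Minimal CRNN-style recognizer: CNN feature extractor -> bidirectional LSTM
# sequence model -> linear projection, trained with CTC loss.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, num_classes, img_height=32):
        super().__init__()
        # Convolutional feature extractor: shrinks the height, keeps the width
        # fine-grained so that each column becomes one time step.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1), (2, 1)),   # halve height only
        )
        feat_height = img_height // 8
        self.rnn = nn.LSTM(256 * feat_height, 256, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * 256, num_classes + 1)   # +1 for the CTC blank

    def forward(self, x):                  # x: (batch, 1, H, W) grayscale crops
        f = self.cnn(x)                    # (batch, C, H', W')
        b, c, h, w = f.size()
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)  # width axis -> time axis
        seq, _ = self.rnn(f)
        return self.fc(seq)                # (batch, time, num_classes + 1)

# One training step with CTC loss; targets are integer-encoded character labels.
model = CRNN(num_classes=100)              # assumed size of the Bangla symbol set
ctc = nn.CTCLoss(blank=100, zero_infinity=True)
images = torch.randn(4, 1, 32, 128)        # dummy batch of cropped address images
log_probs = model(images).log_softmax(2).permute(1, 0, 2)  # (time, batch, classes)
targets = torch.randint(0, 100, (4, 10))
input_lengths = torch.full((4,), log_probs.size(0), dtype=torch.long)
target_lengths = torch.full((4,), 10, dtype=torch.long)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()
```

At inference time the per-time-step outputs are decoded by collapsing repeated symbols and removing blanks (greedy decoding) or with beam search; the Encoder-Decoder variant instead replaces the CTC transcription with an attention-based decoder that emits one character per decoding step.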
Moreover, we train traditional sequence-to-sequence models with RNN, LSTM, and Bi-LSTM units to present a comparative analysis with a transformer-based pre-trained language model for Bangla address text parsing.\nThe main contributions of our research work are as follows:\n• We have created manually annotated datasets and synthetic datasets for detecting, extracting, correcting, and parsing address information from the natural scene.\n• We have conducted a performance analysis among different CTC-based and Encoder-Decoder models for Bangla address text recognition and found the best-performing model architecture.\n• We have introduced a novel address text correction model using a sequence-to-sequence transformer network to improve the performance of Bangla address text recognition by post-correction.\n• We have developed a Bangla address text parser using the state-of-the-art transformerbased pre-trained language model." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [], "table_ref": [], "text": "In this section, we explore the previous research works. We divide our research work into different sub-problems such as text detection, text recognition, and address parsing. After conducting a rigorous literature review, we have found that there are both traditional machine learning-based and sophisticated deep learning-based approaches to solving each sub-problem. We present a complete picture by classifying the previous research works into a hierarchical taxonomy. We introduce the related works in a top-down approach. We organize and present the related works into different categories: (1) Scene text detection to detect and localize the text area in natural scene text;\n(2) Scene text recognition to extract the textual information from the detected text area and convert them into linguistic symbols;\n(3) End-to-end approach to performing both detection and recognition of scene text in a single pipeline; (4) Addressing parsing to classify different segments of the address text; and finally (5) Previous works related to information extraction from the signboard. We introduce each category by discussing the related works from different perspectives. Moreover, we present existing previous works in Bangla language under each category." }, { "figure_ref": [], "heading": "Scene Text Detection", "publication_ref": [ "b62", "b58", "b24", "b54", "b65", "b73", "b98", "b77", "b12", "b111", "b42", "b46", "b56", "b55", "b23", "b80" ], "table_ref": [], "text": "A significant amount of effort has been devoted to solving the scene text detection problem, especially for the English language text. There are different approaches found in the literature for scene text detection and localization (Naosekpam and Sahu, 2022;Long et al., 2021;Khan et al., 2021b;Lin et al., 2020). 
We divide these related existing works into four different sub-categories: (1) Scene text detection using statistical features such as SWT (Epshtein et al., 2010;Li and Lu, 2012) and MSER (Matas et al., 2004;Neumann and Matas, 2011);\n(2) Scene text detection using traditional machine learning-based approaches such as sliding windowbased (Wang et al., 2011;Pan et al., 2009) and connected component-based (Chen et al., 2011;Yin et al., 2013); (3) Scene text detection using hybrid approaches (Jaderberg et al., 2014;Khatib et al., 2015), and scene text detection using sophisticated deep learning-based approaches such as object detection-based (Liao et al., 2017(Liao et al., , 2018) ) and segmentation-based (Deng et al., 2018;Qin et al., 2019)." }, { "figure_ref": [], "heading": "Scene Text Detection Using Statistical Feature-based Approaches", "publication_ref": [ "b113", "b57", "b24", "b54", "b65", "b73", "b10", "b12" ], "table_ref": [], "text": "For a very long time, scene text detection is an active research area in the field of computer vision. Early approaches apply traditional image features to detect the scene text. Text is localized from the background image by considering that the colors of the pixel will be similar for a single character and the background colors are different from the text colors (Zhong et al., 1995). However, if we consider real-life scenarios, such a hypothesis fails to detect scene text on a complex background. Moreover, a split-and-merge approach has been found to segment the text area by considering the text with similar font size and color as connected components (Lienhart and Stuber, 1996). Such constraints based on the color component and font size make these methods highly dependent on the standard scenarios. Innovative features such as stroke width transform (SWT) (Epshtein et al., 2010;Li and Lu, 2012) and maximally stable extremal regions (MSER) (Matas et al., 2004) are utilized for statistical feature-based scene text detection. SWT calculates the distance between edge pixel pairs to obtain stroke width information, which is used for bottom-up integration of pixels with similar stroke widths, combined into connected components, grouped into letter candidates, and text lines and then finally clustered into chains based on specific criteria. MSER is used to detect individual characters as Extremal Regions (ER) based on color similarity and computation complexity (Neumann and Matas, 2011). However, such approaches are challenging to apply for detecting colorful text in the real-life natural scene.\nTo address the difficulties in detecting minor and blurry texts and symbols in natural scene images, a combination of the edge-enhanced MSER and Canny edge detector (Canny, 1986) has been suggested (Chen et al., 2011). The complete procedure comprises geometric checks, pairing letters, constructing lines of text, and dividing them into individual words. Although statistical feature-based methods for detecting scene text are effective in controlled settings, they are ineffective in more challenging real-world scenarios commonly encountered in natural environments." 
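As a rough illustration of how MSER-based candidate extraction of this kind is typically prototyped, the sketch below uses OpenCV's MSER detector followed by a simple geometric filter; the area and aspect-ratio thresholds (and the file name) are illustrative assumptions, not values taken from the cited works.

```python
# Minimal MSER-based text-candidate detection: extract extremal regions and
# keep only boxes that pass simple geometric heuristics.
import cv2

def detect_text_candidates(image_path, min_area=60, max_area=10000):
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    mser = cv2.MSER_create()                    # maximally stable extremal regions
    regions, bboxes = mser.detectRegions(gray)  # candidate components and their boxes

    candidates = []
    for (x, y, w, h) in bboxes:
        area = w * h
        aspect = w / float(h)
        # Heuristic filtering of obvious non-text components (illustrative values).
        if min_area <= area <= max_area and 0.1 <= aspect <= 10.0:
            candidates.append((int(x), int(y), int(w), int(h)))

    for (x, y, w, h) in candidates:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 1)
    return image, candidates

# annotated, boxes = detect_text_candidates("signboard.jpg")
```

In the pipelines cited above, such candidate regions are subsequently paired into letters, grouped into text lines, and verified before being accepted as text.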
}, { "figure_ref": [], "heading": "Scene Text Detection Using Machine Learning-based Approaches", "publication_ref": [ "b14", "b34", "b49", "b77", "b78", "b11", "b38", "b111", "b29" ], "table_ref": [], "text": "The conventional machine learning methods for detecting text in scenes involve Sliding Window (SW)-based techniques (Chen and Yuille, 2004;Hanif et al., 2008;Koo and Cho, 2011;Pan et al., 2009, 2010) and Connected Component Analysis (CCA)-based techniques (Chen et al., 2001;Huang et al., 2013;Yin et al., 2013;Gomez and Karatzas, 2015). These methods locate candidate text regions by extracting handcrafted features and subsequently classify them to determine the actual text regions. The sliding window-based methods apply a top-down approach, whereas the CCA-based techniques work in a bottom-up strategy." }, { "figure_ref": [], "heading": "Sliding Window-based Scene Text Detection", "publication_ref": [ "b14", "b47", "b34", "b77", "b49", "b78" ], "table_ref": [], "text": "In a top-down approach, sliding window-based scene text detection first slides a multi-scale window over the input image to identify candidate regions and then applies a pre-trained classifier to determine whether each sub-window contains text or not. To distinguish between text and non-text regions, textural characteristics such as white space information, responses from different types of filters, and wavelet coefficients are utilized. Chen and Yuille use such textural features to design weak classifiers and then apply the AdaBoost algorithm to obtain a final strong classifier (Chen and Yuille, 2004). Moreover, Support Vector Machine (SVM) classifiers have been employed for the same purpose (Kim et al., 2003;Hanif et al., 2008). Extracting text lines is approached as an optimization problem that minimizes energy and accounts for potential interference between text lines; promising outcomes have been achieved using Conditional Random Field (CRF) and Markov Random Field (MRF) based energy minimization methods (Pan et al., 2009;Koo and Cho, 2011;Pan et al., 2010). The sliding window method addresses the text region identification problem by using a classifier to calculate the likelihood of text from a feature vector extracted from each local region.\nIdentified neighboring text regions are finally combined into text blocks. However, such a brute-force approach is usually slow due to its reliance on local decision-making, even though it is effective for text regions with distinct textural properties." }, { "figure_ref": [], "heading": "Connected Component Analysis-based Scene Text Detection", "publication_ref": [ "b11", "b38", "b111", "b29", "b11", "b38", "b111", "b29" ], "table_ref": [], "text": "In a bottom-up manner, connected component analysis methods (Chen et al., 2001;Huang et al., 2013;Yin et al., 2013;Gomez and Karatzas, 2015) extract components from the input image using different clustering algorithms or edge detection techniques, eliminate non-textual regions by applying different heuristic approaches or pre-trained classifiers, and finally group the neighboring textual regions using geometric properties.\nBy considering that the text region has a rectangular shape and horizontal alignment, Chen et al. propose a technique for text detection that involves extracting candidate text lines and using an SVM classifier to identify the actual text lines from the extracted candidates (Chen et al., 2001). By modifying SWT to incorporate the color component, Huang et al.
introduce the Stroke Feature Transform (SFT) algorithm, which calculates the stroke width map to extract candidate text regions and finally classifies the actual text regions using text covariance descriptors (Huang et al., 2013). Modifying MSER to improve performance on images of poor quality, Yin et al. propose an algorithm for parent-children elimination (Yin et al., 2013). The algorithm eliminates an extremal region from the MSER tree when the aspect ratio of the extremal region does not fall within a given range. To improve the performance of text candidate construction, Gomez et al. introduce grouping hypotheses based on different image features (Gomez and Karatzas, 2015).\nCompared to sliding window-based scene text detection, connected component analysis methods generate fewer candidate components, ensuring lower computational cost. However, each connected component analysis method relies on hypotheses about the possible location of text region candidates, and these hypotheses fail to cover complex real-life scenarios." }, { "figure_ref": [], "heading": "Scene Text Detection Using Hybrid Approaches", "publication_ref": [ "b23", "b61", "b107", "b112", "b81", "b63", "b13", "b106", "b89", "b48", "b114", "b95", "b117", "b56", "b55", "b37", "b31" ], "table_ref": [], "text": "Hybrid approaches combine ideas from the statistical feature-based and machine learning-based pipelines described above, while most recent systems rely on sophisticated deep learning-based approaches. These deep learning-based approaches can be divided into two groups: (1) Scene text detection using segmentation approaches (Deng et al., 2018;Long et al., 2015;Yao et al., 2016;Zhang et al., 2016;Qin and Manduchi, 2017;Long et al., 2018;Wang et al., 2019a;Chen et al., 2019;Naosekpam et al., 2022;Wang et al., 2019a;Yang et al., 2018;Wang et al., 2019b;Shao et al., 2021;Kobchaisawat et al., 2020); and (2) Scene text detection using object detection techniques that treat detection as a regression problem (Zhong et al., 2017;Tian et al., 2016;Zhu et al., 2017;Liao et al., 2017, 2018;Huang et al., 2015;Gupta et al., 2016;Naosekpam et al., 2021)." }, { "figure_ref": [], "heading": "Scene Text Detection Using Segmentation Approaches", "publication_ref": [ "b23", "b61", "b107", "b112", "b81", "b63", "b13", "b106", "b89", "b48", "b23", "b61", "b107", "b112", "b81", "b63", "b13", "b61", "b107", "b112", "b81", "b23", "b63", "b13", "b106", "b89", "b48", "b35", "b106", "b89", "b48" ], "table_ref": [], "text": "Segmentation-based approaches address the scene text detection problem as a pixel-wise classification to identify the text and non-text segments in the given image. There are two different types of segmentation-based text detection approaches found in the literature: (1) Semantic segmentation-based (Deng et al., 2018;Long et al., 2015;Yao et al., 2016;Zhang et al., 2016;Qin and Manduchi, 2017;Long et al., 2018;Wang et al., 2019a;Chen et al., 2019;Naosekpam et al., 2022); and (2) Instance segmentation-based (Wang et al., 2019a;Yang et al., 2018;Wang et al., 2019b;Shao et al., 2021;Kobchaisawat et al., 2020) techniques.\nSeveral scene text detection techniques (Deng et al., 2018;Long et al., 2015;Yao et al., 2016;Zhang et al., 2016;Qin and Manduchi, 2017;Long et al., 2018;Wang et al., 2019a;Chen et al., 2019;Naosekpam et al., 2022) have been introduced using semantic segmentation approaches. Using a Fully Convolutional Neural Network (FCNN) (Long et al., 2015), the segmentation map is first generated from the input image and then the bounding boxes around the text segments are obtained with the necessary post-processing. Yao et al. adapt the FCNN to produce three types of global score maps: one for character categories, another for text and non-text areas, and a third for linking orientation (Yao et al., 2016).
Moreover, a word partition method has been introduced to generate the bounding boxes around the text segments. More semantic segmentation approaches modifying FCNN has been found in (Zhang et al., 2016;Qin and Manduchi, 2017). However, these methods show poor performance during bounding box prediction for closely adjacent words. To address the problem, PixelLinks (Deng et al., 2018) has been introduced to highlight the text margin. Moreover, TextSnake (Long et al., 2018) predicts text region and center lines to detect scene text. Another method called Progressive Scale Expansion Network (PSENet) (Wang et al., 2019a) addresses the problem related to closely adjacent words by predicting different scale kernels. The attention-based mechanism has been utilized to detect the scene text using the semantic segmentation-based approach which helps to avoid the overlapping closely adjacent text (Chen et al., 2019). Using an encoder-decoder neural network, UtextNet (Naosekpam et al., 2022) introduces a scene text recognition model utilizing the UNet-ResNet50 and a post-processing technique to predict the bounding boxes.\nIn the literature, we have found several techniques (Wang et al., 2019a;Yang et al., 2018;Wang et al., 2019b;Shao et al., 2021;Kobchaisawat et al., 2020) that consider scene text detection as a similar problem comparing instance segmentation. SPC-NET (Wang et al., 2019a) utilizes the Mask RCNN (He et al., 2017) model architecture to design the text context module and a re-score mechanism to improve the scene text detection. Inceptext (Yang et al., 2018) introduces an instance-aware segmentation mechanism to enhance the text detection performance for large-scale text. By using a feature pyramid enhancement module and feature fusion module, an improved version of the arbitrary text detection model has been proposed in (Wang et al., 2019b). Shao et al. propose a bi-directional feature pyramid network to improve text detection performance in blurry and low-contrast images (Shao et al., 2021). Border augmentation using a combination of polygon offsetting has been utilized in (Kobchaisawat et al., 2020) to detect scene text. However, segmentation-based text detection models require high inference time due to large model sizes and are less suitable for real-time text detection." }, { "figure_ref": [], "heading": "Scene Text Detection Using Object Detection Techniques", "publication_ref": [ "b114", "b95", "b117", "b56", "b55", "b37", "b31", "b28", "b27", "b85", "b35", "b114", "b95", "b117", "b59", "b115", "b82", "b83", "b84", "b8", "b56", "b55", "b37", "b115", "b31" ], "table_ref": [], "text": "By assuming the text as an object, researchers acknowledge that scene text detection is a subproblem of the general object detection problem. Therefore, scene text detection is considered a regression problem to predict bounding boxes around the text region. We categorize text detection into 2 different groups: (1) Two-stage detection approaches (Zhong et al., 2017;Tian et al., 2016;Zhu et al., 2017); and (2) One-stage detection approaches (Liao et al., 2017(Liao et al., , 2018;;Huang et al., 2015;Gupta et al., 2016;Naosekpam et al., 2021).\nIn the two-stage detection approach, the text region proposals are first generated by selective search and then applied classification to each text region proposal. 
R-CNN (Girshick et al., 2014) and its variants such as Fast R-CNN (Girshick, 2015), Faster R-CNN (Ren et al., 2015), and Mask R-CNN (He et al., 2017) are the popular two-stage object detection approaches among researchers. Zhong et al. propose a text detection model named DeepText by modifying R-CNN (Zhong et al., 2017). Tian et al. utilize Fast R-CNN, an improvement of R-CNN, to design the Connectionist Text Proposal Network (CTPN) for text detection (Tian et al., 2016). Zhu et al. propose an improvement of CTPN by incorporating a vertical proposal mechanism (Zhu et al., 2017). However, these two-stage detection approaches are comparatively slow and are not applicable to real-time text detection scenarios.\nOne-stage detection approaches incorporate a single network for both region proposal generation and region proposal classification. SSD (Liu et al., 2016), EAST (Zhou et al., 2017), Yolo (Redmon et al., 2016) and its improved variations such as YoloV2 (Redmon and Farhadi, 2017), YoloV3 (Redmon and Farhadi, 2018), YoloV4 (Bochkovskiy et al., 2020), and YoloV5 are the popular one-stage object detection approaches found in the literature. Liao et al. propose a text detection model named TextBoxes by modifying the Single-Shot Detector (SSD) framework (Liao et al., 2017). An improved version, TextBoxes++ (Liao et al., 2018), considers the predicted text region as a quadrilateral in place of a rectangular bounding box. EAST (Zhou et al., 2017) builds on the earlier DenseBox (Huang et al., 2015) design, employing a fully convolutional network to predict text scores and geometry at the pixel level. Based on the Yolo-style object detection paradigm, a fully convolutional regression network has been proposed to detect text in an image (Gupta et al., 2016). Naosekpam et al. introduce Yolov3 and Yolov4-based shallow networks to detect multi-lingual scene text (Naosekpam et al., 2021). Yolo-based models for scene text detection outperform all the previously proposed models." }, { "figure_ref": [], "heading": "Scene Text Recognition", "publication_ref": [ "b22", "b99", "b91", "b2", "b40", "b7", "b41", "b30", "b36", "b4", "b60" ], "table_ref": [], "text": "Text recognition from images is a fundamental research problem in the field of computer vision. A significant amount of effort has been devoted to recognizing text from scanned document images. Researchers have achieved impressive accuracy in recognizing text from scanned document images for resource-rich languages like English, and a significant number of optical character recognition applications have emerged for practical use. However, recognizing text from natural scenes is more challenging than recognizing text from scanned documents using an Optical Character Recognizer (OCR). Natural scene images have a complex background with inconsistent and irregular font styles, font sizes, and multi-color text. Moreover, noise, blur, distortion, and skewness are common characteristics of natural scene imagery. Therefore, existing OCRs fail to recognize text from natural scenes. In the literature, we have found several previous research works which address natural scene text recognition.
The related previous approaches for scene text recognition can be divided into two major categories: (1) Scene text recognition using classical machine learning-based approaches (Chen et al., 2004;De Campos et al., 2009;Wang and Belongie, 2010;Sheshadri and Divvala, 2012;Ali and Foroosh, 2016); and (2) Scene text recognition using deep learning-based approaches (Isthiaq and Saif, 2020;Bissacco et al., 2013;Jaderberg et al., 2016;Graves et al., 2007;He et al., 2016;Bai et al., 2018;Liu et al., 2018)." }, { "figure_ref": [], "heading": "Scene Text Recognition Using Machine Learning-based Approaches", "publication_ref": [ "b116", "b66", "b52", "b109", "b75", "b93", "b86", "b91", "b104", "b67", "b64", "b21", "b99", "b22", "b2", "b5", "b97", "b30", "b36", "b92", "b26", "b51", "b17", "b4", "b60", "b30", "b36", "b92", "b26", "b51", "b17", "b4", "b60" ], "table_ref": [], "text": "In classical machine learning approaches, manually handcrafted features are extracted from the input image and then scene texts are recognized using these extracted features. The text recognition problem can be decomposed into a series of sub-problems such as text binarization, segmentation of individual lines of text, segmentation of individual characters, recognition of individual characters, and finally merging them into full text. Several different methods have been found in the literature for each of these sub-problems, such as text binarization (Zhou et al., 2010;Mishra et al., 2011;Lee and Kim, 2013), segmentation of individual lines of text (Ye et al., 2003), segmentation of individual characters (Nomura et al., 2005;Shivakumara et al., 2011;Roy et al., 2009), recognition of individual characters (Chen et al., 2004;Sheshadri and Divvala, 2012), and finally merging them into full text (Weinman et al., 2007;Mishra et al., 2012). Statistical features such as SIFT (Lowe, 2004) and HOG (Dalal and Triggs, 2005) are extracted, an SVM classifies each character, and post-processing steps generate the final text by merging the individual characters. Wang and Belongie adapt the HOG features and deploy a Nearest Neighbor (NN) classifier for character classification (Wang and Belongie, 2010). Campos et al. conduct object categorization using a bag-of-visual-words (BoW) representation for text recognition (De Campos et al., 2009). Further feature extraction techniques such as tensor decomposition (Ali and Foroosh, 2016), shape context (Belongie et al., 2002), or patch descriptors (Varma and Zisserman, 2002) have been utilized for character classification.\nMachine learning approaches require an intensive and complex pipeline of pre-processing steps to extract manually handcrafted low-level or mid-level features for character classification and post-processing steps to generate full text from the series of recognized characters. However, due to the limited representation capacity of traditional machine learning methods and the high complexity of the pre-processing and post-processing pipelines, machine learning-based text recognition methods hardly deal with the challenging characteristics found in natural scene imagery and show poor performance in scene text recognition. Deep learning-based approaches address these limitations and fall into two groups: (1) Segmentation-based scene text recognition; and (2) Segmentation-free scene text recognition (Graves et al., 2007;He et al., 2016;Shi et al., 2016;Gao et al., 2017;Lee and Osindero, 2016;Cheng et al., 2017;Bai et al., 2018;Liu et al., 2018).
Moreover, segmentationfree techniques utilize two different frameworks for scene text recognition: (1) CTC-based framework (Graves et al., 2007;He et al., 2016;Shi et al., 2016;Gao et al., 2017); and (2) Encoder-Decoder framework (Lee and Osindero, 2016;Cheng et al., 2017;Bai et al., 2018;Liu et al., 2018)." }, { "figure_ref": [], "heading": "Scene", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Segmentation-based Scene Text Recognition", "publication_ref": [ "b101", "b87", "b1", "b40", "b7", "b41" ], "table_ref": [], "text": "Segmentation-based scene text recognition requires a preprocessing pipeline to segment the text line, segment individual characters, then recognize each character with a character classifier, and finally proprocessing step to generate the final text. Wang et al. introduce a CNN-based model architecture to classify each character and provide Nonmaximal Suppression (NMS) algorithm to generate the final text (Wang et al., 2012). Sen et al. propose a U-net-based model architecture with a matra-removal strategy to recognize Bangla and Devanagari characters (Sen et al., 2022). More character recognition models are found in (Ahmed et al., 2019;Isthiaq and Saif, 2020;Bissacco et al., 2013;Jaderberg et al., 2016). However, each of the segmentation-based scene text recognition techniques requires localizing the individual character which is a challenging problem due to the complex characteristics of the natural scene image. Therefore, segmentation-based scene text recognition techniques provide poor performance for scene text recognition." }, { "figure_ref": [], "heading": "CTC-based Framework for Segmentation-free Scene Text Recognition", "publication_ref": [ "b30", "b36", "b92", "b26", "b36", "b92", "b26" ], "table_ref": [], "text": "The Connectionist Temporal Classification (CTC) decoding module originates from a speech recognition system. The CTC-based framework deals with sequential data. While applying a CTC-based framework for image data, we have to consider the input image as a sequence of frames of vertical pixels. Then, the CTC rules are applied to generate the target sequence from the per-frame prediction. Therefore, the CTC-based framework provides an end-to-end trainable network for text recognition.\nThe first attempt to apply the CTC-based framework for the OCR system is found in (Graves et al., 2007). The CTC-based framework is widely incorporated to solve the scene text recognition problem (He et al., 2016;Shi et al., 2016;Gao et al., 2017). The Convolutional Recurrent Neural Network (CRNN) is designed for a sequence of feature slice generation by including RNN layers after the stack of CNN layers and finally using CTC rules to generate the target sequence. He et al. introduce DTRN model using CRNN and CTC loss (He et al., 2016). Shi et al. modify the DTRN model by introducing a fully convolutional approach to generate the sequence of feature slices (Shi et al., 2016). Gao et al. replace RNN layers by adapting the stacked CNN layers to generate the feature slices (Gao et al., 2017)." }, { "figure_ref": [], "heading": "Encoder-Decoder Framework for Segmentation-free Scene Text Recognition", "publication_ref": [ "b94", "b3", "b51", "b17", "b4", "b60" ], "table_ref": [], "text": "An encoder-decoder framework is a popular approach for sequence-to-sequence learning problems (Sutskever et al., 2014). The encoder network takes the sequence data as input and creates the final latent state. 
Using the latent state as input the decoder network generates the output sequence in an auto-regressive manner. The encoder-decoder framework is very effective when the length of the output is variable, which is the requirement for the scene text recognition problem. Moreover, the performance of the encoder-decoder framework is improved by adapting the attention mechanism to jointly learn to align the input and output sequence properly (Bahdanau et al., 2014).\nA recursive recurrent neural network with an attention mechanism has been introduced in (Lee and Osindero, 2016) where the encoder network with a recursive convolutional layer learns the feature vector from the input image, then the attention layer ensures the best feature selection, and finally, the decoder network with recurrent neural network generates the character sequence. Cheng et al. propose an improved version of the attention mechanism by imposing localization supervision while calculating the attention score (Cheng et al., 2017). Moreover, an improved attention mechanism has been proposed to minimize misalignment problems in output sequence (Bai et al., 2018) and reduce the computational cost (Liu et al., 2018).\nSegmentation-free scene text recognition approaches with encoder-decoder and CTC-based framework simplify the recognition pipeline by drastically eliminating the complex preprocessing and post-processing steps and enable training the model without character level annotated dataset. The encoder-decoder and CTC-based frameworks require a larger dataset to train the model architecture, which is now possible by creating a large synthetic dataset6 ." }, { "figure_ref": [], "heading": "Address Parsing", "publication_ref": [ "b9", "b18", "b53", "b100", "b69", "b20", "b108" ], "table_ref": [], "text": "Automatic address parsing is an active research area with real-life practical applications such as efficient searching on the mapping platform, or automatic address insertion on a relational dataset. It is a challenging problem due to the large variety of user address input even in the same lan-guage. There are two different approaches for automatic address parsing: (1) Address parsing using machine learning-based approaches (Borkar et al., 2001;Churches et al., 2002;Li et al., 2014;Wang et al., 2016); and (2) Address parsing using deep learning-based approaches (Mokhtari et al., 2019;Craig et al., 2019;Yassine et al., 2021)." }, { "figure_ref": [], "heading": "Address Parsing Using Machine", "publication_ref": [ "b9", "b18", "b53", "b100", "b53", "b100" ], "table_ref": [], "text": "Learning-based Approaches\nTraditional rule-based machine-learning methods have been proposed for automatic address parsing in (Borkar et al., 2001;Churches et al., 2002). However, it has been found that rule-based methods require prior domain knowledge which is not available due to the complex domain of possible address.\nTo improve the performance of rule-based methods, probabilistic methods based on the Hidden Markov Model (HMM) and Conditional Random Field (CRF) have been introduced in (Li et al., 2014;Wang et al., 2016). Large-scale HMM-based parsing techniques are capable of working with large variations than rule-based methods (Li et al., 2014).\nA linear-chain CRF combined with a Stochastic Regular Grammar (SRG) is utilized to create a discriminative model to deal with the complex domain of possible user address input (Wang et al., 2016). 
However, probabilistic methods based on HMM and CRF heavily rely on structured data, which is not available due to the variety of user input." }, { "figure_ref": [ "fig_2" ], "heading": "Address Parsing Using Deep Learning-based Approaches", "publication_ref": [ "b90", "b69", "b20", "b108" ], "table_ref": [], "text": "In recent years, deep learning-based neural networks have been utilized to solve the automatic address parsing problem. Sharma et al. propose a multi-layer feed-forward neural network that provides better results than rule-based or probabilistic models (Sharma et al., 2018). RNN-based models utilize the sequential properties of the address text and provide state-of-the-art results for automatic address parsing. Mokhtari et al. propose several sequence-to-sequence models with RNN, Bi-RNN, LSTM, or Bi-LSTM units and conduct a comparative study among them for automatic address parsing (Mokhtari et al., 2019). Several further sequence-to-sequence models with RNN and its variations have been found in the literature (Craig et al., 2019;Yassine et al., 2021).\nBuilding on these ideas, our proposed end-to-end system consists of five models: (1) a signboard detection model to detect the signboard region in the natural scene image; (2) an address text detection model to detect the address text portion on the signboard; (3) an address text recognition model to recognize the Bangla address text from the cropped address portion; (4) an address text correction model to correct the recognized address text by post-correction; and (5) an address text parser model to classify each field of an address. Figure 3 shows an overview of the end-to-end system. We discuss the possible model architectures used to train and evaluate the model for each sub-problem." }, { "figure_ref": [], "heading": "Detection Models", "publication_ref": [], "table_ref": [], "text": "In this research work, we frame both signboard detection and address text detection as object detection problems and adopt Yolo-based one-stage detectors for both tasks. A Yolo-based model consists of three main parts: a backbone, a neck, and a prediction head." }, { "figure_ref": [], "heading": "Backbone", "publication_ref": [], "table_ref": [], "text": "The initial component of the Yolo-based model is referred to as the backbone, and its primary role is to extract features from the input image. This backbone is constructed using deep-stacked convolutional neural networks. In deep-stacked convolutional neural networks, the first layers perform the task of extracting low-level features from the entire input image, while the deeper layers capture progressively higher-level features." }, { "figure_ref": [], "heading": "Neck", "publication_ref": [], "table_ref": [], "text": "The neck, as the second element of the YOLO-based model, plays a crucial role in extracting features necessary for detecting objects of various sizes. The neck consists of pooling layers and each pooling layer has a different kernel size. By taking output features from multiple layers of the backbone, the neck generates high-level feature maps. This feature map enables the YOLO-based model to effectively handle variations in scale among objects." }, { "figure_ref": [], "heading": "Prediction Head", "publication_ref": [], "table_ref": [], "text": "The final part of the Yolo-based model is the prediction head. It consists of different detection layers. Each detection layer is responsible for predicting the rectangular bounding boxes and the class-wise probabilities at different scales and aspect ratios. In this research work, we train and evaluate different recent Yolo-based object detection models such as YOLOv3, YOLOv4, and YOLOv5 for both the signboard detection model and the address text detection model using the Bangla Signboard Detection Dataset and Bangla Address Detection Dataset respectively." }, { "figure_ref": [], "heading": "Bangla Address Text Recognition Model", "publication_ref": [], "table_ref": [], "text": "After detecting the address text portion from the signboard, the next step is to recognize the address text from the cropped address text portion image.
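To make the hand-off between the detection and recognition stages concrete, the following hedged sketch converts one Yolo-style normalized detection into pixel coordinates and crops the address region for the recognizer; the file names and the detection values are hypothetical.

```python
# A small illustrative helper for the detection-to-recognition hand-off:
# converting one Yolo-style normalized detection (x_center, y_center, w, h)
# into pixel coordinates and cropping the address text region.
# The file names and detection values below are hypothetical.
from PIL import Image

def crop_detection(image, det, pad=4):
    """det = (x_center, y_center, width, height), all normalized to [0, 1]."""
    W, H = image.size
    xc, yc, w, h = det
    x1 = max(int((xc - w / 2) * W) - pad, 0)
    y1 = max(int((yc - h / 2) * H) - pad, 0)
    x2 = min(int((xc + w / 2) * W) + pad, W)
    y2 = min(int((yc + h / 2) * H) + pad, H)
    return image.crop((x1, y1, x2, y2))

signboard = Image.open("signboard_crop.jpg")   # hypothetical detected signboard
address_box = (0.52, 0.78, 0.80, 0.18)         # hypothetical normalized detection
address_crop = crop_detection(signboard, address_box)
address_crop.save("address_region.jpg")        # this crop is fed to the recognizer
```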
From Subsection 2.2.2, we have come to know that there are two different deep learningbased approaches for scene text detection such as segmentation-based and segmentation-free techniques where the segmentation-free technique is more effective to train an end-to-end text recognition model. For scene text recognition for segmentation-free deep learning-based approaches, there are two different frameworks such as the CTC-based framework and the Encoder-Decoder framework. In this research work, we train and evaluate different recent CTC-based and Encoder-Decoder models for Bangla address text recognition. " }, { "figure_ref": [], "heading": "CTC-based", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Recurrent Layer for Sequence Modeling", "publication_ref": [], "table_ref": [], "text": "We use a bidirectional recurrent layer on top of the previous convolutional layers for sequence modeling. The recurrent layer predicts the sequence of labels from feature vectors generated by the convolutional layers for rectangle regions. There are many advantages of using a recurrent layer for sequence modeling. We are generating a sequence of characters from the input image and the recurrent layer has a strong capability to address the contextual information of the sequential data. Using contextual information, the recurrent layer helps to generate the next character based on both the current feature vector and the previously predicted character sequence. The recurrent layer can deal with sequences of arbitrary length. Moreover, the recurrent layer can be trained with previous convolutional layers as a single neural network. To design the recurrent layer, there are different types of recurrent units such as RNN, Bi-RNN, LSTM, or Bi-LSTM. RNN fails to address the contextual information of the long sequence while LSTM has the capability to capture the contextual information on the long sequence. Moreover, the bidirectional recurrent layer is more effective than the unidirectional recurrent layer as the bidirectional recurrent layer can address the contextual information from both directions on the sequence. Therefore, in this research work, we design the recurrent layer using the Bi-LSTM units for sequence modeling. The recurrent layer generates a sequence of initial characters from the feature vectors where each character can be mapped with a feature vector and each feature vector represents a receptive field in the input image." }, { "figure_ref": [], "heading": "Transcription Layer with CTC for Sequence Prediction", "publication_ref": [], "table_ref": [], "text": "The return sequence of the recurrent layer contains a significant number of repeated characters as multiple receptive fields can be possible on the width of a single character in the input image. For example, \"-hh-eee-ll-lll-oo-\" is generated by the recurrent layers for \"hello\". We utilize the conditional probability defined by the CTC layers to remove the repetitive characters and generate the target sequence. Suppose, the predicted sequence of the recurrent layers is y = y 1 , ..., y T where T is the length of the sequence and l is the target sequence. β is a sequence-to-sequence mapping function from the output sequence of the recurrent layers to the target sequence. 
We calculate the conditional probability of l given y using the following Equation 1.\np(l|y) = \sum_{\pi : \beta(\pi) = l} p(\pi|y) \quad (1)\nwhere \pi \in L^{T} and L^{T} is the set of all possible label sequences of length T.\nThe CTC layer learns the CTC rules to remove the repeated characters and blanks from the output of the recurrent layer to generate the final character sequence.\nIn this research work, we design different Bangla address text recognition models using the CTC-based approach: VGG+Bi-LSTM+CTC, RCNN+Bi-LSTM+CTC, ResNet+Bi-LSTM+CTC, and GRCL+Bi-LSTM+CTC. We train and evaluate each model using the Syn-Bn-OCR Dataset and find out the best-performing model." }, { "figure_ref": [ "fig_7" ], "heading": "Encoder-Decoder Model Architecture for Bangla Address Text Recognition", "publication_ref": [], "table_ref": [], "text": "We have designed an Encoder-Decoder model architecture for the Bangla address text recognition model. Figure 6 shows the proposed Encoder-Decoder model for Bangla address text recognition. The Encoder-Decoder model architecture consists of three key components: (1) a deep-stacked convolutional neural network for feature extraction; (2) an encoder network with a recurrent layer for sequence modeling; and (3) a decoder network with an attention mechanism for character sequence prediction." }, { "figure_ref": [], "heading": "Encoder Network", "publication_ref": [], "table_ref": [], "text": "The encoder network consists of recurrent layers which are responsible for sequence modeling.\nAmong the different recurrent units, we use the Bidirectional LSTM (Bi-LSTM), which is more effective at capturing the contextual information of the input from both directions. Moreover, the bidirectional recurrent layer can handle long sequences effectively compared to the unidirectional recurrent layer. The output of the encoder network is an intermediate representation which will be the input to the decoder network." }, { "figure_ref": [], "heading": "Decoder Network", "publication_ref": [], "table_ref": [], "text": "The decoder network consists of an attention layer followed by an LSTM layer. In the attention layer, the input vector is multiplied with an attention weight vector before providing it as input to each LSTM cell. The attention weight vector is learned during the training phase. The output of the decoder layer is the final sequence of characters.\nIn this research work, we design different Bangla address text recognition models using the Encoder-Decoder approach with an attention mechanism: VGG+Bi-LSTM+Attention, RCNN+Bi-LSTM+Attention, ResNet+Bi-LSTM+Attention, and GRCL+Bi-LSTM+Attention. We train and evaluate each model using the Syn-Bn-OCR Dataset and find out the best-performing model." }, { "figure_ref": [ "fig_8" ], "heading": "Address Text Correction Model", "publication_ref": [ "b96" ], "table_ref": [], "text": "To improve the performance of Bangla address text recognition by post-correction, we propose an address text correction model to automatically correct the output address text sequence of the Bangla address text recognition model. Due to the complexity of scene text recognition, the output character sequence of the Bangla address text recognition model sometimes contains wrong characters, which can be corrected using the contextual information of the character sequence. The sequence-to-sequence Encoder-Decoder framework is appropriate for the address text correction model.\nAddress text correction is a sequence-to-sequence modeling problem, and the Encoder-Decoder framework is the most effective model architecture for sequence-to-sequence modeling problems.
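Before moving on to the correction model, the CTC transcription rule described above (collapse repeated symbols, then drop the blank) can be illustrated with a short greedy-decoding sketch. This is only an illustration of the rule on the "hello" example from the text, not the decoder used in our experiments, which applies the same rule to per-frame probability outputs.

```python
# A minimal greedy CTC decoder illustrating the transcription rule described
# in the CTC subsection above: collapse consecutive repeats, then remove the
# blank symbol. Illustrative sketch only.
BLANK = "-"

def ctc_greedy_collapse(frame_labels):
    out = []
    prev = None
    for ch in frame_labels:
        if ch != prev:          # 1) collapse runs of identical symbols
            if ch != BLANK:     # 2) drop the blank symbol
                out.append(ch)
        prev = ch
    return "".join(out)

# The per-frame output used as an example in the text:
frames = "-hh-eee-ll-lll-oo-"
print(ctc_greedy_collapse(frames))   # -> "hello"
```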
For Bangla address text correction, we have designed a transformer-based encoder-decoder model inspired by the Bangla to English translation model found in (Tiedemann and Thottingal, 2020). Figure 7 shows an overview of the transformer-based encoder-decoder model.\nThe transformer model comprises two essential components: the encoder network and the decoder network. These networks consist of several crucial elements, including positional encoding, multi-head attention layers, feed-forward layers, as well as residual connections and layer normalization." }, { "figure_ref": [], "heading": "Positional Encoding", "publication_ref": [], "table_ref": [], "text": "The transformer model does not employ convolutional or sequential operations on the input sequence, instead relying on positional encoding to account for the sequential order of the input data. By adding positional encodings to the embedded input tokens, the transformer model is able to determine the relative position of each token within the embedding." }, { "figure_ref": [], "heading": "Multi-head Attention Layer", "publication_ref": [], "table_ref": [], "text": "The multi-head attention layer consists of multiple self-attention layers that work simultaneously. Self-attention, also called Scaled Dot-Product Attention, is a mechanism intended to capture relationships within a sequence. Self-attention enables the model to effectively incorporate long-term dependencies. The multi-head attention layers employ multiple attention heads in parallel to capture complex patterns within the input sequence." }, { "figure_ref": [], "heading": "Feed-forward Layer", "publication_ref": [], "table_ref": [], "text": "The feed-forward layer utilizes both a linear transformation and a non-linear activation function. By incorporating the feed-forward layer, the transformer model introduces non-linearity, enabling it to learn complex patterns and relationships within the sequence." }, { "figure_ref": [], "heading": "Residual Connections and Layer Normalization", "publication_ref": [], "table_ref": [], "text": "To overcome the vanishing gradient problem and enhance information flow, the network includes residual connections. These connections are followed by layer normalization, which stabilizes the distribution of hidden states, leading to accelerated training of the transformer model. Layer normalization helps mitigate the negative effects of input variations and internal covariate shift, reducing their influence on the model." }, { "figure_ref": [], "heading": "Encoder Network", "publication_ref": [], "table_ref": [], "text": "The encoder network of the transformer consists of a multi-head attention layer followed by encoder layers. The input embedding is added to the positional encoding before providing it as the input to the multi-head attention layer. Each encoder layer consists of two feed-forward layers followed by a normalization layer. Moreover, there is a residual connection from the output of the previous encoder layer to the input of the normalization layer. The encoder network generates the encoded context from the incorrect address text." }, { "figure_ref": [], "heading": "Decoder Network", "publication_ref": [], "table_ref": [], "text": "The decoder network consists of a multi-head attention layer followed by decoder layers, and it generates the corrected address text from the encoded context.\nIn this research work, we train and evaluate the transformer-based encoder-decoder model using the synthetically generated Bangla address correction (Syn-Bn-AC) dataset."
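As an illustration of how the trained correction model could be applied at inference time through the Hugging Face transformers API, consider the hedged sketch below; "bn-address-correction" is a placeholder checkpoint name for the fine-tuned model described above, and the noisy input string is a fabricated example.

```python
# A hedged sketch of post-correction at inference time with the Hugging Face
# transformers API. The checkpoint name and the noisy input are placeholders,
# not published artifacts.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "bn-address-correction"               # hypothetical local checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def correct_address(noisy_text, max_length=64):
    inputs = tokenizer(noisy_text, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_length=max_length, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Example: an OCR output with a misrecognized character (illustrative only).
print(correct_address("বাড়ি ১২, রেড ৫, ধানমন্ডি, ঢাকা"))
```

Beam search is used here simply because the corrected sequence is short and a few alternative hypotheses are cheap to explore; greedy decoding would also work.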
}, { "figure_ref": [ "fig_9" ], "heading": "Address Text Parser Model", "publication_ref": [], "table_ref": [], "text": "The next step is to parse the address text into different address components such as house number, road number/name, area name, thana name, and district name. We can treat the address text parsing problem as either a sequence-to-sequence modeling problem or a token classification problem. When we treat it as a sequence-to-sequence modeling problem, we design encoder-decoder models using RNN, LSTM, and Bi-LSTM units. On the contrary, when we treat it as a token classification problem, we use a transformer-based pre-trained language model for Bangla address text parsing.\nThe input of the Bangla address text parser model is a sequence of word tokens and the output is a sequence of address components. For sequence-to-sequence modeling problems, the Encoder-Decoder framework is one of the most effective model architectures. We have utilized the Encoder-Decoder model architecture for the Bangla address text parser model. Figure 8 shows an overview of the Encoder-Decoder model for the Bangla address parser model. In this research work, we train and evaluate different sequence-to-sequence Encoder-Decoder models using RNN, LSTM, and Bi-LSTM units." }, { "figure_ref": [], "heading": "Transformer-based Pre-trained Language Model for Bangla Address Text Parser Model Architecture", "publication_ref": [], "table_ref": [], "text": "In the address text parsing task, we tag each token of the address text as an address component. Therefore, we can consider address text parsing as a token classification problem. We propose a token classification model for the Bangla address text parsing task based on a transformer-based pre-trained language model. We can see an overview of the proposed model architecture in Figure 9. The encoder network consists of 12 encoder layers where each encoder layer has a multi-head attention layer followed by an add & norm layer and a feed-forward layer followed by another add & norm layer. Moreover, there are two residual connections: one from the input of the multi-head attention layer to the first add & norm layer and another from the input of the feed-forward layer to the second add & norm layer. Finally, there is a linear layer and a softmax layer at the end of the model architecture.\nIn this research work, we train and evaluate the transformer-based pre-trained language model for Bangla address text parsing using the Bangla Address Parsing (Bn-AP) dataset. Moreover, we present a comparative analysis among transformer-based pre-trained language models and sequence-to-sequence Encoder-Decoder models with RNN, LSTM, and Bi-LSTM units for Bangla address text parsing." }, { "figure_ref": [ "fig_10", "fig_11" ], "heading": "Datasets", "publication_ref": [], "table_ref": [], "text": "In this section, we present the datasets developed to train the proposed model architectures for the different sub-problems of the end-to-end system. We discuss the data collection or creation procedure for each of the datasets. Moreover, we show the data preprocessing steps followed before using the datasets for the training and evaluation of the different model architectures. The signboard detection model detects signboards from the raw image with the background. Recognizing address text from raw images with complex backgrounds is difficult without detecting the signboard portion first.
However, signboard detection is a challenging problem in a developing country like Bangladesh where no standard signboard design is followed. Moreover, signboards come in various sizes, colors, and orientations, de-pending on their purpose, location, and local regulations. There is no existing Bangla signboard detection dataset available to train a signboard detection model. Therefore, in the research work, we have created a novel Bangla signboard detection dataset named Bn-SD by collecting more than 16000 natural scene images containing Bangla signboards. Different Yolo-based object detection models have been trained on the Bn-SD dataset. Dhaka is a densely populated area with more than 23 million people and the population is increasing at almost 3.3% per year8 . To serve such a large population, there are an increasing number of marketplaces and commercial areas around Dhaka city. Therefore, there are numerous natural scenes containing signboards with Bangla text. We have collected more than 16000 natural scene images containing Bangla signboards. To collect the natural scene images, we have only selected Dhaka city which is the capital of Bangladesh. Figure 10 presents some sample natural scene images from the collected raw images. Before utilizing the collected dataset, we have to annotate the raw images with an appropriate object detection labeling tool. From raw images, we create the Bangla Signboard Detection (Bn-SD) dataset by annotating it with an object detection labeling tool named LabelImg9 . LabelImg annotates the scene image with rectangular boxes around the signboard area. Figure 11 shows an overview of the annotation process. The total size of the Bn-SD dataset is 27.2 GB after annotating using LabelImg." }, { "figure_ref": [ "fig_12" ], "heading": "Bangla Address Detection (Bn-AD) Dataset", "publication_ref": [], "table_ref": [], "text": "After detecting the signboard from the raw image, the next step is to detect the address text portion from the signboard. We frame the address text detection from the signboard as an object detection problem. We introduce a new dataset by cropping the signboard area of the original natural scene images. Different Yolo-based object detection models have been trained on the Bangla Address Detection (Bn-AD) dataset. From the Bn-SD dataset, we separate the raw images which contain signboards with address information. There are more than 8000 signboard images with address information. We then create a novel dataset by cropping the signboard area from the separated raw images. We create the Bangla Address Detection (Bn-AD) dataset by annotating rectangular boxes around the address area with the LabelImg object detection labeling tool. Figure 12 shows an overview of the annotation process for the Bn-AD dataset. The total size of the Bn-AD dataset is 5.01 GB after annotating using LabelImg." }, { "figure_ref": [], "heading": "Raw Address Text Corpus", "publication_ref": [], "table_ref": [], "text": "We have collected a large dataset of full address text from different areas of Dhaka city such as Dhanmondi, Hajaribag, Jigatola, Rayer Bazar, Shankar, and Tolarbag. The full address consists of the address segments/components -House number, Road number/name, Area name, Thana name, District name. We use the raw address text corpus to create the datasets for Bangla address text recognition, address text correction, and address text parser models. 
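As the corpus description goes on to note, the raw address text is augmented before the downstream datasets are created. The sketch below illustrates a few simple text-level augmentations of that kind (dropping a component, swapping components, removing punctuation); the comma-separated address format and the sample address are assumptions for illustration, not the exact procedure used here.

```python
# An illustrative sketch of simple text-level augmentations for a raw address
# corpus, in the spirit of the augmentations described later in the paper
# (randomly removing components, swapping components, removing punctuation).
# The comma-separated component format is an assumption for illustration.
import random

def augment_address(address, seed=None):
    rng = random.Random(seed)
    parts = [p.strip() for p in address.split(",") if p.strip()]
    choice = rng.choice(["drop", "swap", "strip_punct", "keep"])

    if choice == "drop" and len(parts) > 2:
        parts.pop(rng.randrange(len(parts)))          # remove one component
    elif choice == "swap" and len(parts) > 2:
        i, j = rng.sample(range(len(parts)), 2)       # swap two components
        parts[i], parts[j] = parts[j], parts[i]
    elif choice == "strip_punct":
        return " ".join(parts)                        # drop the commas

    return ", ".join(parts)

raw = "বাড়ি ১২, রোড ৫, ধানমন্ডি, ঢাকা"                # hypothetical sample address
print([augment_address(raw, seed=s) for s in range(3)])
```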
We apply different data augmentation tech-niques on the raw address text before using it to create new datasets." }, { "figure_ref": [ "fig_13", "fig_15" ], "heading": "Synthetic Bangla OCR (Syn-Bn-OCR) Dataset", "publication_ref": [], "table_ref": [], "text": "To train the CTC-based or Encoder-Decoder model for Bangla address text recognition, we need a large label dataset. However, as Bangla is a low-resource language, such a large label dataset is not available. Therefore, we create a synthetic dataset named Synthetic Bangla OCR (Syn-Bn-OCR) dataset by using a text recognition data generator named TextRecog-nitionDataGenerator10 . Figure 13 shows some sample data from the Syn-Bn-OCR dataset. The TextRecognitionDataGenerator is a popular tool to generate synthetic text recognition datasets where we need to provide only a raw Bangla address text corpus. The TextRecognitionDataGenerator generates images with Bangla address text on them. We create a sequence of tokens from the address text and label each token with a tag. Therefore, each data sample is a pair of a sequence of tokens and a sequence of tags. Figure 15 shows some sample data from the Bangla Address Parsing (Bn-AP) dataset." }, { "figure_ref": [], "heading": "Experimental Results and Discussions", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss the experimental results and discussion of our research work. In our system pipeline, there are five different deep learningbased models for signboard detection from the raw image, address text detection from the signboard image, Bangla address text recognition from the cropped address portion, address text correction, and finally address text parsing. We have proposed different model architectures for each of the subproblems. In the following sections, we discuss the experimental results during the training and evaluation of the proposed model architectures. Moreover, we conduct a comparative study among different possible model architectures for each of the subproblems." }, { "figure_ref": [], "heading": "Environment Setting", "publication_ref": [], "table_ref": [], "text": "The training, validation, and testing of the different proposed model architectures are performed on a computer with hardware configurations: an Intel processor (model number i7-7700k) with 8 MB Intel smart cache, 4 processing cores, and 8 threads running with 4.50GHZ max turbo frequency; an Nvidia GTX 1070 GPU with 8GB of VRAM; a DDR5 RAM with 16 GB operating at a clock speed of 5200 MHz; and storage of 1 TB with 512 GB SSD and 512 GB HDD. Moreover, we need to use the Google Colab platform with hardware configuration: a processor with Intel(R) Xeon(R) CPU @ 2.20GHz; a 16GB NVIDIA Tesla T4 GPU; and a 12.68 GB system RAM; and a 78.2 GB disk storage." }, { "figure_ref": [], "heading": "Signboard Detection Model", "publication_ref": [], "table_ref": [], "text": "To develop a signboard detection model, we train and evaluate different recent Yolo-based object detection models such as YOLOv3, YOLOv4, and YOLOv5 using our novel Bn-SD dataset. During training and testing on the model architectures, we consider the following parameter setting and evaluation metrics." }, { "figure_ref": [], "heading": "Parameter Setting", "publication_ref": [ "b8" ], "table_ref": [], "text": "While training the Yolo-based models, we need to set some parameters to their recommended values. The Yolo-based model takes a fixed-size image as input. 
Therefore, the spatial dimension of the input image is set to 416 × 416. As we are working with RGB images, the number of channels is 3. To ensure the stability of the gradient during training, we set the momentum to 0.95 and weight_decay to 0.0005. Burn_in is assigned to 1000 so that the learning rate steadily increases for the first 1000 batches until it reaches the specified value of 0.001. It is recommended to assign max_batches to 6000 for single-class detection. The steps parameter is set to (80% of max_batches, 90% of max_batches), which is equal to (4800, 5400). After 4800 steps, the learning rate is multiplied by a factor of 0.1 so that it decreases to 0.0001. After 5400 steps, the learning rate is again multiplied by a factor of 0.1 so that it decreases to 0.00001. By decreasing the learning rate at steps 4800 and 5400, we get a nearly optimal model (Bochkovskiy et al., 2020). The number of filters required is calculated using the following Equation 2. As the number of classes is 1, using Equation 2, we set the number of filters to 18. We choose a batch size of 64 and a subdivision of 16.\nfilters = (5 + number\_of\_classes) \times 3 \quad (2)" }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b25", "b76" ], "table_ref": [], "text": "For evaluating the different Yolo-based Bangla signboard detection models, we use standard evaluation metrics such as Average Precision (AP). To calculate the AP, it is required to find the Intersection over Union (IoU), Precision (P), and Recall (R).\nIoU is the ratio of the area of overlap and the area of union between the predicted and the ground-truth bounding box. IoU refers to the accuracy of the bounding box prediction, and the range of the IoU is 0 to 1. We calculate the value of IoU using the following Equation 3.\nIoU = \frac{Area\_of\_Overlap}{Area\_of\_Union} \quad (3)\nAccording to the PASCAL VOC metrics (Everingham et al., 2010;Padilla et al., 2020), we calculate the evaluation metrics such as Precision, Recall, and AP when the IoU is greater than 50% (IoU ≥ 0.5). That means if the IoU is greater than 50%, then we consider that the signboard is correctly detected.\nA true positive refers to a correct detection with IoU ≥ 50%, a false positive is defined as an incorrect detection with IoU < 50%, and finally a false negative refers to a ground-truth signboard that is not detected due to a low confidence level. The confidence level indicates how likely a bounding box is to contain an object and how accurate the bounding box prediction is. Precision is the measure of the accuracy of positive predictions and can be calculated as the ratio of the number of true positive instances to all predicted positive instances (the sum of true positive and false positive instances). Recall is the ability to correctly identify all positive instances and can be calculated as the ratio of the true positive instances to all true instances in the dataset (the sum of true positive and false negative instances). The formulas to measure Precision and Recall are shown in Equations 4 and 5, respectively.\nP = \frac{True\_Positive}{True\_Positive + False\_Positive} \quad (4)\nR = \frac{True\_Positive}{True\_Positive + False\_Negative} \quad (5)\nF1 score is an evaluation metric that combines Precision and Recall into a single metric by taking the harmonic mean.
We show the formula to calculate the F1 score in Equation 6.\nF1\ Score = \frac{2 \times Precision \times Recall}{Precision + Recall} \quad (6)\nHowever, the F1 score represents only a single point on the precision-recall curve. Average Precision (AP) overcomes this drawback by summarizing the whole precision-recall curve in a single evaluation metric. Therefore, the AP is a more acceptable and comprehensive metric to evaluate the signboard detection models. We calculate the value of AP using the following Equation 7. The AP is the weighted average of the precision at n thresholds, where the weight is the recall increment at each threshold.\nAP = \sum_{k=0}^{n-1} [R(k) - R(k-1)] \times P(k) \quad (7)" }, { "figure_ref": [], "heading": "Training and Testing Results for Signboard Detection Model", "publication_ref": [], "table_ref": [], "text": "Before training the Bangla signboard detection model, we have divided the Bn-SD dataset into train (80%), validation (10%), and test (10%) sets. By utilizing the above parameter setting and evaluation metrics, we have trained the different Yolo-based models such as YOLOv3, YOLOv4, and YOLOv5 for Bangla signboard detection using our novel Bn-SD dataset." }, { "figure_ref": [], "heading": "Address Text Detection Model", "publication_ref": [], "table_ref": [], "text": "The address text detection model detects the address text portion from the input signboard image.\nThe training process of the address text detection model is similar to that of the signboard detection model. To develop an address text detection model, we train and evaluate different recent Yolo-based object detection models such as YOLOv3, YOLOv4, and YOLOv5 using our novel Bn-AD dataset. During training and testing of the model architectures, we consider the same parameter setting and evaluation metrics applied for the signboard detection model." }, { "figure_ref": [], "heading": "Training and Testing Results for Address Text Detection Model", "publication_ref": [], "table_ref": [], "text": "We have partitioned the Bn-AD dataset into three subsets: train (80%), validation (10%), and test (10%) sets.\nWe have trained different Yolo-based models such as YOLOv3, YOLOv4, and YOLOv5 using the Bn-AD dataset." }, { "figure_ref": [], "heading": "Parameter Setting", "publication_ref": [], "table_ref": [], "text": "We select the shape of the input image as 64 × 600.\nDuring the training loop, the batch size is set to 32. We choose the Adadelta optimizer, which is a modified version of the Adagrad optimizer. We set some hyper-parameters of the Adadelta optimizer such as the initial learning rate to 0.01, β1 to 0.9, ρ to 0.95, ϵ to 0.00000001, and the grad_clip to 5." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "We use Word Recognition Accuracy (WRA) as the evaluation metric to evaluate the Bangla address text recognition model. WRA is the ratio between the number of correctly recognized words (W_r) and the overall count of ground-truth words (W). We calculate the WRA using the following Equation 8.\nWRA = \frac{W_r}{W} \quad (8)" }, { "figure_ref": [], "heading": "Training and Testing Results for Bangla Address Text Recognition Model", "publication_ref": [ "b62" ], "table_ref": [ "tab_4" ], "text": "We have created the Synthetic Bangla OCR (Syn-Bn-OCR) dataset with 100k labeled images using a popular synthetic data generator named TextRecognitionDataGenerator.
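A hedged sketch of how such synthetic samples could be produced programmatically is shown below, assuming the trdg package's GeneratorFromStrings interface; the corpus path, the Bangla font file, and the output layout are placeholder assumptions rather than the exact configuration used here.

```python
# A hedged sketch of synthesizing recognition samples from a raw address
# corpus with the trdg package (TextRecognitionDataGenerator), assuming its
# GeneratorFromStrings interface. Paths, font, and count are placeholders.
import os
from trdg.generators import GeneratorFromStrings

with open("raw_address_corpus.txt", encoding="utf-8") as f:   # hypothetical file
    lines = [line.strip() for line in f if line.strip()]

generator = GeneratorFromStrings(
    lines,
    count=1000,                          # number of images to synthesize
    fonts=["fonts/Kalpurush.ttf"],       # assumed Bangla font file
    size=64,                             # image height in pixels
)

os.makedirs("images", exist_ok=True)
with open("labels.txt", "w", encoding="utf-8") as labels:
    for i, (image, text) in enumerate(generator):
        image.save(f"images/{i:06d}.jpg")               # PIL image from trdg
        labels.write(f"images/{i:06d}.jpg\t{text}\n")   # image path and label
```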
We have partitioned the Syn-Bn-OCR dataset into three subsets: train set (80%), validation set (10%), and test set (10%).\nUsing the stated parameter setting and evaluation metrics, we have trained different CTC-based model architectures (VGG+Bi-LSTM+CTC, RCNN+Bi-LSTM+CTC, ResNet+Bi-LSTM+CTC, and GRCL+Bi-LSTM+CTC) and different Encoder-Decoder model architectures (VGG+Bi-LSTM+Attention, RCNN+Bi-LSTM+Attention, ResNet+Bi-LSTM+Attention, and GRCL+Bi-LSTM+Attention) using the Syn-Bn-OCR dataset. We can consider the VGG+Bi-LSTM+CTC and VGG+Bi-LSTM+Attention models as the baseline models, as VGG is a simpler feature extractor compared to RCNN and ResNet. We have trained each model architecture until convergence. Table 1 shows the word recognition accuracy for each Bangla address text recognition model on the test dataset. We have found that the ResNet+Bi-LSTM+CTC model shows the best result with a word recognition accuracy of 94.5%. The CTC-based models give better results, as the CTC layer has less dependency on a language model to produce the final character sequence (Long et al., 2021). However, the results of the attention-based models differ from the CTC-based models by only a very small margin." }, { "figure_ref": [], "heading": "Address Text Correction Model", "publication_ref": [], "table_ref": [], "text": "We propose an address text correction model to improve the performance of the Bangla address text recognition model by post-correction. The address text correction model automatically corrects the incorrect output character sequence of the Bangla address text recognition model. A sequence-to-sequence Encoder-Decoder model is the most effective model to generate the correct character sequence from the incorrect character sequence using contextual information. We have designed a transformer-based encoder-decoder model for Bangla address text correction. To train the address text correction model architecture, we consider the following parameter setting and evaluation metrics." }, { "figure_ref": [], "heading": "Parameter Setting", "publication_ref": [ "b88" ], "table_ref": [], "text": "During the training phase, we select a batch size of 32, an initial learning rate of 0.00005, 10000 warmup steps, and a weight decay rate of 0.01. For the tokenization step, we choose the Byte-Pair Encoding (BPE) tokenizer (Sennrich et al., 2015)." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "During the training and testing phase, we evaluate the address text correction model using the Word Level Accuracy (WLA), which is similar to the evaluation metric used for the Bangla address text recognition model. If the number of correctly predicted words in the output sequence is W_c and the number of words in the ground-truth sequence is W, then we calculate the WLA using the following Equation 9.\nWLA = \frac{W_c}{W} \quad (9)" }, { "figure_ref": [], "heading": "Training and Testing Results for Address Text Correction Model", "publication_ref": [], "table_ref": [], "text": "We have created the synthetic Bangla address correction (Syn-Bn-AC) dataset with 60k pairs of incorrect and correct address text. We have partitioned the Syn-Bn-AC dataset into three subsets: train set (80%), validation set (10%), and test set (10%). By considering the above parameter setting and evaluation metrics, we have trained the sequence-to-sequence transformer-based Encoder-Decoder model using the Syn-Bn-AC dataset for Bangla address text correction. We have trained the model until convergence.
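For reference, the word-level accuracy of Equation 9 could be computed over a held-out set along the following lines; counting a predicted word as correct when it matches the ground-truth word at the same position is our assumption about how W_c is obtained.

```python
# An illustrative implementation of the word-level accuracy (WLA) defined in
# Equation 9, evaluated over (predicted, ground-truth) address pairs.
# Position-wise matching is an assumption about how correct words are counted.
def word_level_accuracy(pairs):
    correct, total = 0, 0
    for predicted, reference in pairs:
        pred_words = predicted.split()
        ref_words = reference.split()
        total += len(ref_words)                                        # W
        correct += sum(p == r for p, r in zip(pred_words, ref_words))  # W_c
    return correct / total if total else 0.0

test_pairs = [
    ("বাড়ি ১২ রোড ৫ ধানমন্ডি ঢাকা", "বাড়ি ১২ রোড ৫ ধানমন্ডি ঢাকা"),
    ("বাড়ি ৭ রেড ২ মিরপুর ঢাকা", "বাড়ি ৭ রোড ২ মিরপুর ঢাকা"),
]  # hypothetical predictions and references
print(f"WLA = {word_level_accuracy(test_pairs):.3f}")
```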
We have evaluated the model on the test dataset and found a word-level accuracy of 98.1%." }, { "figure_ref": [], "heading": "Address Text Parser Model", "publication_ref": [], "table_ref": [], "text": "We have proposed two different types of address text parser models by considering the parsing as sequence-to-sequence modeling and token classification problem. We design sequence-to-sequence encoder-decoder models with RNN, LSTM, and Bi-LSTM units to parse the address text. Moreover, we propose a token classification model for Bangla address text parsing problem using the transformer-based pre-trained language named banglabert. While training the model architectures, we consider the following parameter setting and evaluation metrics." }, { "figure_ref": [], "heading": "Parameter Setting", "publication_ref": [ "b6" ], "table_ref": [], "text": "While training the Bangla address text parser model, we choose a batch size of 32, an initial learning rate of 0.0001, a number of warmup steps of 5000, and a weight decay rate of 0.01. For the tokenization step, we choose the WordPiece tokenization algorithm (Bhattacharjee et al., 2021)." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "We evaluate the Bangla address text parser model during the training and testing phase using Accuracy, Precision, Recall, and F1 Score as the evaluation metrics. To calculate the evaluation metrics, we use a popular sequence labeling evaluator Python library named seqeval11 . Sample calculations of the evaluation metrics are found in the HuggingFace token classification pipeline12 ." }, { "figure_ref": [], "heading": "Training and Testing Results for Address Text Parser Model", "publication_ref": [], "table_ref": [], "text": "To extend the novel Bangla Address Parsing (Bn-AP) dataset, we apply different augmentation techniques such as randomly removing address components, randomly swapping two address components, revising the address, and removing punctuation. The final augmented Bn-AP dataset contains 30k labeled data. We have divided the Bn-AP dataset into three subsets: training set (80%), validation set (10%), and testing set (10%).\nWe have trained the sequence-to-sequence Encoder-Decoder model with RNN, LSTM, and Bi-LSTM units and finetuned the token classification model using Banglabert model before convergence. The Encoder-Decoder model with RNN units is the simplest model and is considered the baseline model for the address text parsing problem. Table 2 shows the Accuracy, Precision, Recall, and F1 Scores for different Bangla address text parser models on the test dataset. We have found that the token classification model using the Banglabert model provides the best results with a Precision of 0.96, Recall of 0.97, F1 Score of 0.965, and Accuracy of 97.32%." }, { "figure_ref": [], "heading": "Conclusion and Future Works", "publication_ref": [], "table_ref": [], "text": "In this research work, we have developed an endto-end system for detecting, recognizing, and parsing the Bangla address text from natural scene images containing signboards using deep learning-based approaches. We have developed manually annotated or synthetic datasets for detecting, recognizing, correcting, and parsing address text from the natural scene. We have trained and evaluated different Yolo-based model architectures for detecting signboards from natural scene images and the address text from signboard images. 
We have conducted a performance analysis among different CTC-based and Encoder-Decoder models with attention mechanisms for Bangla address text recognition and found the best-performing model architecture. We have introduced a novel address correction model using a sequence-to-sequence transformer network to improve the performance of Bangla text recognition by post-correction. Finally, We have developed a Bangla address parser using the state-of-the-art transformer-based pre-trained language model.\nIn the future, we will extend our novel signboard dataset for multi-modal analysis by utilizing both the visual context and the textual context found in the signboard images. By creating a synthetic Bangla OCR dataset using a general text corpus, we can train a general Bangla scene text recognition model that can recognize Bangla text from natural scene images. We can explore the possibility of using the transform-based model for the sequence modeling layer in the text recognition model. Finally, we can train a general text correction model by training the sequence-to-sequence transformerbased model architecture with a Bangla text correction dataset created by using a general text corpus." } ]
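For readers who want to reproduce the sequence-labeling evaluation used for the address text parser above, the Precision, Recall, F1 Score, and Accuracy values are obtained from BIO-tagged sequences with the seqeval library, roughly as follows. The component tag names in the example are hypothetical placeholders and not necessarily the exact label set of the Bn-AP dataset.

```python
from seqeval.metrics import accuracy_score, classification_report, f1_score, precision_score, recall_score

# Gold and predicted BIO tag sequences, one inner list per address.
# ROAD / AREA / CITY are illustrative component names only.
y_true = [["B-ROAD", "I-ROAD", "B-AREA", "B-CITY", "O"]]
y_pred = [["B-ROAD", "I-ROAD", "B-AREA", "B-AREA", "O"]]

print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("f1:       ", f1_score(y_true, y_pred))
print("accuracy: ", accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred))
```

Aggregating these calls over the Bn-AP test set yields scores in the same format as those reported in Table 2 for the parser models.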
Retrieving textual information from natural scene images is an active research area in the field of computer vision with numerous practical applications. Detecting text regions and extracting text from signboards is a challenging problem due to special characteristics such as reflected light, uneven illumination, or shadows found in real-life natural scene images. With the advent of deep learning-based methods, different sophisticated techniques have been proposed for text detection and text recognition from natural scenes. Though a significant amount of effort has been devoted to extracting natural scene text for high-resource languages like English, little has been done for low-resource languages like Bangla. In this research work, we have proposed an end-to-end system with deep learning-based models for efficiently detecting, recognizing, correcting, and parsing address information from Bangla signboards. We have created manually annotated datasets and synthetic datasets to train signboard detection, address text detection, address text recognition, address text correction, and address text parser models. We have conducted a comparative study among different CTC-based and Encoder-Decoder model architectures for Bangla address text recognition. Moreover, we have designed a novel address text correction model using a sequence-to-sequence transformer-based network to improve the performance of the Bangla address text recognition model by post-correction. Finally, we have developed a Bangla address text parser using a state-of-the-art transformer-based pre-trained language model.
Towards Detecting, Recognizing, and Parsing the Address Information from Bangla Signboard: A Deep Learning-based Approach
[ { "figure_caption": "Figure 1 :1Figure 1: An overview of the research problem", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An overview of the proposed solution", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: An overview of the end-to-end system", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "this research, we have developed two detection models -the signboard detection model and the address text detection model. The signboard detection model detects the signboard portion from the natural scene image and the address text detection model then detects the address text portion from the cropped signboard image. The deep learning-based approaches are the most effective technique for detection. There are two different deep learning-based approaches for detection such as segmentation-based and object detection based where object detection-based techniques are the most effective one for real-time detection. There are two different types of object detection approaches such as two-stage detection and one-stage detection where one stage detection approach is faster and more accurate than two-stage detection. The Yolo-based object detection model and its improved versions are examples of one-stage detection approaches. In this research work, we train and evaluate different recent Yolo-based object detection models such as YOLOv3, YOLOv4, and YOLOv5 for both the signboard detection model and the address text detection model. Figure 4 shows a high-level overview of the Yolo-based model architecture for both the signboard detection model and the address text detection model. The Yolo-based model is a single unified network that considers the object detection problem as a regression problem to simultaneously predict the rectangular bounding boxes and the classwise probabilities for each object. The Yolo-based model enables to design and train end-to-end models for real-time object detection. There are three main components of the Yolo-based model: backbone, neck, and prediction head.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: A high-level overview of the Yolo-based model architecture for both the signboard detection model and the address text detection", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "The head of the Yolo-based model consists of convolution layers and fully connected layers. The YOLOv3 model uses the Darknet-53 architecture as the backbone model for feature extraction from the input image. The feature fusion layers are utilized to design the neck of the YOLOv3 model. The feature fusion layers apply upsampling and concatenation operations to combine the features from different layers of Darnet-53 architecture so that the YOLOv3 model can handle the scale variation of the signboard. The head of YOLOv3 consists of different detection layers to predict the rectangular bounding boxes and the class-wise probabilities. The YOLOv4 model utilizes a modified version of the CSPDarknet53 architecture as the backbone, which extracts high-level features from the input image through stacks of convolutional layers. 
The neck component of YOLOv4 employs feature fusion layers and feature pyramid modules to combine features from different scales produced by different layers of the backbone. This fusion allows the model to effectively handle variations in scale. Additionally, the neck incorporates additional convolutional layers to refine and integrate the features further. The head of the YOLOv4 model contains multiple detection layers responsible for predicting rectangular bounding boxes and class probabilities. The YOLOv5 model also uses the CSPDark-net53 architecture as the backbone. The YOLOv5 model utilizes the spatial pyramid pooling module with Path Aggregation Network(PANet) to design the neck. Finally, the head of YOLOv5 consists of different detection layers to predict the rectangular bounding boxes and the class-wise probabilities. The implementation of the YOLOv5 model is on a popular Python library called Pytorch so that researchers can develop an object detection model easily compared to the previous version of the Yolo-based model. The performance of the YOLOv5 model is almost similar to the YOLOv4 model as both models use the same backbone of CSPDarknet53.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The CTC-based Model Architecture for Bangla Address Text Recognition", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: The Encoder-Decoder Model Architecture for Bangla Address Text Recognition", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The transformer-based encoder-decoder model architecture for Bangla address text correction", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Sequence-to-sequence Encoder-Decoder models using RNN, LSTM, and Bi-LSTM units for the Bangla address text parsing problem", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Sample natural scene images from the collected raw images", "figure_data": "", "figure_id": "fig_10", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: The annotation process of the Bangla Signboard Detection (Bn-SD) Dataset using LabelImg. For each image, the LabelImg tool creates a txt label file containing the class number (0 for only one class named signboard) and 4 values representing the relative location of the rectangular box.", "figure_data": "", "figure_id": "fig_11", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: The annotation process of the Bangla Address Detection (Bn-AD) Dataset using LabelImg. 
For each image, the LabelImg tool creates a txt label file containing the class number (0 for only one class named address) and 4 values representing the relative location of the rectangular box.", "figure_data": "", "figure_id": "fig_12", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Some sample data from the Synthetic Bangla OCR (Syn-Bn-OCR) dataset generated by the Tex-tRecognitionDataGenerator tool", "figure_data": "", "figure_id": "fig_13", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: Some samples from the Synthetic Bangla Address Correction (Syn-Bn-AC) dataset", "figure_data": "", "figure_id": "fig_14", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: Some sample data from the Bangla Address Parsing (Bn-AP) dataset", "figure_data": "", "figure_id": "fig_15", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16 :16Figure 16: AP scores of different Yolo-based model architectures for signboard detection model", "figure_data": "", "figure_id": "fig_16", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "novel Bn-AD dataset for Bangla address text detection. We have trained each model for 10000 iterations. After the training process, we have conducted a comparative study among YOLOv3, YOLOv4, and YOLOv5 models by calculating the AP score for address text detection. We have evaluated the AP score on the test dataset for each Yolo-based model. The YOLOv3 model provides an AP score of 93.5%, whereas the YOLOv4 and YOLOv5 models show AP scores of 97.0% and 96.5% respectively for address text detection. Figure 17 shows the AP scores of different Yolo-based model architectures in a bar chart for the address text detection model. The YOLOv4 and YOLOv5 models provide nearly similar results due to the similarity of the backbone model of CSPDark-net53.", "figure_data": "", "figure_id": "fig_17", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 17 :17Figure 17: AP scores of different Yolo-based model architectures for address text detection model", "figure_data": "", "figure_id": "fig_18", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "A comparison among different Bangla address text recognition models by calculating word recognition accuracy on the test set", "figure_data": "Type of FrameworkModel ArchitectureWRAVGG+Bi-LSTM+CTC (Baseline)91.5%CTC-basedRCNN+Bi-LSTM+CTC GRCL+Bi-LSTM+CTC92.9% 92.2%ResNet+Bi-LSTM+CTC94.5%VGG+Bi-LSTM+Attention (Baseline) 91.4%Encoder-DecoderRCNN+Bi-LSTM+Attention GRCL+Bi-LSTM+Attention92.2% 92.8%ResNet+Bi-LSTM+Attention93.5%", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "If the number of correct words in the", "figure_data": "Model ArchitecturePrecision Recall F1 Score AccuracyEncoder-Decoder model with RNN (Baseline)0.890.900.89590.16%Encoder-Decoder model with LSTM0.920.940.9393.30%Encoder-Decoder model with Bi-LSTM0.930.950.9494.55%Token classification model with Banglabert0.960.970.96597.32%", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "A comparison among different Bangla address text parser models on the test set", "figure_data": "", "figure_id": "tab_6", "figure_label": "2", "figure_type": "table" } ]
Hasan Murad; Mohammed Eunus
[ { "authors": "Nosheen Abid; Adnan Ul Hasan; Faisal Shafait", "journal": "IEEE", "ref_id": "b0", "title": "Deepparse: A trainable postal address parser", "year": "2018" }, { "authors": "Tasnim Ahmed; Md Nishat Raihan; Rafsanjany Kushol; Md Sirajus Salekin", "journal": "IEEE", "ref_id": "b1", "title": "A complete bangla optical character recognition system: An effective approach", "year": "2019" }, { "authors": "Muhammad Ali; Hassan Foroosh", "journal": "IEEE", "ref_id": "b2", "title": "Character recognition in natural scene images using rank-1 tensor decomposition", "year": "2016" }, { "authors": "Dzmitry Bahdanau; Kyunghyun Cho; Yoshua Bengio", "journal": "", "ref_id": "b3", "title": "Neural machine translation by jointly learning to align and translate", "year": "2014" }, { "authors": "Fan Bai; Zhanzhan Cheng; Yi Niu; Shiliang Pu; Shuigeng Zhou", "journal": "", "ref_id": "b4", "title": "Edit probability for scene text recognition", "year": "2018" }, { "authors": "Serge Belongie; Jitendra Malik; Jan Puzicha", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b5", "title": "Shape matching and object recognition using shape contexts", "year": "2002" }, { "authors": "Abhik Bhattacharjee; Tahmid Hasan; Uddin Wasi; Kazi Ahmad; Md Samin; Anindya Saiful Islam; M Iqbal; Rifat Sohel Rahman; Shahriyar", "journal": "", "ref_id": "b6", "title": "Banglabert: Language model pretraining and benchmarks for low-resource language understanding evaluation in bangla", "year": "2021" }, { "authors": "Alessandro Bissacco; Mark Cummins; Yuval Netzer; Hartmut Neven", "journal": "", "ref_id": "b7", "title": "Photoocr: Reading text in uncontrolled conditions", "year": "2013" }, { "authors": "Alexey Bochkovskiy; Chien-Yao Wang; Hong-Yuan Mark Liao", "journal": "", "ref_id": "b8", "title": "Yolov4: Optimal speed and accuracy of object detection", "year": "2020" }, { "authors": "Vinayak Borkar; Kaustubh Deshmukh; Sunita Sarawagi", "journal": "", "ref_id": "b9", "title": "Automatic segmentation of text into structured records", "year": "2001" }, { "authors": "John Canny", "journal": "IEEE Transactions on pattern analysis and machine intelligence", "ref_id": "b10", "title": "A computational approach to edge detection", "year": "1986" }, { "authors": "Datong Chen; Hervé Bourlard; J-P Thiran", "journal": "IEEE", "ref_id": "b11", "title": "Text identification in complex background using svm", "year": "2001" }, { "authors": "Huizhong Chen; Sam S Tsai; Georg Schroth; Radek David M Chen; Bernd Grzeszczuk; Girod", "journal": "IEEE", "ref_id": "b12", "title": "Robust text detection in natural images with edgeenhanced maximally stable extremal regions", "year": "2011" }, { "authors": "Jie Chen; Zhouhui Lian; Yizhi Wang; Yingmin Tang; Jianguo Xiao", "journal": "Science China Information Sciences", "ref_id": "b13", "title": "Irregular scene text detection via attention guided border labeling", "year": "2019" }, { "authors": "Xiangrong Chen; Alan L Yuille", "journal": "IEEE", "ref_id": "b14", "title": "Detecting and reading text in natural scenes", "year": "2004" }, { "authors": "Xiaoxue Chen; Lianwen Jin; Yuanzhi Zhu; Canjie Luo; Tianwei Wang", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b15", "title": "Text recognition in the wild: A survey", "year": "2021" }, { "authors": "Xilin Chen; Jie Yang; Jing Zhang; Alex Waibel", "journal": "IEEE Transactions on image processing", "ref_id": "b16", "title": "Automatic detection and recognition of signs from natural scenes", "year": "2004" 
}, { "authors": "Zhanzhan Cheng; Fan Bai; Yunlu Xu; Gang Zheng; Shiliang Pu; Shuigeng Zhou", "journal": "", "ref_id": "b17", "title": "Focusing attention: Towards accurate text recognition in natural images", "year": "2017" }, { "authors": "Tim Churches; Peter Christen; Kim Lim; Justin Xi Zhu", "journal": "BMC Medical Informatics and Decision Making", "ref_id": "b18", "title": "Preparation of name and address data for record linkage using hidden markov models", "year": "2002" }, { "authors": "Dorin Comaniciu; Peter Meer", "journal": "IEEE Transactions on pattern analysis and machine intelligence", "ref_id": "b19", "title": "Mean shift: A robust approach toward feature space analysis", "year": "2002" }, { "authors": "Helen Craig; Dragomir Yankov; Renzhong Wang; Pavel Berkhin; Wei Wu", "journal": "", "ref_id": "b20", "title": "Scaling address parsing sequence models through active learning", "year": "2019" }, { "authors": "Navneet Dalal; Bill Triggs", "journal": "", "ref_id": "b21", "title": "Histograms of oriented gradients for human detection", "year": "2005" }, { "authors": "Teófilo Emídio; De Campos; Bodla Rakesh Babu; Manik Varma", "journal": "VISAPP", "ref_id": "b22", "title": "Character recognition in natural images", "year": "2009" }, { "authors": "Dan Deng; Haifeng Liu; Xuelong Li; Deng Cai", "journal": "", "ref_id": "b23", "title": "Pixellink: Detecting scene text via instance segmentation", "year": "2018" }, { "authors": "Boris Epshtein; Eyal Ofek; Yonatan Wexler", "journal": "IEEE", "ref_id": "b24", "title": "Detecting text in natural scenes with stroke width transform", "year": "2010" }, { "authors": "Mark Everingham; Luc Van Gool; K I Christopher; John Williams; Andrew Winn; Zisserman", "journal": "International journal of computer vision", "ref_id": "b25", "title": "The pascal visual object classes (voc) challenge", "year": "2010" }, { "authors": "Yunze Gao; Yingying Chen; Jinqiao Wang; Hanqing Lu", "journal": "", "ref_id": "b26", "title": "Reading scene text with attention convolutional sequence modeling", "year": "2017" }, { "authors": "Ross Girshick", "journal": "", "ref_id": "b27", "title": "Fast r-cnn", "year": "2015" }, { "authors": "Ross Girshick; Jeff Donahue; Trevor Darrell; Jitendra Malik", "journal": "", "ref_id": "b28", "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "year": "2014" }, { "authors": "Lluis Gomez; Dimosthenis Karatzas", "journal": "IEEE", "ref_id": "b29", "title": "Object proposals for text extraction in the wild", "year": "2015" }, { "authors": "Alex Graves; Marcus Liwicki; Horst Bunke; Jürgen Schmidhuber; Santiago Fernández", "journal": "Advances in neural information processing systems", "ref_id": "b30", "title": "Unconstrained on-line handwriting recognition with recurrent neural networks", "year": "2007" }, { "authors": "Ankush Gupta; Andrea Vedaldi; Andrew Zisserman", "journal": "", "ref_id": "b31", "title": "Synthetic data for text localisation in natural images", "year": "2016" }, { "authors": "Dong Haifeng; Han Siqi", "journal": "IOP Publishing", "ref_id": "b32", "title": "Natural scene text detection based on yolo v2 network model", "year": "2020" }, { "authors": "Muhammad Shehzad; Lionel Hanif; Prevost", "journal": "IEEE", "ref_id": "b33", "title": "Text detection and localization in complex scene images using constrained adaboost algorithm", "year": "2009" }, { "authors": "Muhammad Shehzad; Lionel Hanif; Pablo Augusto Prevost; Negri", "journal": "IEEE", "ref_id": "b34", "title": "A cascade 
detector for text detection in natural scene images", "year": "2008" }, { "authors": "Kaiming He; Georgia Gkioxari; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b35", "title": "Mask r-cnn", "year": "2017" }, { "authors": "Pan He; Weilin Huang; Yu Qiao; Chen Loy; Xiaoou Tang", "journal": "", "ref_id": "b36", "title": "Reading scene text in deep convolutional sequences", "year": "2016" }, { "authors": "Lichao Huang; Yi Yang; Yafeng Deng; Yinan Yu", "journal": "", "ref_id": "b37", "title": "Densebox: Unifying landmark localization with end to end object detection", "year": "2015" }, { "authors": "Weilin Huang; Zhe Lin; Jianchao Yang; Jue Wang", "journal": "", "ref_id": "b38", "title": "Text localization in natural images using stroke feature transform and text covariance descriptors", "year": "2013" }, { "authors": "Weilin Huang; Yu Qiao; Xiaoou Tang", "journal": "Springer", "ref_id": "b39", "title": "Robust scene text detection with convolution neural network induced mser trees", "year": "2014-09-06" }, { "authors": "Asif Isthiaq; Asreen Najoa; Saif", "journal": "International Journal of Modern Education and Computer Science", "ref_id": "b40", "title": "Ocr for printed bangla characters using neural network", "year": "2020" }, { "authors": "Max Jaderberg; Karen Simonyan; Andrea Vedaldi; Andrew Zisserman", "journal": "International journal of computer vision", "ref_id": "b41", "title": "Reading text in the wild with convolutional neural networks", "year": "2016" }, { "authors": "Max Jaderberg; Andrea Vedaldi; Andrew Zisserman", "journal": "Springer", "ref_id": "b42", "title": "Deep features for text spotting", "year": "2014-09-06" }, { "authors": "Ravpreet Kaur; Sarbjeet Singh", "journal": "Digital Signal Processing", "ref_id": "b43", "title": "A comprehensive review of object detection with deep learning", "year": "2022" }, { "authors": "Sulaiman Khan; Shah Nazir; Habib Ullah Khan ; A", "journal": "IEEE Access", "ref_id": "b44", "title": "Analysis of navigation assistants for blind and visually impaired people: A systematic review", "year": "2021" }, { "authors": "Tauseef Khan; Ram Sarkar; Ayatullah Faruk; Mollah ", "journal": "Artificial Intelligence Review", "ref_id": "b45", "title": "Deep learning approaches scene text detection: a comprehensive review", "year": "2021" }, { "authors": "Tahani Khatib; Huda Karajeh; Hiba Mohammad; Lama Rajab", "journal": "Scientific Research and Essays", "ref_id": "b46", "title": "A hybrid multilevel text extraction algorithm in scene images", "year": "2015" }, { "authors": "In Kwang; Keechul Kim; Jin Jung; Kim Hyung", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b47", "title": "Texture-based approach for text detection in images using support vector machines and continuously adaptive mean shift algorithm", "year": "2003" }, { "authors": "Thananop Kobchaisawat; Shin'ichi Thanarat H Chalidabhongse; Satoh", "journal": "Electronics", "ref_id": "b48", "title": "Scene text detection with polygon offsetting and border augmentation", "year": "2020" }, { "authors": "Hyung Il; Koo ; Nam Ik Cho", "journal": "IEEE Transactions on Image Processing", "ref_id": "b49", "title": "Text-line extraction in handwritten chinese documents based on an energy minimization framework", "year": "2011" }, { "authors": "Bineeth Kuriakose; Raju Shrestha; Frode Eika Sandnes", "journal": "IETE Technical Review", "ref_id": "b50", "title": "Tools and technologies for blind and visually impaired navigation support: a review", "year": 
"2022" }, { "authors": "Chen-Yu Lee; Simon Osindero", "journal": "", "ref_id": "b51", "title": "Recursive recurrent nets with attention modeling for ocr in the wild", "year": "2016" }, { "authors": "Seonghun Lee; Jin Hyung; Kim ", "journal": "Image and Vision Computing", "ref_id": "b52", "title": "Integrating multiple character proposals for robust scene text extraction", "year": "2013" }, { "authors": "Xiang Li; Hakan Kardes; Xin Wang; Ang Sun", "journal": "", "ref_id": "b53", "title": "Hmm-based address parsing with massive synthetic training data generation", "year": "2014" }, { "authors": "Yao Li; Huchuan Lu", "journal": "IEEE", "ref_id": "b54", "title": "Scene text detection via stroke width", "year": "2012" }, { "authors": "Minghui Liao; Baoguang Shi; Xiang Bai", "journal": "IEEE transactions on image processing", "ref_id": "b55", "title": "Textboxes++: A single-shot oriented scene text detector", "year": "2018" }, { "authors": "Minghui Liao; Baoguang Shi; Xiang Bai; Xinggang Wang; Wenyu Liu", "journal": "", "ref_id": "b56", "title": "Textboxes: A fast text detector with a single deep neural network", "year": "2017" }, { "authors": "Rainer W Lienhart; Frank Stuber", "journal": "SPIE", "ref_id": "b57", "title": "Automatic text recognition in digital videos", "year": "1996" }, { "authors": "Han Lin; Peng Yang; Fanlong Zhang", "journal": "Archives of computational methods in engineering", "ref_id": "b58", "title": "Review of scene text detection and recognition", "year": "2020" }, { "authors": "Wei Liu; Dragomir Anguelov; Dumitru Erhan; Christian Szegedy; Scott Reed; Cheng-Yang Fu; Alexander C Berg", "journal": "Springer", "ref_id": "b59", "title": "Ssd: Single shot multibox detector", "year": "2016-10-11" }, { "authors": "Zichuan Liu; Yixing Li; Fengbo Ren; Wang Ling Goh; Hao Yu", "journal": "", "ref_id": "b60", "title": "Squeezedtext: A real-time scene text recognition by binary convolutional encoderdecoder network", "year": "2018" }, { "authors": "Jonathan Long; Evan Shelhamer; Trevor Darrell", "journal": "", "ref_id": "b61", "title": "Fully convolutional networks for semantic segmentation", "year": "2015" }, { "authors": "Shangbang Long; Xin He; Cong Yao", "journal": "International Journal of Computer Vision", "ref_id": "b62", "title": "Scene text detection and recognition: The deep learning era", "year": "2021" }, { "authors": "Shangbang Long; Jiaqiang Ruan; Wenjie Zhang; Xin He; Wenhao Wu; Cong Yao", "journal": "", "ref_id": "b63", "title": "Textsnake: A flexible representation for detecting text of arbitrary shapes", "year": "2018" }, { "authors": " Lowe", "journal": "Int. 
J", "ref_id": "b64", "title": "Sift-the scale invariant feature transform", "year": "2004" }, { "authors": "Jiri Matas; Ondrej Chum; Martin Urban; Tomás Pajdla", "journal": "Image and vision computing", "ref_id": "b65", "title": "Robust wide-baseline stereo from maximally stable extremal regions", "year": "2004" }, { "authors": "Anand Mishra; Karteek Alahari; Jawahar", "journal": "IEEE", "ref_id": "b66", "title": "An mrf model for binarization of natural scene text", "year": "2011" }, { "authors": "Anand Mishra; Karteek Alahari; Jawahar", "journal": "", "ref_id": "b67", "title": "Scene text recognition using higher order language priors", "year": "2012" }, { "authors": " Bmva", "journal": "", "ref_id": "b68", "title": "", "year": "" }, { "authors": "Shekoofeh Mokhtari; Ahmad Mahmoody; Dragomir Yankov; Ning Xie", "journal": "", "ref_id": "b69", "title": "Tagging address queries in maps search", "year": "2019" }, { "authors": "Veronica Naosekpam; Sushant Aggarwal; Nilkanta Sahu", "journal": "Springer", "ref_id": "b70", "title": "Utextnet: a unet based arbitrary shaped scene text detector", "year": "2021" }, { "authors": "Veronica Naosekpam; Naukesh Kumar; Nilkanta Sahu", "journal": "Springer", "ref_id": "b71", "title": "Multi-lingual indian text detector for mobile devices", "year": "2020" }, { "authors": "Veronica Naosekpam; Nilkanta Sahu", "journal": "International Journal of Multimedia Information Retrieval", "ref_id": "b72", "title": "Text detection, recognition, and script identification in natural scene images: A review", "year": "2022" }, { "authors": "Lukas Neumann; Jiri Matas", "journal": "Springer", "ref_id": "b73", "title": "A method for text localization and recognition in real-world images", "year": "2010" }, { "authors": "Nomura Matsunobu; Kageyama Yoichi; Ishizawa Chikako; Makoto Nishida", "journal": "ternational Journal of the Society of Materials Engineering for Resources", "ref_id": "b74", "title": "Automatic extraction of character sequences from electric signboards in nighttime scene images in japan", "year": "2014" }, { "authors": "Shigueo Nomura; Keiji Yamanaka; Osamu Katai; Hiroshi Kawakami; Takayuki Shiose", "journal": "Pattern Recognition", "ref_id": "b75", "title": "A novel adaptive morphological approach for degraded character image segmentation", "year": "2005" }, { "authors": "Rafael Padilla; Sergio L Netto; Eduardo Ab Da Silva", "journal": "IEEE", "ref_id": "b76", "title": "A survey on performance metrics for object-detection algorithms", "year": "2020" }, { "authors": "Yi-Feng Pan; Xinwen Hou; Cheng-Lin Liu", "journal": "IEEE", "ref_id": "b77", "title": "Text localization in natural scene images based on conditional random field", "year": "2009" }, { "authors": "Yi-Feng Pan; Xinwen Hou; Cheng-Lin Liu", "journal": "IEEE transactions on image processing", "ref_id": "b78", "title": "A hybrid approach to detect and localize texts in natural scene images", "year": "2010" }, { "authors": "Jonghyun Park; Gueesang Lee; Euichul Kim; Junsik Lim; Soohyung Kim; Hyungjeong Yang; Myunghun Lee; Seongtaek Hwang", "journal": "Pattern Recognition Letters", "ref_id": "b79", "title": "Automatic detection and recognition of korean text in outdoor signboard images", "year": "2010" }, { "authors": "Haodi Hongbo Qin; Hai Zhang; Yujin Wang; Min Yan; Wei Zhang; Zhao", "journal": "Applied Sciences", "ref_id": "b80", "title": "An algorithm for scene text detection using multibox and semantic segmentation", "year": "2019" }, { "authors": "Siyang Qin; Roberto Manduchi", "journal": "IEEE", 
"ref_id": "b81", "title": "Cascaded segmentation-detection networks for word-level text spotting", "year": "2017" }, { "authors": "Joseph Redmon; Santosh Divvala; Ross Girshick; Ali Farhadi", "journal": "", "ref_id": "b82", "title": "You only look once: Unified, real-time object detection", "year": "2016" }, { "authors": "Joseph Redmon; Ali Farhadi", "journal": "", "ref_id": "b83", "title": "Yolo9000: better, faster, stronger", "year": "2017" }, { "authors": "Joseph Redmon; Ali Farhadi", "journal": "", "ref_id": "b84", "title": "Yolov3: An incremental improvement", "year": "2018" }, { "authors": "Kaiming Shaoqing Ren; Ross He; Jian Girshick; Sun", "journal": "Advances in neural information processing systems", "ref_id": "b85", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "Partha Pratim; Roy ; Umapada Pal; Josep Lladós; Mathieu Delalandre", "journal": "IEEE", "ref_id": "b86", "title": "Multi-oriented and multisized touching character segmentation using dynamic programming", "year": "2009" }, { "authors": "Prithwish Sen; Anindita Das; Nilkanta Sahu", "journal": "Springer", "ref_id": "b87", "title": "End-to-end scene text recognition system for devanagari and bengali text", "year": "2021" }, { "authors": "Rico Sennrich; Barry Haddow; Alexandra Birch", "journal": "", "ref_id": "b88", "title": "Neural machine translation of rare words with subword units", "year": "2015" }, { "authors": "Hai-Lin Shao; Yi Ji; Ying Li; Chun-Ping Liu", "journal": "IEEE", "ref_id": "b89", "title": "Bdfpn: Bi-direction feature pyramid network for scene text detection", "year": "2021" }, { "authors": "Shikhar Sharma; Ritesh Ratti; Ishaan Arora; Anshul Solanki; Gaurav Bhatt", "journal": "IEEE", "ref_id": "b90", "title": "Automated parsing of geographical addresses: A multilayer feedforward neural network based approach", "year": "2018" }, { "authors": "Karthik Sheshadri; Santosh Kumar; Divvala ", "journal": "", "ref_id": "b91", "title": "Exemplar driven character recognition in the wild", "year": "2012" }, { "authors": "Baoguang Shi; Xiang Bai; Cong Yao", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b92", "title": "An end-to-end trainable neural network for image-based sequence recognition and its application to scene text recognition", "year": "2016" }, { "authors": "Palaiahnakote Shivakumara; Souvik Bhowmick; Bolan Su; Chew Lim Tan; Umapada Pal", "journal": "IEEE", "ref_id": "b93", "title": "A new gradient based character segmentation method for video text recognition", "year": "2011" }, { "authors": "Ilya Sutskever; Oriol Vinyals; Quoc V Le", "journal": "Advances in neural information processing systems", "ref_id": "b94", "title": "Sequence to sequence learning with neural networks", "year": "2014" }, { "authors": "Weilin Zhi Tian; Tong Huang; Pan He; Yu He; Qiao", "journal": "Springer", "ref_id": "b95", "title": "Detecting text in natural image with connectionist text proposal network", "year": "2016-10-11" }, { "authors": "Jörg Tiedemann; Santhosh Thottingal", "journal": "European Association for Machine Translation", "ref_id": "b96", "title": "Opusmt-building open translation services for the world", "year": "2020" }, { "authors": "Manik Varma; Andrew Zisserman", "journal": "Springer", "ref_id": "b97", "title": "Classifying images of materials: Achieving viewpoint and illumination independence", "year": "2002-05-28" }, { "authors": "Kai Wang; Boris Babenko; Serge Belongie", "journal": "IEEE", 
"ref_id": "b98", "title": "End-to-end scene text recognition", "year": "2011" }, { "authors": "Kai Wang; Serge Belongie", "journal": "Springer", "ref_id": "b99", "title": "Word spotting in the wild", "year": "2010-09-05" }, { "authors": "Minlue Wang; Valeriia Haberland; Amos Yeo; Andrew Martin; John Howroyd; J Mark Bishop", "journal": "IEEE", "ref_id": "b100", "title": "A probabilistic address parser using conditional random fields and stochastic regular grammar", "year": "2016" }, { "authors": "Tao Wang; David J Wu; Adam Coates; Andrew Y Ng", "journal": "IEEE", "ref_id": "b101", "title": "End-to-end text recognition with convolutional neural networks", "year": "2012" }, { "authors": "Wenhai Wang; Enze Xie; Xiang Li; Wenbo Hou; Tong Lu; Gang Yu; Shuai Shao; ; ", "journal": "", "ref_id": "b102", "title": "Shape robust text detection with progressive scale expansion network", "year": "2019" }, { "authors": "Wenhai Wang; Enze Xie; Xiaoge Song; Yuhang Zang; Wenjia Wang; Tong Lu; Gang Yu; Chunhua Shen", "journal": "", "ref_id": "b103", "title": "Efficient and accurate arbitrary-shaped text detection with pixel aggregation network", "year": "2019" }, { "authors": "Erik Jerod J Weinman; Allen Learned-Miller; Hanson", "journal": "IEEE", "ref_id": "b104", "title": "Fast lexicon-based scene text recognition with sparse belief propagation", "year": "2007" }, { "authors": "Zbigniew Wojna; Dar-Shyang Alexander N Gorban; Kevin Lee; Qian Murphy; Yeqing Yu; Julian Li; Ibarz", "journal": "IEEE", "ref_id": "b105", "title": "Attention-based extraction of structured information from street view imagery", "year": "2017" }, { "authors": "Qiangpeng Yang; Mengli Cheng; Wenmeng Zhou; Yan Chen; Minghui Qiu; Wei Lin; Wei Chu", "journal": "", "ref_id": "b106", "title": "Inceptext: A new inception-text module with deformable psroi pooling for multi-oriented scene text detection", "year": "2018" }, { "authors": "Cong Yao; Xiang Bai; Nong Sang; Xinyu Zhou; Shuchang Zhou; Zhimin Cao", "journal": "", "ref_id": "b107", "title": "Scene text detection via holistic, multi-channel prediction", "year": "2016" }, { "authors": "Marouane Yassine; David Beauchemin; François Laviolette; Luc Lamontagne", "journal": "IEEE", "ref_id": "b108", "title": "Leveraging subword embeddings for multinational address parsing", "year": "2021" }, { "authors": "Qixiang Ye; Wen Gao; Weiqiang Wang; Wei Zeng", "journal": "IEEE", "ref_id": "b109", "title": "A robust text detection algorithm in images and video frames", "year": "2003" }, { "authors": "Chucai Yi; Yingli Tian", "journal": "Springer", "ref_id": "b110", "title": "Assistive text reading from complex background for blind persons", "year": "2012" }, { "authors": "Xuwang Xu-Cheng Yin; Kaizhu Yin; Hong-Wei Huang; Hao", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b111", "title": "Robust text detection in natural scene images", "year": "2013" }, { "authors": "Zheng Zhang; Chengquan Zhang; Wei Shen; Cong Yao; Wenyu Liu; Xiang Bai", "journal": "", "ref_id": "b112", "title": "Multi-oriented text detection with fully convolutional networks", "year": "2016" }, { "authors": "Yu Zhong; Kalle Karu; Anil K Jain", "journal": "Pattern recognition", "ref_id": "b113", "title": "Locating text in complex color images", "year": "1995" }, { "authors": "Zhuoyao Zhong; Lianwen Jin; Shuangping Huang", "journal": "IEEE", "ref_id": "b114", "title": "Deeptext: A new approach for text proposal generation and text detection in natural images", "year": "2017" }, { "authors": 
"Xinyu Zhou; Cong Yao; He Wen; Yuzhi Wang; Shuchang Zhou; Weiran He; Jiajun Liang", "journal": "", "ref_id": "b115", "title": "East: an efficient and accurate scene text detector", "year": "2017" }, { "authors": "Zhiwei Zhou; Linlin Li; Chew Lim; Tan ", "journal": "IEEE", "ref_id": "b116", "title": "Edge based binarization for video text images", "year": "2010" }, { "authors": "Xiangyu Zhu; Yingying Jiang; Shuli Yang; Xiaobing Wang; Wei Li; Pei Fu; Hua Wang; Zhenbo Luo", "journal": "IEEE", "ref_id": "b117", "title": "Deep residual text detection network for scene text", "year": "2017" } ]
[ { "formula_coordinates": [ 14, 127.23, 661.57, 162.63, 21.88 ], "formula_id": "formula_0", "formula_text": "p(l|y) = π:β(π)=l p(π|y)(1)" }, { "formula_coordinates": [ 22, 82.67, 396.64, 207.2, 9.81 ], "formula_id": "formula_1", "formula_text": "f ilter = (5 + number_of _classes) × 3 (2)" }, { "formula_coordinates": [ 22, 118.59, 593.88, 171.27, 24.67 ], "formula_id": "formula_2", "formula_text": "IoU = Area_of _Overlap Area_of _U nion(3)" }, { "formula_coordinates": [ 22, 323.32, 418, 201.82, 24.43 ], "formula_id": "formula_3", "formula_text": "F 1 Score = 2 × P recision × Recall P recision + Recall (6)" }, { "formula_coordinates": [ 22, 320.11, 610.15, 205.03, 32.12 ], "formula_id": "formula_4", "formula_text": "AP = k=n-1 k=0 [R(k) -R(k -1)] × P (k) (7)" }, { "formula_coordinates": [ 24, 149.92, 404.26, 139.95, 24.43 ], "formula_id": "formula_5", "formula_text": "W RA = W r W(8)" }, { "formula_coordinates": [ 25, 150.59, 236.75, 139.28, 24.43 ], "formula_id": "formula_6", "formula_text": "W LA = W c W (9)" } ]
10.1162/tacl_a_00407
2023-11-22
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b2", "b3", "b1", "b30", "b0", "b20", "b14", "b10", "b23", "b26", "b10", "b49", "b39", "b6", "b16", "b38", "b7" ], "table_ref": [], "text": "Large Language Models (LLMs) like GPT-3 (Brown et al., 2020), PaLM (Chowdhery et al., 2022), andGPT4 (OpenAI, 2023), followed by many open-source replications including LLaMA (Touvron et al., 2023a), Pythia (Biderman et al., 2023) are revolutionizing the paradigm and re-shaping the expectation of modern natural language processing. When further trained with alignment treatment (Ouyang et al., 2022;Bai et al., 2022), these LLMs further exhibit impressive capability in responding to generalized human instructions, which implies their potential as generalpurpose intelligent assistants and this has since attract considerable attention in the field and around the world.\nAs LLMs find more diverse applications and exert widespread influence, it becomes increasingly imperative to ensure their reliability and faithfulness, particularly in fields such as healthcare (Kung et al., 2023) and law (Huang et al., 2023). These are domains where inaccurate predictions can lead to significant, potentially severe challenges. However, due to the intrinsic autoregressive mechanism and complex system structures, the behaviours of these models can not be easily attributed or interpreted.\nConfidence calibration is an effective method to estimate a model's awareness of its uncertainty, and it helps enhance our understanding and assurance of the trustworthiness of deep models. Generally, it associates model output confidence, i.e. probability, with ground truth correctness likelihood (Guo et al., 2017) and informs the user to what extent the outputs should be trusted, even though they may not always be correct. Intuitively, for example, given 100 predictions in a classification task which are produced by a classifier and each of them is assigned 0.8 confidence, we expect 80 of them to be correctly classified for a well-calibrated classifier. As a consequence, better calibration of LLMs could significantly extend their usability. In early meteorology, calibration was noted as validity (Miller, 1962) or reliability (Murphy, 1973), indicating the trustworthiness of forecasters. Well calibrated probabilities can provide extra information for users to decide whether to trust the model's prediction, particularly for modern neural networks whose decisions are harder to interpret (Guo et al., 2017). Studies have also pointed out that calibration is helpful to reduce hallucination in language models (Xiao and Wang, 2021;Tian et al., 2020). Previous works have shown that pre-trained language models can generate well-calibrated predictions (Desai and Durrett, 2020;Kadavath et al., 2022). However, these works mainly concentrate on vanilla language models, while the aligned language models receive less focus. A newly proposed work evaluates calibration of some aligned models by prompting them to verbalize confidence in the token space (Tian et al., 2023), but it mainly studies black-box models, whose training process is not available, and thus can not provide insight into how model calibration is affected by different factors in the alignment training process. 
To conclude, a systematical study on the calibration of aligned language models is still missing, and our work aims to fill this gap.\nIn this work, we study the calibration of aligned language models in the entire building cycle and provide evidence on how to achieve decent model calibration. An overview of the scheme of out study is at Figure 1 Besides the understanding and generating ability, factual faithfulness and reasoning capability are two widely considered issues with large language models (Du et al., 2023). We also follow this path to study models' calibration when applied to different tasks. For this purpose, we design three tasks for each of the stages above. (1) To evaluate model calibration on common text generation, we use Causal Language Modeling (CLM) task, which is also the objective of pre-training stage.\n(2) To study model calibration on factuality, we designed a facts generation task where the models are asked to generate fact-related content. (3) To study model calibration on reasoning, we use multi-task language understanding task, where questions and possible options are provided and models are asked to select the most probable one.\nThrough extensive experiments and analysis, we arrive at the following findings." }, { "figure_ref": [], "heading": "For pretraining of LLMs:", "publication_ref": [], "table_ref": [], "text": "• Larger Parameter Scales : Improve models' calibration.\n• Longer Training Dynamics : Also benefit calibration accuracy.\nFor alignment of LLMs:\n• Instruction Tuning : Deteriorates models' calibration.\n• Synthetic Data : Exacerbates the harmful effect of instruction tuning.\n• Parameter-efficient Fine-tuning : Effective regularization for restraining calibration error.\n• RLHF : Help maintaining calibration accuracy.\nFor different tasks:\n• In pre-training: Improvement in calibration accuracy is more significant on fact generation task or language understanding tasks than language modeling task.\n• In alignment training: Calibration accuracy evolves consistently across different downstream tasks including fact generation, language understanding or vanilla language modeling.\nWe believe these conclusions as well as detailed experiments can take us a step further towards understanding large language models, especially the intrinsic mechanism of their calibration behaviour. Our experimental results also provide us with some possible solutions to improve calibration, including increasing model scales and employing parameter efficient tuning methods. Besides, diversity guided instruction data construction may also be very promising. Hopefully these findings can shed light on future works to construct more factual and trustworthy assistants." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b2", "b47", "b25", "b34", "b51", "b30", "b5", "b31", "b52", "b13", "b28", "b29", "b24", "b10", "b19", "b15", "b43", "b36", "b18", "b38", "b17", "b33", "b48", "b21", "b22" ], "table_ref": [], "text": "Aligned Large Language Models are large language models that are specially trained to follow human's intents or instructions. Large language models are proved to have the ability of completing some downstream tasks without any gradient updating (Brown et al., 2020). 
To better make use of such ability, many researches have found that instruction following models can be constructed by fine-tuning models with instruction-response pairs, which is called instruction tuning (Weller et al., 2020;Mishra et al., 2022;Sanh et al., 2022;Wei et al., 2022a;Xu et al., 2023b). While these models can understand human instructions and make reasonable responses, they often produce unexpected results like lies, made-up facts, biased or toxic texts and so on. To better align models with human intents, reinforcement learning with human feedback is introduced to the training of large language models (Ouyang et al., 2022). Though instruction tuning and RLHF can significantly improve the models' ability of interacting with humans, how they influence the calibration of large language models have not been researched on.\nConfidence calibration is a concerned problem for classification models. A large amount of works have studied the calibration of statistical machine learning systems and the methods to improve their calibration (DeGroot and Fienberg, 1983;Palmer et al., 2008;Yang and Thompson, 2010). Later, calibration of neural networks have also been researched on (Hendrycks and Gimpel, 2016;Nguyen and O'Connor, 2015;Nixon et al., 2019;Minderer et al., 2021). Guo et al. (2017) points out that modern neural networks are not as calibrated as their ancestors and proposes a temperature scaling methods to calibrate neural networks. In natural language processing field, calibration of transformer-based language models are evaluated among different tasks, including machine translation (Kumar and Sarawagi, 2019), QA (Jiang et al., 2021) and selective prediction (Varshney et al., 2022). Recently, large-scale generative language models are receiving growing attention, and some works have examined calibration of these models (Srivastava et al., 2022;Kuhn et al., 2023;Tian et al., 2023). There are also works improving calibration of large language models, for example, Xu et al. (2023a) propose kNN Prompting to effectively mitigate calibration errors in in-context learning. However, as mentioned before, these works either concentrate on vanilla language models or study black-box models. We study models calibration in their whole life cycles from pre-training to alignment training, where our main contributions Analysis of large language models. Understanding various aspects of LLMs through theoretical or empirical approaches have long been an important interests for NLP scholars. Many works have demonstrated the scaling law of LLMs in different scenarios w.r.t. model scales, data size and computational costs (Kaplan et al., 2020;Rae et al., 2022;Xia et al., 2023). Wei et al. (2022b) defines and reveals the emergent abilities of large language models. Liang et al. (2022) proposes a holistic evaluation framework named HELM to analyze large language models on their capabilities, limitations, and potential risks. Hallucination and factuality also draw a lot of attention (McKenna et al., 2023;Zheng et al., 2023), but they do not take a further step towards the intrinsic mechanism while merely explore the verification on the surface. Differently, this paper provides a formal and systematical analysis on calibration behaviour of LLMs and their alignment treatment." 
}, { "figure_ref": [ "fig_2" ], "heading": "Definitions", "publication_ref": [ "b10", "b10", "b5", "b27", "b10", "b6", "b11" ], "table_ref": [], "text": "In this section we formally define some basic concepts used in our work, including confidence calibration, the reliability diagram, and the expected calibration error. Confidence calibration is the main objective we study in this work, and the other two are the tools we use to evaluate model calibration. Our selection of tools follows previous work (Guo et al., 2017).\nConfidence Calibration. Consider a supervised multi-class classification scenario with input x, label $y \in \mathcal{Y} = \{1, 2, \ldots, K\}$, model prediction $y' \in \mathcal{Y}$, and confidence $p' \in [0, 1]$. A model is perfectly calibrated if\n$P(y' = y \mid p' = p) = p, \quad \forall p \in [0, 1]$\nfor any input x (Guo et al., 2017). In other words, the more confident a model is, the higher the chance should be that its prediction matches the ground truth. It should be noted that $P(y' = y \mid p' = p)$ cannot be computed exactly from a finite number of samples, so calibration is usually evaluated by statistical approximations.\nReliability Diagram is a visual evaluation of confidence calibration (DeGroot and Fienberg, 1983), which plots prediction accuracy as a function of confidence (e.g. Figure 2).\nTo evaluate the calibration of a model with a finite set of samples, we divide the confidence interval [0, 1] into M bins of equal length (1/M) and group model predictions into these bins according to their prediction confidence. Let $B_m$ be the set of indices of samples whose confidence falls into the interval $(\frac{m-1}{M}, \frac{m}{M}]$; then for each bin we can calculate the corresponding accuracy and average confidence as follows:\n$Acc(B_m) = \frac{1}{|B_m|} \sum_{i \in B_m} \mathbb{1}(\hat{y}_i = y_i), \quad Conf(B_m) = \frac{1}{|B_m|} \sum_{i \in B_m} \hat{p}_i$,\nwhere $\hat{y}_i$ and $y_i$ are the predicted class and the ground truth of the i-th sample, $\mathbb{1}$ is the indicator function which produces 1 if $\hat{y}_i = y_i$ and 0 otherwise, and $\hat{p}_i$ is the prediction confidence (probability) of the i-th sample.\nGiven $Acc(B_m)$ and $Conf(B_m)$, we can draw the reliability diagram for a model. For a perfectly calibrated model, we will have $Acc(B_m) = Conf(B_m)$ for all m, so its reliability diagram will be the line y = x. The nearer a curve is to this diagonal, the better the calibration it represents. Though perfect calibration is impossible, we normally hope a model is well calibrated. Note that since the reliability diagram does not show the number of samples per bin, it can be insufficient to represent the true calibration of a model when some bins contain only very few samples.\nExpected Calibration Error (ECE). As the reliability diagram is a qualitative evaluation which depicts model calibration in different confidence intervals, we also want a quantitative scalar metric that reflects the overall calibration level of a model. The Expected Calibration Error is such a quantitative measurement of calibration (Naeini et al., 2015).\nFor a set of N samples, we also divide the confidence interval into M bins and compute $Acc(B_m)$ and $Conf(B_m)$ in the same way as when drawing a reliability diagram. Then ECE is calculated as follows:\n$ECE = \sum_{m=1}^{M} \frac{|B_m|}{N} \left| Acc(B_m) - Conf(B_m) \right|$\nECE represents the confidence error averaged over samples, and a lower ECE means better calibration. We set M = 10 when measuring calibration with the tools above, following previous works (Guo et al., 2017;Desai and Durrett, 2020;He et al., 2023)."
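To make the two tools above concrete, the per-bin statistics behind a reliability diagram and the resulting ECE can be computed as in the following sketch, a direct re-implementation of the formulas with M equal-width bins (illustrative code, not the authors' implementation or any particular library's API).

```python
import numpy as np

def calibration_bins(confidences, correctness, num_bins: int = 10):
    """Per-bin (count, accuracy, average confidence) for a reliability diagram.

    confidences: predicted probabilities p_i in [0, 1]
    correctness: 0/1 indicators of whether the prediction matches the ground truth
    """
    confidences = np.asarray(confidences, dtype=float)
    correctness = np.asarray(correctness, dtype=float)
    edges = np.linspace(0.0, 1.0, num_bins + 1)
    stats = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)  # bins are ((m-1)/M, m/M]
        if mask.any():
            stats.append((int(mask.sum()), correctness[mask].mean(), confidences[mask].mean()))
        else:
            stats.append((0, None, None))
    return stats

def expected_calibration_error(confidences, correctness, num_bins: int = 10) -> float:
    """ECE = sum_m |B_m|/N * |Acc(B_m) - Conf(B_m)|."""
    n = len(confidences)
    ece = 0.0
    for count, acc, conf in calibration_bins(confidences, correctness, num_bins):
        if count:
            ece += (count / n) * abs(acc - conf)
    return ece
```

Applying these functions to the (confidence, correctness) pairs collected for each task yields exactly the kind of reliability diagrams and ECE values discussed in the following sections.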
}, { "figure_ref": [], "heading": "Calibration Evaluation Tasks and Data", "publication_ref": [ "b9", "b8", "b12", "b12" ], "table_ref": [], "text": "As mentioned before, we evaluate model calibration on three tasks considering language models' ability of understanding, factuality and reasoning. In this section, we in detail introduce for each task how the evaluation is conducted and the datasets chosen for evaluation.\nCausal Language Modeling is the task of predicting the next token for a given sequence, which is also the pre-training objective of causal language models. For a test sequence, we randomly sample a position in the sequence. Then this sequence is fed into models to generate a token corresonding to the position. If the predicted token is the same as the one in original sentence, we count it as a true positive. In such way we can get generation accuracy and confidence of test dataset, and then evaluate model calibration with reliability diagram and ECE metric. We use development and test set of the PILE dataset (Gao et al., 2020) in this task. The PILE dataset is a large-scale English text corpus which is frequetly used in the pre-training of large language models. Facts Generation is the task aimed at evaluating models' memory on factual knowledge. The task is mostly the same as causal language modeling in form , except that we utilize entity linking data, in which texts are labeled with entity spans. We let models generate the first token of entities. We only take the first token of entities into account as when the first token is correctly generated, there is high chance that the whole entity can also be recovered. In facts generation task, we use an enitity linking dataset T-REx (Elsahar et al., 2018), which includes entity-labeled texts extracted from Wikipedia pages. For T-REx and the PILE dataset, we randomly draw 100k samples as our evaluation set.\nMulti-task Language Understanding is a task where models are given questions across different fields with multiple answer options, which is designed for testing the understanding and reasoning ability of a language model. We mainly focus on questions with a single correct answer. Following MMLU benchmark (Hendrycks et al., 2021), we concatenate 5 in-context samples ahead of the questions and designed prompts to constrain models to respond with answer options (i.e. 'ABCD'). We choose MMLU benchmark (Hendrycks et al., 2021) as our evaluation data, which covers singlechoice questions in 57 subjects across STEM, the humanities, the social sciences and so on." }, { "figure_ref": [], "heading": "Calibration in Pre-training Stage", "publication_ref": [], "table_ref": [], "text": "In this section, we study the effect of parameter scales and training dynamics in pre-training stage to models' calibration." }, { "figure_ref": [], "heading": "Experimental Setups", "publication_ref": [ "b1", "b9" ], "table_ref": [], "text": "We choose Pythia as our base model (Biderman et al., 2023). Pythia is a suite of transformer-based, auto-regressive language models designed for scientific research. It contains 8 models whose scales range from 70m to 12B parameters and for each of the scale it provides 154 checkpoints including 143 checkpoints saved every 1,000 training steps (i.e. 1 epoch) and 11 checkpoints trained for less than 1,000 steps. All of these models are trained on exactly the same data-the PILE (Gao et al., 2020) dataset in the same order. For parameter scale study, we experiment on models with all 8 scales. 
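Concretely, both the parameter-scale sweep above and the training-dynamics sweep described next reduce to loading a particular Pythia checkpoint and reading off the probability the model assigns to its predicted next token, which provides the (correctness, confidence) pairs used for calibration measurement. A minimal sketch with the Hugging Face checkpoints is shown below; the model identifier and step revision follow the naming documented by the Pythia project and should be treated as assumptions to verify for a given setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Scale is chosen via the model name; the training step via the branch revision.
model_name, revision = "EleutherAI/pythia-1.4b", "step32000"
tok = AutoTokenizer.from_pretrained(model_name, revision=revision)
model = AutoModelForCausalLM.from_pretrained(model_name, revision=revision)
model.eval()

def next_token_prediction(prefix: str, gold_next_token_id: int):
    """Return (is_correct, confidence) for the CLM / facts-generation evaluation:
    feed the prefix, take the argmax next token and its softmax probability."""
    ids = tok(prefix, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]   # logits for the next position
    probs = torch.softmax(logits, dim=-1)
    pred_id = int(torch.argmax(probs))
    return pred_id == gold_next_token_id, float(probs[pred_id])
```

For the multiple-choice MMLU setting, the same probabilities can be read off after restricting attention to the tokens corresponding to the answer options.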
As for training dynamics, we choose Pythia-1B4 considering time and computational cost, and use the checkpoints at steps 2^n * 1,000 (n = 1, 2, ...), up to step 143,000, for our study. We also include the checkpoints at step 256 and step 512 in our experiments to observe the behavior of under-fitted models." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Parameter Scales", "publication_ref": [ "b53" ], "table_ref": [], "text": "Figure 3 shows the experimental results for different parameter scales on the three tasks. Generally, larger models produce better-calibrated results, while the strength of this effect varies among tasks. We find that models at all parameter scales can produce well-calibrated predictions on the CLM task, with ECE lower than 0.1. Also, parameter scales only mildly affect model calibration on the CLM task, where the difference between the smallest and largest models is minor. This might be because the CLM task is the same as the pre-training objective, where the large-scale and diverse corpus makes it hard for models to be overconfident when generating common texts. On the facts generation task, model performance on both calibration and accuracy shows a stronger positive correlation with parameter scales. Results on MMLU (Figure 3-e and 3-f) appear rather noisy, but we can still observe some meaningful patterns. All models perform poorly on the language understanding task, with accuracies only slightly better than random choice. As Pythia models have not been trained to follow instructions, it is hard for them to understand the knowledge-demanding questions in the MMLU dataset. However, while the accuracies of all models are similar, ECE decreases monotonically as the model scale increases. Moreover, we find that as the parameter scale increases, the confidence distribution of model outputs gradually shrinks to a smaller and lower interval (see Appendix B), which might indicate that although larger models still cannot solve these problems, they are more aware of their own capability than small ones. To further verify our conclusions, we conduct the same experiments on 4 more models: LLaMA (Touvron et al., 2023a), LLaMA-2 (Touvron et al., 2023b), FLAN-T5 (Chung et al., 2022) and OPT (Zhang et al., 2022). Results are presented in Appendix C.1. We can see that our conclusions hold most of the time: LLaMA-2, FLAN-T5 and OPT show monotonic improvement with increasing model scale, while there are outliers in the results of LLaMA. This suggests that factors other than scale may influence the trend, and that LLMs should be examined using more continuously sampled checkpoints to reach a more robust conclusion, which we leave for future work." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Training Dynamics", "publication_ref": [], "table_ref": [], "text": "On the whole, the effect of training dynamics follows the same pattern as that of parameter scales, but a few unique observations can be pointed out (see Figure 4). On the CLM task, we can observe an apparent improvement in both accuracy and calibration in the very early stage of pre-training. However, although accuracy keeps growing as training goes on, ECE stabilizes at a low level for the rest of the training process, which means that under-fitted models can also be well-calibrated. Training dynamics also show a stronger impact on the facts generation task. It can be seen that models trained for less than 1 epoch perform extremely poorly. 
In this stage, the models are still close to their randomly initialized parameters and cannot generate a reasonable probability distribution, so the accuracy is almost zero in all confidence intervals. Results on the MMLU dataset are similar to those for parameter scales, with accuracy barely growing while calibration keeps improving. Note that we observe an increase in ECE at step 143,000 (see Figure 4-f), which may be a sign of over-fitting. As Pythia does not provide checkpoints for further steps, we keep this as a simple hypothesis." }, { "figure_ref": [ "fig_5" ], "heading": "Calibration in Alignment Stage", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Alignment training is divided into two sub-stages, instruction tuning and RLHF. In this section we study how model calibration changes during these processes. As the models are fine-tuned on instructions in this stage, we add an instruction prompt for all three tasks (see Table 1). Figure 5 shows the ECE level of the fine-tuned models on the three tasks when using different alignment training settings. We also report accuracy results in Appendix C.2, where the accuracy performance generally remains stable and only fluctuates with different datasets and training methods." }, { "figure_ref": [], "heading": "Tasks Prompts", "publication_ref": [], "table_ref": [], "text": "Causal Language Modeling: 'Finish this sentence:'\nFacts Generation: 'Finish this Wikipedia description:'\nMulti-task Language Understanding: 'The following are multiple choice questions (with answers) about {...}.' " }, { "figure_ref": [], "heading": "Experimental Setups", "publication_ref": [ "b37", "b45", "b32" ], "table_ref": [], "text": "In the alignment training stage, we use LLaMA-7B as our base model (Touvron et al., 2023a). Complete training hyper-parameters can be found in Appendix A.\nInstruction Tuning. We leverage open-source GPT-generated and human-labeled data in instruction tuning. Alpaca (Taori et al., 2023) contains 52k pairs of instructions and responses generated in the style of self-instruct (Wang et al., 2022) using an OpenAI GPT model. OpenAssistant Conversations1, which we denote as OA, is a human-labeled assistant-style conversation corpus. We extract all single-turn conversations written in English from OA, resulting in 11k pairs of instructions. For fairness, we sample the datasets to the same size when comparing the effect of instruction tuning. We follow the setups of Stanford-Alpaca2 and Alpaca-LoRA3 for direct fine-tuning and LoRA training respectively. For each experiment group, we train the model for 3 epochs.\nRLHF training consists of two parts, reward model training and reinforcement learning. Training a reward model requires ranked response data, where responses from different language models to the same instructions are collected and ranked by human annotators or another language model. We use open-source LM-ranked response data (Peng et al., 2023), where GPT-4, GPT-3.5 and OPT-IML generate responses to Alpaca instructions and these responses are ranked by GPT-4. We use LLaMA as our reward model and conduct PPO training on top of the model previously instruction-tuned with Alpaca data. We also use LoRA in the RLHF training process to lower computational costs. We perform RLHF training with the Huggingface TRL Library4 ." }, { "figure_ref": [ "fig_5", "fig_6" ], "heading": "Instruction Tuning", "publication_ref": [ "b35", "b42", "b11" ], "table_ref": [], "text": "We find that instruction tuning generally weakens the calibration of language models, and the extent of this impact changes with different instruction tuning settings.\nTraining Data. 
As can be seen in Figure 5, direct fine-tuning with Alpaca does the most harm to calibration, while models fine-tuned with the OA dataset perform better. Note that on the MMLU dataset, the model trained with OA is even better calibrated than LLaMA in the first epoch, which might be because of the capability gained in following instructions. However, the ECE level rapidly becomes worse in later epochs, which suggests that the degeneration of calibration is a result of the fitting process and that OA is simply less likely to cause such degeneration. We presume this behavior is related to the diversity of the datasets. This presumption is mostly based on the intuition that samples in the OA dataset display strong personal or emotional characteristics, while instructions in Alpaca are much more homogeneous. To look further into the difference between Alpaca and OA, we compare the semantic diversity of their responses. We use MPNET (Song et al., 2020) to extract sentence features of the responses in both datasets and visualize these features with t-SNE (Van der Maaten and Hinton, 2008); see Figure 6. The results show that the semantic features of the OA dataset are more evenly distributed, while those of Alpaca tend to be dense and clustered, which means the former is more diverse in semantics. We attribute this difference in diversity to how the datasets were constructed. Alpaca is a synthetic corpus generated in a self-instruct way, where a small set of human-written instruction data containing 175 seed tasks is fed into GPT-3 and augmented to a scale of 52k. In this case, Alpaca contains a lot of instruction data with similar format and content for each seed task, and fine-tuning on such a clustered dataset consequently leads to worse calibration. On the other hand, OA was created by crowd-sourcing, where thousands of volunteers were asked to submit their own instruction data, which makes the dataset more diverse in tasks and text styles and thus less harmful to model calibration.\nTraining Methods. Parameter-efficient tuning is a family of training methods that keep the pre-trained weights unchanged and only train a small set of extra parameters. Although these methods were originally designed to train large models with fewer resources, they may also be able to improve calibration by reducing catastrophic forgetting (He et al., 2023). We compare the calibration of models trained on the Alpaca and OA datasets with full fine-tuning and with LoRA tuning. The results show that in all three tasks, models trained with LoRA are better calibrated than those that are directly fine-tuned. Besides, it can be noticed that the deterioration in calibration is slight for models trained with LoRA (often at a level of 0.001) as training epochs increase, while the fully fine-tuned model becomes visibly worse. These observations show that LoRA can mitigate the calibration degeneration in the instruction tuning process. On the MMLU dataset we observe that the behavior of models trained on Alpaca with LoRA is similar to that of models trained with OA, where calibration improves compared to LLaMA in the first epoch. This may also indicate that LoRA is helpful in reducing the harmful effect of instruction tuning and improving model calibration.\nTraining Dynamics. In almost all instruction tuning experimental groups, models trained for more steps show worse calibration, which indicates that model calibration is severely affected by the small instruction dataset. 
Note that we obtained the opposite observation (model calibration improves with longer training) in the pre-training stage, where the model is trained on large-scale and diverse corpora; we therefore anticipate that improving the scale and diversity of instruction data can also help improve calibration." }, { "figure_ref": [ "fig_5" ], "heading": "RLHF", "publication_ref": [], "table_ref": [], "text": "The last groups of bars in each chart of Figure 5 show the ECE level of models trained with RLHF for three epochs. We can see that, compared to the instruction-tuned model, i.e. the 3rd-epoch model trained with Alpaca, there is no significant degeneration in ECE after RLHF training. Moreover, model calibration does not deteriorate as RLHF lasts for more epochs. This indicates that when applied to models that have already been trained with instruction data, RLHF might not do further harm to model calibration." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b44" ], "table_ref": [], "text": "Confidence calibration is helpful for building honest large language models in two ways. Firstly, confidence calibration is closely related to the uncertainty of language models, which is leveraged in many approaches, such as self-consistency, to improve model performance (Wang et al., 2023). Studying and improving model calibration will provide further evidence for these methods and inspire more uncertainty-based techniques." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In this work we systematically study the calibration of aligned large language models. We designed " }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "There are two main limitations in this work. The first is that we cannot carry out fine-grained experiments on larger and better-performing models like LLaMA and LLaMA-2, as they only provide a limited number of scale variants (e.g. 7B/30B/70B for LLaMA-2) and do not provide checkpoints for different training dynamics. More detailed and rigorous conclusions could be drawn if finer-grained model variants were available. The second is that our observations and conclusions can be explored further, for example by relating them to the mathematical theory of confidence calibration and proving them theoretically. We leave such in-depth exploration for future work." }, { "figure_ref": [], "heading": "A Hyper-parameters", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We list all used hyper-parameters in Table 2 " }, { "figure_ref": [ "fig_7" ], "heading": "B MMLU Confidence Distribution", "publication_ref": [], "table_ref": [], "text": "Figure 7 shows the confidence distribution of the outputs of Pythia models from 70M to 12B. As explained in Section 5.2, the ranges of the distributions tend to become smaller for larger models. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to sincerely thank all the reviewers for their valuable advice on improving this work. This research is supported by the National Science Fund for Excellent Young Scholars under Grant 62222212 and the General Program of the National Natural Science Foundation of China under Grant 62376033." 
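The MMLU evaluation above and the confidence distributions discussed in Appendix B both rely on a per-question confidence score, but the extraction procedure is not spelled out in the text. The sketch below is therefore only one plausible implementation, which takes the confidence as the softmax probability of the chosen answer letter renormalized over the four option tokens; the model name and helper function are illustrative assumptions, not the authors' code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "EleutherAI/pythia-1.4b"   # illustrative; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

OPTIONS = ["A", "B", "C", "D"]
# Token ids of the answer letters (with a leading space, as they typically follow "Answer:")
option_ids = [tokenizer.encode(" " + o, add_special_tokens=False)[0] for o in OPTIONS]

@torch.no_grad()
def answer_with_confidence(prompt: str):
    """Return (predicted option, confidence) for a 5-shot MMLU prompt ending in 'Answer:'."""
    inputs = tokenizer(prompt, return_tensors="pt")
    next_token_logits = model(**inputs).logits[0, -1]      # logits for the next token
    probs = torch.softmax(next_token_logits[option_ids], dim=-1)
    best = int(torch.argmax(probs))
    return OPTIONS[best], float(probs[best])
```

The (answer, confidence) pairs collected this way can be fed directly into the ECE routine sketched after the Definitions section.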
}, { "figure_ref": [], "heading": "C Supplementary Experimental results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C.1 Calibration of Other Models", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C.2 Accuracy Results in Alignment Stage", "publication_ref": [], "table_ref": [], "text": "" } ]
As large language models attract increasing attention and find widespread application, challenges concerning their reliability arise at the same time. Confidence calibration, an effective analysis method for gauging the reliability of deep models, serves as a crucial tool for assessing and improving it. However, such investigation has been comparatively underexplored. In this work, we conduct a systematic examination of the calibration of aligned language models throughout the entire construction process, including pre-training and alignment training. At each stage, we investigate how different training settings, such as parameter scales and training data, affect model calibration. To thoroughly assess model calibration, we evaluate models on the three aspects of greatest concern: generation, factuality and understanding. Our work sheds light on whether popular LLMs are well-calibrated and how the training process influences model calibration.
On the Calibration of Large Language Models and Alignment
[ { "figure_caption": "Figure 1 :1Figure 1: Scope of investigations in this paper.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": ". Following the training process of aligned language models, we study model calibration in pre-training stage and alignment training stage respectively. In each stage, we reveal how model calibration changes when using different training settings. For pre-training stage, we examine the effect of parameter scale and training dynamics (steps). For alignment training stage, we study the effect of instruction tuning and RLHF, in which instruction tuning is further scrutinized by changing instruction datasets, training methods and also training dynamics.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Reliability diagram for a Pythia-70m model.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Model calibration of different parameter scales.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Model calibration of different training dynamics.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Model calibration using different alignment training settings.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Sentence feature distributions of Alpaca and OA dataset (Each point is a response sentence).", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Confidence distribution of model outputs of different scales on MMLU Dataset.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Prompts of different tasks in Alignment Training Stage accuracy performance generally remains stable and only fluctuates with different datasets and training methods.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "thorough experiments evaluating the model calibration with different training settings and reveal how model calibration is affected by pre-training and alignment training process. In pre-training, we find that model calibration improves as parameter scales and training dynamics increases. In alignment training stage, experimental results show that instruction tuning damages model calibration significantly and ill-distributed synthetic data does more harm. Such harm will increase when finetuning process lasts longer while can be remediated by using parameter efficient training methods like LoRA. 
In the mean time, we surprisingly find that RLHF has little impact on calibration of instruction tuned models.", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "and Table3. Hyper-parameters of instruction tuning.", "figure_data": "ParametersValuesDirect Fine-tuneParametersValuesnums of gpu4Reward Modelepochs3nums of gpu4batch size per gpu4epochs2gradient accumulation8batch size per gpu8total batch size4 * 4 * 8 = 128gradient accumulation1max sequence length2048total batch size4 * 8 * 1 = 32learning rate2e-5max sequence length2048warmup ratio0.03learning rate2e-5lr schedulercosinelora r8LoRAlora alpha32nums of gpu4lora dropout0.1epochs3lr schedulercosinebatch size per gpu16RLHFgradient accumulation2nums of gpu4total batch size4 * 16 * 2 = 128epochs3max sequence length2048batch size8learning rate3e-4gradientaccumulation8warmup steps100output max length128lora r8learning rate1.4e-5lora alpha32lora r16lora dropout0.1lora alpha32lora target modules[q_proj, v_proj]lora dropout0.05lr schedulerlinear", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Hyper-parameters of RLHF.", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" } ]
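As a concrete reading of the LoRA rows in Tables 2 and 3, the sketch below maps those hyper-parameters onto Hugging Face PEFT configurations. It is an illustration of the listed values rather than the authors' actual training script; in particular, Table 2 does not list target modules for the instruction-tuning stage, so borrowing q_proj/v_proj from the RLHF column there is our assumption.

```python
from peft import LoraConfig, TaskType

# LoRA block of Table 2 (instruction tuning): r=8, alpha=32, dropout=0.1
instruction_tuning_lora = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],  # assumption: not listed in Table 2
)

# LoRA block of Table 3 (RLHF): r=16, alpha=32, dropout=0.05, target modules [q_proj, v_proj]
rlhf_lora = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
```

Either configuration would be attached to the LLaMA-7B base model with peft's get_peft_model before running the corresponding training loop.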
Chiwei Zhu; Benfeng Xu; Quan Wang; Yongdong Zhang; Zhendong Mao
[ { "authors": "Yuntao Bai; Andy Jones; Kamal Ndousse; Amanda Askell; Anna Chen; Nova Dassarma; Dawn Drain; Stanislav Fort; Deep Ganguli; Tom Henighan", "journal": "", "ref_id": "b0", "title": "Training a helpful and harmless assistant with reinforcement learning from human feedback", "year": "2022" }, { "authors": "Stella Biderman; Hailey Schoelkopf; Quentin Anthony; Herbie Bradley; O' Kyle; Eric Brien; Mohammad Hallahan; Shivanshu Aflah Khan; Purohit; Edward Usvsn Sai Prashanth; Aviya Raff; Lintang Skowron; Oskar Sutawika; Van Der Wal", "journal": "", "ref_id": "b1", "title": "Pythia: A suite for analyzing large language models across training and scaling", "year": "2023" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b3", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Yunxuan Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma; Albert Webson; Shane Shixiang; Zhuyun Gu; Mirac Dai; Xinyun Suzgun; Aakanksha Chen; Alex Chowdhery; Marie Castro-Ros; Kevin Pellat; Dasha Robinson; Sharan Valter; Gaurav Narang; Adams Mishra; Vincent Yu; Yanping Zhao; Andrew Huang; Hongkun Dai; Slav Yu; Ed H Petrov; Jeff Chi; Jacob Dean; Adam Devlin; Denny Roberts; Quoc V Zhou; Jason Le; Wei", "journal": "", "ref_id": "b4", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "H Morris; Stephen E Degroot; Fienberg", "journal": "Journal of the Royal Statistical Society: Series D (The Statistician)", "ref_id": "b5", "title": "The comparison and evaluation of forecasters", "year": "1983" }, { "authors": "Shrey Desai; Greg Durrett", "journal": "", "ref_id": "b6", "title": "Calibration of pre-trained transformers", "year": "2020" }, { "authors": "Yilun Du; Shuang Li; Antonio Torralba; Joshua B Tenenbaum; Igor Mordatch", "journal": "", "ref_id": "b7", "title": "Improving factuality and reasoning in language models through multiagent debate", "year": "2023" }, { "authors": "Hady Elsahar; Pavlos Vougiouklis; Arslen Remaci; Christophe Gravier; Jonathon Hare; Frederique Laforest; Elena Simperl", "journal": "European Language Resources Association (ELRA", "ref_id": "b8", "title": "T-REx: A large scale alignment of natural language with knowledge base triples", "year": "2018" }, { "authors": "Leo Gao; Stella Biderman; Sid Black; Laurence Golding; Travis Hoppe; Charles Foster; Jason Phang; Horace He; Anish Thite; Noa Nabeshima; Shawn Presser; Connor Leahy", "journal": "", "ref_id": "b9", "title": "The pile: An 800gb dataset of diverse text for language modeling", "year": "2020" }, { "authors": "Chuan Guo; Geoff Pleiss; Yu Sun; Kilian Q Weinberger", "journal": "PMLR", "ref_id": "b10", "title": "On calibration of modern neural networks", "year": "2017" }, { "authors": "Guande He; Jianfei Chen; Jun Zhu", "journal": "", "ref_id": "b11", "title": "Preserving pre-trained features helps calibrate fine-tuned language models", "year": "2023" }, { "authors": "Dan Hendrycks; Collin Burns; Steven Basart; Andy Zou; 
Mantas Mazeika; Dawn Song; Jacob Steinhardt", "journal": "", "ref_id": "b12", "title": "Measuring massive multitask language understanding", "year": "2021" }, { "authors": "Dan Hendrycks; Kevin Gimpel", "journal": "", "ref_id": "b13", "title": "A baseline for detecting misclassified and out-of-distribution examples in neural networks", "year": "2016" }, { "authors": "Quzhe Huang; Mingxu Tao; Zhenwei An; Chen Zhang; Cong Jiang; Zhibin Chen; Zirui Wu; Yansong Feng", "journal": "", "ref_id": "b14", "title": "Lawyer llama technical report", "year": "2023" }, { "authors": "Zhengbao Jiang; Jun Araki; Haibo Ding; Graham Neubig", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b15", "title": "How can we know when language models know? on the calibration of language models for question answering", "year": "2021" }, { "authors": "Saurav Kadavath; Tom Conerly; Amanda Askell; Tom Henighan; Dawn Drain; Ethan Perez; Nicholas Schiefer; Zac Hatfield-Dodds; Nova Dassarma; Eli Tran-Johnson; Scott Johnston; Sheer El-Showk; Andy Jones; Nelson Elhage; Tristan Hume; Anna Chen; Yuntao Bai; Sam Bowman; Stanislav Fort; Deep Ganguli; Danny Hernandez; Josh Jacobson; Jackson Kernion; Shauna Kravec; Liane Lovitt; Kamal Ndousse; Catherine Olsson; Sam Ringer; Dario Amodei; Tom Brown; Jack Clark; Nicholas Joseph; Ben Mann; Sam Mccandlish; Chris Olah; Jared Kaplan", "journal": "", "ref_id": "b16", "title": "Language models (mostly) know what they know", "year": "2022" }, { "authors": "Jared Kaplan; Sam Mccandlish; Tom Henighan; Tom B Brown; Benjamin Chess; Rewon Child; Scott Gray; Alec Radford; Jeffrey Wu; Dario Amodei", "journal": "", "ref_id": "b17", "title": "Scaling laws for neural language models", "year": "2020" }, { "authors": "Lorenz Kuhn; Yarin Gal; Sebastian Farquhar", "journal": "", "ref_id": "b18", "title": "Semantic uncertainty: Linguistic invariances for uncertainty estimation in natural language generation", "year": "2023" }, { "authors": "Aviral Kumar; Sunita Sarawagi", "journal": "", "ref_id": "b19", "title": "Calibration of encoder decoder models for neural machine translation", "year": "2019" }, { "authors": "Tiffany H Kung; Morgan Cheatham; Arielle Medenilla; Czarina Sillos; Lorie De Leon; Camille Elepaño; Maria Madriaga; Rimel Aggabao; Giezel Diaz-Candido; James Maningo", "journal": "PLoS digital health", "ref_id": "b20", "title": "Performance of chatgpt on usmle: Potential for ai-assisted medical education using large language models", "year": "2023" }, { "authors": "Percy Liang; Rishi Bommasani; Tony Lee; Dimitris Tsipras; Dilara Soylu; Michihiro Yasunaga; Yian Zhang; Deepak Narayanan; Yuhuai Wu; Ananya Kumar; Benjamin Newman; Binhang Yuan; Bobby Yan; Ce Zhang; Christian Cosgrove; D ; Christopher; Diana Christopher Ré; Drew A Acosta-Navas; Eric Hudson; Esin Zelikman; Faisal Durmus; Frieda Ladhak; Hongyu Rong; Huaxiu Ren; Jue Yao; Keshav Wang; Laurel Santhanam; Lucia Orr; Mert Zheng; Mirac Yuksekgonul; Nathan Suzgun; Neel Kim; Niladri Guha; Omar Chatterji; Peter Khattab; Qian Henderson; Ryan Huang; Sang Chi; Shibani Michael Xie; Surya Santurkar; Tatsunori Ganguli; Thomas Hashimoto; Tianyi Icard; Vishrav Zhang; William Chaudhary; Xuechen Wang; Yifan Li; Yuhui Mai; Yuta Zhang; Koreeda", "journal": "", "ref_id": "b21", "title": "Holistic evaluation of language models", "year": "2022" }, { "authors": "Nick Mckenna; Tianyi Li; Liang Cheng; Mohammad Javad Hosseini; Mark Johnson; Mark Steedman", "journal": "", "ref_id": "b22", "title": "Sources of hallucination by 
large language models on inference tasks", "year": "2023" }, { "authors": "G Robert; Miller", "journal": "Springer", "ref_id": "b23", "title": "Statistical prediction by discriminant analysis", "year": "1962" }, { "authors": "Matthias Minderer; Josip Djolonga; Rob Romijnders; Frances Hubis; Xiaohua Zhai; Neil Houlsby; Dustin Tran; Mario Lucic", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b24", "title": "Revisiting the calibration of modern neural networks", "year": "2021" }, { "authors": "Swaroop Mishra; Daniel Khashabi; Chitta Baral; Hannaneh Hajishirzi", "journal": "", "ref_id": "b25", "title": "Cross-task generalization via natural language crowdsourcing instructions", "year": "2022" }, { "authors": "H Allan; Murphy", "journal": "Journal of Applied Meteorology and Climatology", "ref_id": "b26", "title": "A new vector partition of the probability score", "year": "1973" }, { "authors": "Gregory Mahdi Pakdaman Naeini; Milos Cooper; Hauskrecht", "journal": "", "ref_id": "b27", "title": "Obtaining well calibrated probabilities using bayesian binning", "year": "2015" }, { "authors": "Khanh Nguyen; Brendan O' Connor", "journal": "", "ref_id": "b28", "title": "Posterior calibration and exploratory analysis for natural language processing models", "year": "2015" }, { "authors": "Jeremy Nixon; Michael W Dusenberry; Linchuan Zhang; Ghassen Jerfel; Dustin Tran", "journal": "OpenAI", "ref_id": "b29", "title": "Measuring calibration in deep learning", "year": "2019" }, { "authors": "Long Ouyang; Jeff Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray; John Schulman; Jacob Hilton; Fraser Kelton; Luke Miller; Maddie Simens; Amanda Askell; Peter Welinder; Paul Christiano; Jan Leike; Ryan Lowe", "journal": "", "ref_id": "b30", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "F J Palmer; Antje Doblas-Reyes; Weisheimer; Rodwell", "journal": "Bulletin of the American Meteorological Society", "ref_id": "b31", "title": "Toward seamless prediction: Calibration of climate change projections using seasonal forecasts", "year": "2008" }, { "authors": "Baolin Peng; Chunyuan Li; Pengcheng He; Michel Galley; Jianfeng Gao", "journal": "", "ref_id": "b32", "title": "Instruction tuning with gpt-4", "year": "2023" }, { "authors": "Jack W Rae; Sebastian Borgeaud; Trevor Cai; Katie Millican; Jordan Hoffmann; Francis Song; John Aslanides; Sarah Henderson; Roman Ring; Susannah Young; Eliza Rutherford; Tom Hennigan; Jacob Menick; Albin Cassirer; Richard Powell; George Van Den Driessche; Lisa Anne Hendricks; Maribeth Rauh; Po-Sen Huang; Amelia Glaese; Johannes Welbl; Sumanth Dathathri; Saffron Huang; Jonathan Uesato; John Mellor; Irina Higgins; Antonia Creswell; Nat Mcaleese; Amy Wu; Erich Elsen; Siddhant Jayakumar; Elena Buchatskaya; David Budden; Esme Sutherland; Karen Simonyan; Michela Paganini; Laurent Sifre; Lena Martens; Lorraine Xiang; Adhiguna Li; Aida Kuncoro; Elena Nematzadeh; Domenic Gribovskaya; Angeliki Donato; Arthur Lazaridou; Jean-Baptiste Mensch; Maria Lespiau; Nikolai Tsimpoukelli; Doug Grigorev; Thibault Fritz; Mantas Sottiaux; Toby Pajarskas; Zhitao Pohlen; Daniel Gong; Cyprien Toyama; Yujia De Masson D'autume; Tayfun Li; Vladimir Terzi; Igor Mikulik; Aidan Babuschkin; Diego Clark; De Las; Aurelia Casas; Chris Guy; James Jones; Matthew Bradbury; Blake Johnson; Laura Hechtman; Iason Weidinger; William Gabriel; Ed Isaac; Simon Lockhart; Laura 
Osindero; Chris Rimell; Oriol Dyer; Kareem Vinyals; Jeff Ayoub; Lorrayne Stanway; Demis Bennett; Koray Hassabis; Geoffrey Kavukcuoglu; Irving", "journal": "", "ref_id": "b33", "title": "Scaling language models: Methods, analysis & insights from training gopher", "year": "2022" }, { "authors": "Victor Sanh; Albert Webson; Colin Raffel; Stephen H Bach; Lintang Sutawika; Zaid Alyafeai; Antoine Chaffin; Arnaud Stiegler; Teven Le Scao; Arun Raja; Manan Dey; M Saiful Bari; Canwen Xu; Urmish Thakker; Shanya Sharma Sharma; Eliza Szczechla; Taewoon Kim; Gunjan Chhablani; Nihal Nayak; Debajyoti Datta; Jonathan Chang; Mike Tian-Jian; Han Jiang; Matteo Wang; Sheng Manica; Zheng Xin Shen; Harshit Yong; Rachel Pandey; Thomas Bawden; Trishala Wang; Jos Neeraj; Abheesht Rozen; Andrea Sharma; Thibault Santilli; Jason Fevry; Alan Fries; Ryan Teehan; Tali Bers; Stella Biderman; Leo Gao; Thomas Wolf; Alexander M Rush", "journal": "", "ref_id": "b34", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2022" }, { "authors": "Kaitao Song; Xu Tan; Tao Qin; Jianfeng Lu; Tie-Yan Liu", "journal": "", "ref_id": "b35", "title": "Mpnet: Masked and permuted pretraining for language understanding", "year": "2020" }, { "authors": "Aarohi Srivastava; Abhinav Rastogi; Abhishek Rao; Abu Awal; Md Shoeb; Abubakar Abid; Adam Fisch; Adam R Brown; Adam Santoro; Aditya Gupta; Adrià Garriga-Alonso", "journal": "", "ref_id": "b36", "title": "Beyond the imitation game: Quantifying and extrapolating the capabilities of language models", "year": "2022" }, { "authors": "Rohan Taori; Ishaan Gulrajani; Tianyi Zhang; Yann Dubois; Xuechen Li; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "", "ref_id": "b37", "title": "Stanford alpaca: An instruction-following llama model", "year": "2023" }, { "authors": "Katherine Tian; Eric Mitchell; Allan Zhou; Archit Sharma; Rafael Rafailov; Huaxiu Yao; Chelsea Finn; Christopher D Manning", "journal": "", "ref_id": "b38", "title": "Just ask for calibration: Strategies for eliciting calibrated confidence scores from language models fine-tuned with human feedback", "year": "2023" }, { "authors": "Ran Tian; Shashi Narayan; Thibault Sellam; Ankur P Parikh", "journal": "", "ref_id": "b39", "title": "Sticking to the facts: Confident decoding for faithful data-to-text generation", "year": "2020" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurelien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b40", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Hugo Touvron; Louis Martin; Kevin Stone; Peter Albert; Amjad Almahairi; Yasmine Babaei; Nikolay Bashlykov; Soumya Batra; Prajjwal Bhargava; Shruti Bhosale; Dan Bikel; Lukas Blecher; Cristian Canton Ferrer; Moya Chen; Guillem Cucurull; David Esiobu; Jude Fernandes; Jeremy Fu; Wenyin Fu; Brian Fuller; Cynthia Gao; Vedanuj Goswami; Naman Goyal; Anthony Hartshorn; Saghar Hosseini; Rui Hou; Hakan Inan; Marcin Kardas; Viktor Kerkez; Madian Khabsa; Isabel Kloumann; Artem Korenev; Punit Singh Koura; Marie-Anne Lachaux; Thibaut Lavril; Jenya Lee; Diana Liskovich; Yinghai Lu; Yuning Mao; Xavier Martinet; Todor Mihaylov; Pushkar Mishra; Igor Molybog; Yixin Nie; Andrew Poulton; Jeremy Reizenstein; Rashi Rungta; Kalyan Saladi; Alan Schelten; Ruan Silva; Eric Michael Smith; Ranjan Subramanian; Ellen Xiaoqing; 
Binh Tan; Ross Tang; Adina Taylor; Jian Williams; Puxin Xiang Kuan; Zheng Xu; Iliyan Yan; Yuchen Zarov; Angela Zhang; Melanie Fan; Sharan Kambadur; Aurelien Narang; Robert Rodriguez; Sergey Stojnic; Thomas Edunov; Scialom", "journal": "", "ref_id": "b41", "title": "Llama 2: Open foundation and fine-tuned chat models", "year": "2023" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of machine learning research", "ref_id": "b42", "title": "Visualizing data using t-sne", "year": "2008" }, { "authors": "Neeraj Varshney; Swaroop Mishra; Chitta Baral", "journal": "", "ref_id": "b43", "title": "Investigating selective prediction approaches across several tasks in iid, ood, and adversarial settings", "year": "2022" }, { "authors": "Xuezhi Wang; Jason Wei; Dale Schuurmans; Quoc Le; Ed Chi; Sharan Narang; Aakanksha Chowdhery; Denny Zhou", "journal": "", "ref_id": "b44", "title": "Self-consistency improves chain of thought reasoning in language models", "year": "2023" }, { "authors": "Yizhong Wang; Yeganeh Kordi; Swaroop Mishra; Alisa Liu; Noah A Smith; Daniel Khashabi; Hannaneh Hajishirzi", "journal": "", "ref_id": "b45", "title": "Self-instruct: Aligning language model with self generated instructions", "year": "2022" }, { "authors": "Jason Wei; Maarten Bosma; Y Vincent; Kelvin Zhao; Adams Wei Guu; Brian Yu; Nan Lester; Andrew M Du; Quoc V Dai; Le", "journal": "", "ref_id": "b46", "title": "Finetuned language models are zero-shot learners", "year": "2022" }, { "authors": "Jason Wei; Yi Tay; Rishi Bommasani; Colin Raffel; Barret Zoph; Sebastian Borgeaud; Dani Yogatama; Maarten Bosma; Denny Zhou; Donald Metzler; Ed H Chi; Tatsunori Hashimoto; Oriol Vinyals; Percy Liang; Jeff Dean; William Fedus ; Orion; Nicholas Weller; Matt Lourie; Matthew E Gardner; Peters", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Emergent abilities of large language models", "year": "2020" }, { "authors": "Mengzhou Xia; Mikel Artetxe; Chunting Zhou; Xi Victoria Lin; Ramakanth Pasunuru; Danqi Chen; Luke Zettlemoyer; Ves Stoyanov", "journal": "", "ref_id": "b48", "title": "Training trajectories of language models across scales", "year": "2023" }, { "authors": "Yijun Xiao; William Yang; Wang ", "journal": "", "ref_id": "b49", "title": "On hallucination and predictive uncertainty in conditional language generation", "year": "2021" }, { "authors": "Benfeng Xu; Quan Wang; Zhendong Mao; Yajuan Lyu; Qiaoqiao She; Yongdong Zhang", "journal": "", "ref_id": "b50", "title": "k$NN prompting: Beyond-context learning with calibrationfree nearest neighbor inference", "year": "2023" }, { "authors": "Benfeng Xu; An Yang; Junyang Lin; Quan Wang; Chang Zhou; Yongdong Zhang; Zhendong Mao", "journal": "", "ref_id": "b51", "title": "Expertprompting: Instructing large language models to be distinguished experts", "year": "2023" }, { "authors": "Huiqin Yang; Carl Thompson", "journal": "Journal of Advanced Nursing", "ref_id": "b52", "title": "Nurses' risk assessment judgements: A confidence calibration study", "year": "2010" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona Diab; Xian Li; Xi Victoria Lin; Todor Mihaylov; Myle Ott; Sam Shleifer; Kurt Shuster; Daniel Simig; Punit Singh Koura; Anjali Sridhar; Tianlu Wang; Luke Zettlemoyer", "journal": "", "ref_id": "b53", "title": "Opt: Open pretrained transformer language models", "year": "2022" }, { "authors": "Jie Shen Zheng; Kevin Huang; -Chuan Chen; Chang", 
"journal": "", "ref_id": "b54", "title": "Why does chatgpt fall short in providing truthful answers?", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 100.17, 96.44, 159.66, 12.06 ], "formula_id": "formula_0", "formula_text": "P (y ′ = y|p ′ = p) = p, ∀p ∈ [0, 1]" }, { "formula_coordinates": [ 4, 101.34, 407.51, 157.32, 69.54 ], "formula_id": "formula_1", "formula_text": "Acc(B m ) = 1 |B m | i∈Bm 1( ŷi = y i ), Conf (B m ) = 1 |B m | i∈Bm pi ," }, { "formula_coordinates": [ 4, 315.89, 202.21, 198.77, 31.72 ], "formula_id": "formula_2", "formula_text": "ECE = M m=1 |B m | N |Acc(B m ) -Conf (B m )|" } ]
2024-03-21
[ { "figure_ref": [ "fig_0" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b7", "b8", "b9", "b11", "b7", "b12", "b13", "b14", "b16", "b7", "b17", "b18", "b19", "b7", "b8", "b20" ], "table_ref": [], "text": "The rapid progress of Large Language Models (LLMs) has brought a profound impact on various domains. Notable examples include ChatGPT [1] and GPT-4 [2], which have demonstrated the ability to perform complex tasks and provide appropriate responses based on human instructions [3]- [5]. Furthermore, these models possess an understanding of their limitations in terms of capabilities [1]. The capabilities of LLMs are developed through a three-stage process. The first stage involves pre-training, where a foundation model is trained to predict subsequent words within large corpora [6]. However, while foundation models like LLaMA [7] can complete input sentences, they lack the ability to effectively respond to human instructions. To address this limitation, LLMs undergo fine-tuning on diverse instructions, leveraging desired responses as learning signals in order to generalize 1 https://github.com/lunyiliu/CoachLM to unseen instructions [8]- [10]. This process is commonly referred to as instruction tuning. Some LLMs also incorporate Reinforcement Learning (RL) pipelines to dynamically learn the boundaries of their responses, thereby avoiding the generation of harmful or sensitive content [1], [11], [12].\nAmong these techniques, instruction tuning is considered a crucial process to enhance the capabilities of LLMs by leveraging stored knowledge from pre-training and effectively aligning with human expectations [13]. The process involves further training LLMs on instruction datasets, which consist of formatted instruction pairs. As illustrated in Fig. 1, an instruction pair can be represented as (INSTRUCTION, RESPONSE), with INSTRUCTION denoting the human instruction for the model and RESPONSE representing the desired output following the instruction. Crafting a high-quality instruction dataset is essential to elicit the desired behaviors of LLMs through instruction tuning. Prominent LLMs, such as ChatGPT [1], GPT-4 [2], and Bard2 , utilize proprietary instruction datasets constructed with significant amounts of human annotation. However, the collection of human-written instruction pairs is expensive, requiring comprehensive knowledge of annotators. Alternatively, Wang et al. proposed Self-Instruct, an automatic approach to construct instruction datasets by leveraging LLMs to produce instruction pairs with high diversity [14]. With the increasing capabilities and flexibility of LLMs, instruction tuning using LLM-generated instruction datasets has emerged and rewriting of low-quality instruction pairs, leading to an average improvement of 8.4% in the win rates of our tuned Alpaca-human model, where the expert-revised subset was merged back into the ALPACA52K dataset.\n• We introduced CoachLM, an industry-friendly coach language model that automatically revises instruction pairs. CoachLM significantly increased the proportion of highquality samples in the ALPACA52K dataset, improving it from 17.7% to 78.9%. Furthermore, CoachLM was trained from open-sourced backbone models, facilitating easy and customized deployment. • We demonstrated the effectiveness of CoachLM in enhancing the instruction-following capabilities of instruction-tuned LLMs. 
Our Alpaca-CoachLM model, fine-tuned on the CoachLM-revised ALPACA52K dataset, outperformed the top-performing Alpaca variants by up to 21.5% and even stronger LLMs with more parameters and training stages." }, { "figure_ref": [], "heading": "II. METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Motivation", "publication_ref": [ "b26", "b25", "b26", "b23", "b25", "b27", "b26" ], "table_ref": [], "text": "Our work is motivated by the challenges of data quality in instruction tuning and the limitations of existing approaches.\n(1) A systematic and deeper examination on the data quality of LLM-generated instruction datasets is in need, as unguaranteed quality of instruction pairs will hinder the instruction-following abilities of subsequently tuned LLMs. Recent studies have shown that LLM-generated instruction datasets, such as the ALPACA52K dataset, contain errors in the surface form, such as invalid formats, which negatively impact the performance of LLMs. Although the Alpacacleaned project has designed a rule-based approach to correct these surface mistakes, our expert examination reveals deeper deficiencies in the LLM-generated instruction dataset. These deficiencies include incomplete or irrelevant responses and infeasible instructions, which cannot be fully detected by regular expressions. As will be discussed in Section III-C, fixing these deficiencies can further enhance model performances.\n(2) There is a need for an automated and industryfriendly approach to improve the quality of instruction datasets, which arises from the high cost associated with manual revisions on a large scale and the uncertainties introduced by relying on API-dependent LLMs. Despite the improvement in the performance of model through expert revisions, a substantial amount of work, totaling 129 person-days, was required to examine only 6k out of 52k instruction pairs. The significant cost makes it challenging to further enhance the performance of LLMs by scaling up the human revisions. Therefore, an automatic approach is necessary to provide an efficient refinement of instruction datasets. Recent approaches, such as AlpaGasus [20], have utilized off-the-shelf and cloudbased LLMs, such as ChatGPT, to automatically enhance the overall quality of instruction datasets. However, the application of such API-dependent methods is often limited in industrial scenarios due to difficulties in reproducing results caused by frequent updates to the LLM and uncertainties in accessibility due to increasingly stringent blocking strategies. Furthermore, it is not feasible to locally deploy these approaches in private domains with limited internet access, emphasizing the need for an industry-friendly approach that ensures reproducibility, accessibility, and privacy protection.\n(3) Existing filtering-based approaches have the potential to negatively impact the diversity of instruction datasets, which in turn hampers the generalization ability of LLMs. These approaches typically select a small subset of instruction pairs with high ratings from the dataset and fine-tune LLMs on this subset, resulting in improved performance compared to LLMs tuned on the full dataset [19], [20]. 
Although it has been extensively demonstrated that including low-quality instruction pairs in LLM instruction tuning diminishes the instructionfollowing capability of the models [17], [19], [21], dropping the majority of instruction pairs poses a risk of compromising the integrity of the instruction dataset, as this may lead to a lack of instructions from certain categories and a reduction in the instruction-following abilities of subsequently tuned LLMs in those areas. For instance, Chen et al. [20] observed that the high filtering ratio of code-related instruction pairs in the training dataset of AlpaGasus resulted in relatively weaker performance in responding to coding instructions. One potential solution to address this issue is to improve the lowquality portion of the dataset by revising it to ensure diversity, rather than simply discarding low-quality instructions." }, { "figure_ref": [], "heading": "B. Overview of CoachLM", "publication_ref": [], "table_ref": [], "text": "The architecture of CoachLM, our proposed model for automatic instruction pair revision, is depicted in Fig. 2. In the training stage (Fig. 2(a)), we construct an expert revision dataset consisting of original low-quality instruction pairs and their corresponding manually revised versions. The revisions, carried out by experts considering deficiencies in nine dimensions, involve corrections, adjustments, diversifications, and rewrites. Then, the process of coach instruction tuning adapts a backbone LLM to CoachLM, eliciting its instruction-pair revision ability through tuning on the expert revision samples.\nIn the inference stage, each instruction pair in an instruction dataset is input to CoachLM for revisions, resulting in a CoachLM-revised instruction dataset. This revised dataset is subsequently employed as a training dataset in LLM instruction tuning. As shown in Fig. 2(b), the displayed CoachLMrevised versions of the instruction pairs, when compared with those in the ALPACA52K dataset, alleviate ambiguity in instructions, expand the necessary reasoning process in responses, and enhance adherence to the requirements in instructions. Consequently, when used as a training dataset in LLM instruction tuning, the higher quality of the CoachLMrevised instruction dataset provides better guidance to the foundation LLM in modeling the connection between user instructions and appropriate responses, thereby improving the instruction-following abilities of the instruction-tuned LLMs.\nThe remainder of Section II is organized as follows. Section II-C introduces the expertise and grouping of the language experts involved in our work. Section II-D discusses the definition of data quality in instruction tuning and presents our criteria for evaluating the quality of instruction pairs. Section II-E describes the human revision process of instruction pairs from the ALPACA52K dataset. Section II-F provides a detailed illustration of the methodology used in the training and inference stages of CoachLM. Finally, Section II-G introduces CoachLM150, the instruction-following test set we created. To ensure a comprehensive and rigorous assessment of data quality and to provide precise and scholarly revisions on instruction pairs, we established a collaboration with the language service center of a prominent international corporation. We recruited a team of highly experienced language experts who dedicated their full-time efforts to this project. 
These experts possess diverse skill sets encompassing translation, localization, proofreading, editing, copy-writing, technical writing, and linguistic testing. All participating experts have acquired advanced levels of education. Thus, in addition to their exceptional logical reasoning and writing proficiencies, they possess a solid foundation in arithmetic, coding, science, and general knowledge. Furthermore, owing to the existence of multilingual instructions in the ALPACA52K dataset, the multiple language capabilities of our team members, such as English, Chinese, Spanish, Arabic and French, render them uniquely qualified for this project." }, { "figure_ref": [], "heading": "C. Profile of Involved Language Experts", "publication_ref": [], "table_ref": [], "text": "As shown in Table I, a total of 26 language experts participated in the study, and they were divided into three non-overlapping groups, each assigned with specific tasks. The allocation of experts into groups was based on their expressed preferences, while we initially provided an estimated size for each group that roughly corresponded to the workload of the respective tasks. Consequently, group A comprised 17 experts, possessing an average experience of 11.29 years. Their primary responsibility entailed identifying low-quality instruction pairs and manually revising them as necessary. Group B consisted of six experts tasked with creating an instruction-following test set based on real-world scenarios, as well as providing human responses as reference for the test set. Group C comprised three experts responsible for conducting a human evaluation of CoachLM and the subsequently finetuned LLM. Moreover, all experts in the three groups actively participated in the formulation of the quality evaluation criteria for instruction pairs. Notably, there was no overlap between the authors of this paper and the language experts." }, { "figure_ref": [], "heading": "D. Quality Evaluation Criteria for Instruction Pairs", "publication_ref": [ "b24", "b26", "b22", "b28", "b30", "b22", "b28", "b30" ], "table_ref": [ "tab_1" ], "text": "Before examining the data quality of the instruction dataset, it is crucial to establish a comprehensive definition of the quality of instruction pairs. Previous studies [18]- [20] generally agree that for LLMs, high-quality instruction pairs are advantageous for instruction tuning, while low-quality pairs may impede the instruction-following ability of LLMs trained on such data. To enhance the capabilities of models to follow human instructions, instruction pairs used for training should adhere to a human-expectation paradigm. Existing research [16], [22]- [24] suggests that human expectations for LLM behavior encompass various dimensions, including basic language safety and advanced expectations, such as factual correctness, contextual richness, and helpfulness of responses. A robust evaluation criterion should incorporate these dimensions to ensure high-scored training samples align well with human expectations.\nBy incorporating the dimensions outlined in existing evaluation criteria [16], [22]- [24], a comprehensive set of criteria encompassing nine different evaluation dimensions (as shown in Table II) has been proposed to assess the quality of (INSTRUCTION, RESPONSE) pairs. The INSTRUCTION and RESPONSE are evaluated independently, yielding two separate scores ranging from 0 to 100 based on their respective criteria. 
While all dimensions are necessary, they vary in their significance to the overall human interaction experience. Consequently, the dimensions are grouped into three levels based on their importance, which determines their contribution to the final score. The red-line level (e.g., safety) represents the minimum acceptable standard for human tolerance, where any violation results in a score no higher than 40. The basic level (e.g., correctness and relevance) signifies dimensions that enable effective human-model interaction, and any flaws in this level restrict the score to a maximum of 80. Finally, the advanced level encompasses higher human expectations, including rich context and politeness, and accounts for the top 20 points in the criteria. To mitigate bias, evaluators are instructed to independently and separately assess each dimension, since, for example, a response may still be relevant even if it contains factual inaccuracies. " }, { "figure_ref": [], "heading": "Feasibility", "publication_ref": [], "table_ref": [], "text": "The instruction is clear, specific, feasible, and easily understandable.\nCheck for ambiguous or vague expressions, logical errors, or requests beyond the ability of an AI model." }, { "figure_ref": [], "heading": "0-80 Readability", "publication_ref": [], "table_ref": [], "text": "The instruction adheres to the conventions and stylistic norms of the target language.\nCheck for language-related issues such as grammar, spelling, and punctuations. " }, { "figure_ref": [], "heading": "Criteria", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "40-80", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Comprehensiveness", "publication_ref": [ "b7" ], "table_ref": [], "text": "Responses comprehensively cover all necessary angles and information.\nCheck (1) No omissions or deficiencies in fully explaining user questions.\n(2) Multiple angles, sufficient contexts and details for an unbiased response." }, { "figure_ref": [], "heading": "Relevance", "publication_ref": [], "table_ref": [], "text": "Responses should be effective and direct, and provide in-topic solutions.\nCheck (1) Irrelevance: Response misinterprets user's intention; (2) Deviation: Response is related to user's topic, but deviates from the focus." }, { "figure_ref": [], "heading": "Correctness", "publication_ref": [ "b22", "b23", "b31", "b32", "b33" ], "table_ref": [ "tab_1" ], "text": "Responses should be grounded in factual information, common sense, and logical reasoning, while also staying up-to-date and adhering to the user's specific requirements.\nCheck Regarding the criteria for assessing the quality of the INSTRUCTION in an instruction pair in Table II, firstly, an INSTRUCTION should be grammatically correct and logically feasible. Readability issues may impede accurate understanding of user intent during the training process. Additionally, infeasible INSTRUCTIONS containing logical errors in the training dataset may prevent the model from learning correct connections between instructions and responses, thereby exacerbating the hallucination of tuned LLMs [16], [17], [25]. Moreover, recent studies have shown that including more contextual information and details in user instructions leads to better model responses [26], [27]. 
Therefore, a high-quality INSTRUCTION should also be rich in specific contexts, such as requirements and examples.\nSimilarly, a high-quality RESPONSE to the user's instruction ensures a desirable user experience. Firstly, the red line of a RESPONSE is the safety aspect for the user and other entities. Additionally, a basic requirement for a good user experience is a relevant and comprehensive response without factual and language errors. Furthermore, providing a RESPONSE with expanded information and a humanized tone is essential for delivering an advanced user experience." }, { "figure_ref": [], "heading": "E. Manual Instruction Revision with Experts", "publication_ref": [], "table_ref": [], "text": "In this section, we present details of the human revision process conducted on a randomly selected subset of 6k instruction pairs from the ALPACA52K dataset. " }, { "figure_ref": [], "heading": "41.7%", "publication_ref": [], "table_ref": [], "text": "Beyond Expertise: Overly professional scenes.\nGenerate the chords for an E minor scale." }, { "figure_ref": [], "heading": "27.7%", "publication_ref": [], "table_ref": [], "text": "Massive Workload: Poem or lyric requiring massive rewriting.\nFrom the given lyrics, create a haiku poem." }, { "figure_ref": [], "heading": "8.2%", "publication_ref": [], "table_ref": [], "text": "Multi-modal: Image, video and audio, which are not supported.\nList the products in the photo. Input: (photo of a grocery store)." }, { "figure_ref": [], "heading": "6.5%", "publication_ref": [ "b34", "b35", "b21" ], "table_ref": [ "tab_4", "tab_4" ], "text": "Safety: Overly toxic content, copyrighted content and sensitive content. 15.9%\n1) Preliminary Filtering: Before the primary revision, experts from group A conducted a preliminary filtering on the sampled 6k instruction pairs to exclude unsuitable pairs. As shown in Table III, a total of 1088 pairs were excluded, mainly due to missing or invalid key parts, excessive expertise or workload requirements, inclusion of unsupported multimodal information, and overly toxic or sensitive content. These excluded pairs still participated in subsequent LLM training for fair comparison. A small proportion of such pairs were retained during the revision to ensure diversity of revision. 2) Expert Revision: After excluding the 1088 filtered instruction pairs, the remaining 4.9k instruction pairs underwent the primary revision. To ensure an effective revision process, we adopted an expertise-based approach to assign instruction pairs to experts [28], [29]. Based on the categories proposed in [15], the instruction pairs were classified into three classes representing different levels of difficulty (i.e., expertise required) for revision. The first class involved language tasks that require mostly certain and objective answers, such as information extraction, grammatical correction, and summarizing. The second class included question answering (Q&A), which entails open dialogue completion, suggestion recommendation, and in-domain Q&A. Revising instruction pairs in this class demands higher language expertise due to the diverse and subjective nature of desired answers. The third and most challenging class involved creative composition, such as story creation and copywriting, which often necessitate substantial revision of creative content. 
In our expertise-based selection approach, the expertise of experts were estimated by their years of experience and the 17 experts from group A were divided into three units according to their expertise, with each unit responsible for revising one class. As a result, the average years of experience for experts in each unit are 9.4 years for language task performing, 11.2 years for Q&A, and 13.1 years for creative composition.\nIn addition, each unit was assigned an owner whose responsibility was to assess the quality of the revised instruction pairs produced by unit members. The revision process strictly adhered to the criteria outlined in Table II, following the principle of \"making all necessary revisions,\" regardless of the importance of the revised dimensions. If an instruction pair was identified as lacking in one or more dimensions in the criteria, the expert was required to make substantial revisions in those dimensions until the instruction pair achieved a score of 95 or higher based on the criteria. Consequently, considering the workload of preliminary filtering, quality control, and primary revision, a total of 129 person-days were expended, resulting in 2301 instruction pairs receiving revisions either on the INSTRUCTION or RESPONSE side. Among the 2.3k revised pairs, 1079 of them underwent revisions on INSTRUCTION.\nDuring the revision, each instruction pair may have received revisions in multiple dimensions. The revised instruction pairs were categorized based on the primary type of revisions they underwent, and the distribution of each revision category is displayed in Table IV. For revisions on the INSTRUCTION side, approximately 68.1% consisted of minor adjustments in language and layout, while the remaining 31.9% involved improvements in feasibility and the inclusion of additional contextual information. As for RESPONSES, the most common types of revisions comprised expanding the depth of the response or providing necessary supporting explanations, accounting for 43.7% of the revisions. Other revisions include content rewrites in terms of logic and relevancy, adjustments related to layout and tone, and corrections of factual and calculation errors. In order to ensure a diverse range of revisions, approximately 1.9% of the revisions were cases that should have fell into the categories listed in Table III. See more analysis details from the technical report in our repository." }, { "figure_ref": [ "fig_2", "fig_2", "fig_2" ], "heading": "F. Design of CoachLM", "publication_ref": [ "b25", "b24", "b26", "b36", "b37" ], "table_ref": [ "tab_10", "tab_1" ], "text": "The effectiveness of our criteria and revision process is evident from the advantage of Alpaca-human over Alpaca in Table IX. However, it is important to note that our manual examination only encompasses a limited portion of the AL-PACA52K dataset, leaving the quality of the majority of the dataset uncertain. Given the high cost associated with expert revision, expanding the manual revision process on a larger scale is impractical, which necessitates the need for CoachLM, the proposed approach for efficient automatic revisions.\n1) Coach Instruction Tuning: CoachLM is trained by taking content revision as a type of instruction, which LLMs can follow via instruction tuning. Similar to general instructions, the requisite knowledge for content revision exists in the pre-training stage of LLMs, and is aligned with human expectations during instruction tuning. 
For instance, contentrevision instructions found in the ALPACA52K dataset, such as \"correct the grammatical errors in the sentence\", elicit the basic capacity of instruction-tuned LLMs like Alpaca to engage in content revision. Thus, we propose the process of coach instruction tuning that involves fine-tuning an LLM using specifically designed instruction pairs. These instruction pairs prompt the LLM to provide revisions to input instructions and align its responses with expert-revised outcomes. Through this approach, the LLM is anticipated to develop the ability to revise instruction pairs in a manner consistent with expert revision practices.\nSpecifically, given an instruction dataset V of instruction pairs x = (INSTRUCTION, RESPONSE) with x ∈ V , each instruction pair x undergoes a revision through the expert revision process, resulting in a revised instruction pair x r . The expert revision dataset R is then formed, which comprises both the original and revised instruction pairs, denoted as R = {(x, x r ) | x ∈ V }. During the coach instruction tuning process, each (x, x r ) ∈ R is leveraged to construct an instruction pair x c , leading to an instruction dataset C = {x c | x ∈ V }. As shown in Fig. 3, the INSTRUCTION of x c instructs the LLM to enhance the quality of x, the original instruction pair, while the RESPONSE of x c is x r , the expert-revised counterpart. When designing the INSTRUCTION component, we provide a succinct revision instruction that highlights the primary areas for revision based on the expert revision results. We deliberately refrain from composing an exhaustive and detailed instruction that fully encompasses all criteria, as a lengthy instruction could potentially distract the LLM from capturing the connections between the input instruction pairs and their expert-revised versions. Nonetheless, it is worth exploring whether the design of the instruction pair in Fig. 3 is optimal in future research.\nGiven an LLM with parameters θ as the initial model for coach instruction tuning, training the model on the constructed instruction dataset C results in the adaption of the LLM's parameters from θ to θ c , denoted as CoachLM. Specifically, θ c is obtained by maximizing the probability of predicting the next tokens in the RESPONSE component of x c , conditioned on the INSTRUCTION of x c ∈ C, which is formulated as:\nθ c = arg max θ xc∈C log P (RESPONSE | INSTRUCTION; θ, x c ).\n(1) 2) Quality Control of Human Input: In the pre-LLM era, models were required to learn both task-specific knowledge and the alignment between task input and desired output. This is why training on negative samples was sometimes beneficial, as it provided the model with supplementary knowledge and boundaries for the task-specific information [19]. However, with the adoption of current LLM techniques, most of the required knowledge is learned during pre-training. Numerous pieces of evidence suggest that when fine-tuning an LLM through instruction tuning, the introduction of low-quality instruction pairs actually hinders the performance of the tuned LLM [18]- [20], [30]. 
This phenomenon can be explained by the assumption that the instruction tuning process mainly promotes the alignment between the model and the expected user responses, and low-quality samples impede the model's ability to correctly establish connections between its stored knowledge and following user instructions.\nThis concern also applies to the proposed coach instruction tuning process, as it may lead to sub-optimal performance of CoachLM if all the 2.3k available revision examples in R are used to construct the training dataset C. Although the expert revision process includes a quality control stage that ensures each revised instruction pair x r meets the criteria in Table II, the original instruction pair x may still influence the overall quality of the constructed instruction pair x c . If x is already in good shape, only minor revisions are made to obtain x r . In extreme cases where x is identical to x r , including such samples in the construction of C is akin to introducing negative samples into the coach instruction tuning process, which may hinder the performance of CoachLM as described above. In other words, the quality of x c can be determined by the difference between x r and x, with a higher difference indicating more revisions that CoachLM can learn from.\nTo avoid biased results from the experts, we did not impose a minimum amount of revision for each revised sample in the expert revision process. Instead, we employ the edit distance metric to assess the quality of (x, x r ) ∈ R and define α, the human input ratio, to determine the final subset of samples used in C. The edit distance, also known as the Levenshtein distance, quantifies the minimum number of single-character edits needed to transform one string into another [31]. The edit distance reflects the difference between x and x r , thereby measuring the quality of x c . Then, by defining a ratio α between 0 and 1, we can ensure that C α comprises human input samples from R with the highest α proportion of edit distances. By replacing C with C α in Eq. ( 1), we obtain a CoachLM trained with a high-quality subset of the constructed instruction dataset C.\n3) Automatic Revision with CoachLM: Through coach instruction tuning, CoachLM generates automatic revisions on input instruction pairs, creating a CoachLM-revised instruction dataset. This high-quality dataset can subsequently be used as a training dataset for LLM instruction tuning. Let D represent an input instruction dataset (e.g., the ALPACA52K dataset), consisting of instruction pairs x. Each x ∈ D is combined with the revision prompt shown in Fig. 3 to form an instruction pair x ′ c ∈ D ′ , with an empty RESPONSE to be filled by CoachLM. The CoachLM-revised instruction dataset, denoted as D c , is obtained by applying θ c , the CoachLM, on D ′ :\nD c = {θ c (x ′ c ) | x ′ c ∈ D ′ },(2)" }, { "figure_ref": [], "heading": "G. CoachLM150 Test Set", "publication_ref": [ "b20", "b21" ], "table_ref": [ "tab_1" ], "text": "As mentioned in Section II-C, the primary task of experts in group B is to create a high-quality LLM test suite called the CoachLM150 test set. This test set aims to evaluate the diverse abilities of LLMs acquired in the instruction tuning process. 
To construct this test set, the experts analyzed the categories of instructions in existing instruction tuning datasets [14], [15] and identified 42 distinct categories, including information extraction, scientific inference, dialogue completion, brainstorming, in-domain question answering, and more, to assess the instruction-following ability of LLMs.\nThe 42 categories were evenly assigned to five out of the six experts in group B. Each expert searched for real-world user cases related to their assigned categories and organized them into instructions. The sources of these user cases include tutorial websites 4 , online blogs5 , and user forums 6 . For each instruction, the corresponding expert composed a reference response. Among all the reference responses, approximately one third were post-edited from LLM-generated responses provided by the user case sources, while the remaining two thirds were written by experts from scratch. The quality control of the curated instruction pairs was performed by the remaining expert, who evaluated them based on the criteria mentioned in Table II and rejected low-quality pairs. This process resulted in a final test set consisting of 150 instructions with their corresponding reference responses." }, { "figure_ref": [], "heading": "III. EXPERIMENTS AND EVALUATIONS", "publication_ref": [ "b26", "b22", "b30" ], "table_ref": [], "text": "In Section III-A, we provide an overview of the experimental set-up of CoachLM. Section III-B investigates the effectiveness of CoachLM in enhancing the data quality of the revised instruction dataset. Section III-C assesses the performance improvement achieved by tuning the LLM using the CoachLM-revised instruction dataset. Furthermore, in Sections III-D and III-E, we conduct an ablation study on the influence of parameter settings and backbone models on CoachLM. [20] Instruction Dataset Direct Score Medium Medium GPT-4 [16] LLM Performance Comparison Medium Low PandaLM [24] LLM Performance Comparison High High 1) Evaluation Approach: In the experiment, a comprehensive evaluation of CoachLM is conducted using both automatic and human approaches, as shown in Table V." }, { "figure_ref": [], "heading": "A. Experimental Setup", "publication_ref": [ "b26", "b22", "b22", "b30", "b8", "b30", "b13", "b26", "b20", "b30", "b22", "b39" ], "table_ref": [ "tab_1", "tab_7" ], "text": "a) Human: Three experts from group C (denoted by R1, R2, and R3, respectively) independently assign scores between 0-100 to each INSTRUCTION or RESPONSE based on the criteria in Table II, unaware of the sources of rated samples. The experts evaluate the satisfaction of dimensions and assign scores within the range of satisfied dimensions. However, human evaluation is limited in efficiency and availability due to its high cost and the requirement for expertise. b) ChatGPT: Following AlpaGasus [20], the overall quality of the CoachLM-revised instruction dataset is rated using ChatGPT (i.e., the GPT-3.5-turbo API). This method prompts ChatGPT to evaluate the accuracy of the RESPONSE in an instruction pair, using a rating scale ranging from 0 to 5. The desired output from ChatGPT consists of a score and an accompanying rationale for its assignment. c) GPT-4: To evaluate the performance of LLMs, GPT-4 is used to compare and rate the RESPONSES from two candidate models [16]. A sophisticated prompt is designed by Chiang et al. [16]. 
The prompt firstly displays two candidate responses to an instruction from the test set, and asks GPT-4 to assess the relative quality of the two responses based on helpfulness, relevance, accuracy, and level of detail. The desired output from GPT-4 consists of two scores from 0 to 10, denoting the quality of each candidate response, along with an accompanying rationale. However, this approach has limitations due to its vulnerable API-dependent nature and the reported evaluation biases when swapping candidates [24], despite the strong ability of GPT-4 against humans [2].\nd) PandaLM: This open-source judge model allows for local deployment and offers efficient evaluations on LLMs [24]. By fine-tuning LLaMA [7] using 300k evaluation samples (generated by GPT-3.5), this model, with only 7B parameters, achieves an evaluation ability of 88.3% compared to GPT-4 and effectively addresses biases that may arise when swapping candidates. PandaLM takes an instruction and two candidate responses as inputs. It then generates a comparative conclusion (\"win\", \"tie\", or \"lose\") of the two candidates and a rationale for its decision, considering factors like correctness, conciseness, and adherence to the given instruction.\nTo address biases in comparison-based evaluations, we used the approach in AlpaGasus [20]. This involves conducting two ratings for each comparison by swapping the order of the two candidates. Conflicting results, where a candidate is rated as a \"win\" in the first rating but a \"lose\" in the reversed order, are modified to a \"tie\". Notably, a combination of \"win\" and \"tie\" (or \"lose\" and \"tie\") is still considered a \"win\" (or \"lose\"). 2) Instruction-following Test Sets: As shown in Table VI, in addition to the CoachLM150 test set, we also utilize three popular public LLM test sets in our experiments, namely the Self-Instruct252 test set [14], the PandaLM170 test set [24], and the Vicuna80 test set [16]. The Self-Instruct252 test set was curated by Wang et al., who provided instructions under various application scenarios such as Gmail, Twitter, and Github, along with human responses. The PandaLM170 test set was created by sampling instructions from the Self-Instruct252 test set, with reference responses generated by ChatGPT. The Vicuna80 test set comprises instructions related to writing, role-play, math, and knowledge, for which the responses from Bard were used as reference responses due to the absence of human responses.\n3) Implementation Details: The experiments were conducted using 8 NVIDIA A100 GPUs. We explored different backbone models θ and different α values for CoachLM. In our main experiment, we used ChatGLM2 [32] as the backbone model, which has 6B parameters, and set α to 0.3. To efficiently adapt the backbone LLMs, we employed LoRA [33], a partial fine-tuning technique. See detailed parameter settings in our repository. CoachLM was trained for seven epochs with a learning rate of 2 × 10 -4 . For training the instruction-following models, we utilized the same settings as the official Alpaca repository 7 , with the exception of using different instruction datasets. During the inference stage, the beam size for decoding was set to one for all models. 1) CoachLM-revised ALPACA52K Dataset: By inputting every instruction pair from the ALPACA52K dataset into CoachLM for revisions as described in Eq. ( 2), a CoachLMrevised ALPACA52K dataset was obtained. 
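The application of the trained reviser to a full dataset, as in Eq. (2), can be sketched in a few lines. The snippet below is a minimal illustration only: the prompt wording, the field names, and the stand-in generate callable are our own assumptions rather than the actual CoachLM inference code.

```python
# Minimal sketch of the batch revision step in Eq. (2); "generate" is a stand-in
# for the fine-tuned reviser model, so the example runs on its own.
REVISION_PROMPT = (
    "Below is an instruction pair. Please improve the quality of the "
    "instruction and the response.\n\n"
    "Instruction: {instruction}\nResponse: {response}\n\nImproved pair:"
)  # illustrative wording; the actual CoachLM revision prompt may differ

def revise_dataset(pairs, generate, batch_size=32):
    """Apply the reviser to every pair in D', producing the revised dataset D_c."""
    revised = []
    for start in range(0, len(pairs), batch_size):
        batch = pairs[start:start + batch_size]
        prompts = [REVISION_PROMPT.format(**p) for p in batch]
        revised.extend(generate(prompts))
    return revised

# Usage with a dummy generator that simply echoes the prompts back:
pairs = [{"instruction": "Name three primary colors.",
          "response": "Red, blue and yellow."}]
print(revise_dataset(pairs, generate=lambda prompts: prompts)[0][:30])
```

A real run would pass the batched prompts to the fine-tuned model's generation API and stream the raw outputs to disk for the post-processing described next.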
We performed automatic post-processing on the outputs of CoachLM using regular expressions to remove invalid characters and repeated strings that were occasionally produced. Approximately 1.3% of the outputs were not valid instruction pairs and were replaced with the original instruction pairs. To avoid data leakage, instructions appeared in the training of CoachLM were kept from the inference and the original samples were directly adopted, which accounted for around 1.3% as well. Three examples revised by CoachLM are shown in Fig. 2." }, { "figure_ref": [], "heading": "B. Data Quality of CoachLM-revised Instruction Dataset", "publication_ref": [ "b26", "b26" ], "table_ref": [ "tab_7", "tab_9" ], "text": "Table VII presents the statistics of the ALPACA52K dataset before and after revision, including the average length and average edit distance at the word-level. The CoachLM-revised dataset showed significant revisions on RESPONSES in most instruction pairs and resulted in longer responses on average compared with the original dataset, indicating the addition of substantial new content in the revised responses. In contrast, only around 8k instruction pairs exhibited revisions on INSTRUCTIONS. The relatively small number of revisions and nearly unchanged average length suggest that CoachLM 2) ChatGPT Evaluation: As described in Section III-A1b, ChatGPT is employed to rate the accuracy of each RESPONSE on a scale of 0-5 [20], which we utilized as an automatic quality metric for the entire dataset. Fig. 4 illustrates the significant improvement in the average rating of responses in the ALPACA52K dataset, rising from 3.95 to 4.31 after the revision by CoachLM. The original dataset had only 17.7% (around 9k as reported in [20]) of instruction pairs with a rating above 4.5. However, this ratio increased significantly to 78.9% in the CoachLM-revised dataset. This enhancement indicates that instead of refining the ALPACA52K dataset by discarding a majority of samples, the CoachLM-revised dataset predominantly consists of high-quality instruction pairs. As a result, it can positively impact the instruction tuning of LLMs, while preserving the integrity of the original dataset. 3) Human Evaluation on Data Quality: Since the evaluation approach of ChatGPT only covers RESPONSES, we performed a human evaluation to assess the quality of both the RESPONSES and INSTRUCTIONS, as described in Section III-A1a. To achieve this, we randomly selected 150 instruction pairs from the revised dataset and obtained ratings from three independent reviewers who were unaware of the sample sources. Among these pairs, 18 had modifications in terms of INSTRUCTIONS made by CoachLM. The results, presented in Table VIII, indicate that after the revision by CoachLM, both the INSTRUCTIONS and RESPONSES received higher average scores according to all three reviewers. Notably, the improvement in RESPONSES was more pronounced for the 18 " }, { "figure_ref": [ "fig_5" ], "heading": "C. Evaluation of LLM Tuned on CoachLM-revised Dataset", "publication_ref": [ "b21", "b22" ], "table_ref": [ "tab_10", "tab_11" ], "text": "In this section, we evaluate the Alpaca-CoachLM model, which is tuned using the same settings as Alpaca [15], but with the CoachLM-revised dataset replacing the ALPACA52K dataset. We also display our Alpaca-human model, with the human-revised subset merged into the full dataset.\n1) Compare Alpaca-CoachLM with Existing LLMs: a) Setup: We compare our model with two groups of existing language models (LLMs). 
The first group is Baseline LLMs, which are instruction-tuned LLMs from LLaMA with the same number of parameters (i.e., 7B) and similar amounts of training data. To further assess the boundary of Alpaca-CoachLM, we compare it with the second group of Stronger LLMs. These models have larger scales (13B), are tuned with proprietary instruction datasets (e.g., LLaMA2-chat [34], ChatGLM2 [32]), or benefit from additional feedback from RL pipelines. The four test sets used in the evaluation are described in Section II-G. For each sample in a test set, PandaLM rates the candidate response against the reference responses and produces a conclusion of \"win\", \"tie\", or \"lose\". We compute three types of win rates: (1) WR1, which considers a \"tie\" as a half-win and is calculated as WR1= #win+0.5×#tie #all\n, where #all is the number of samples in the test set; (2) WR2, which excludes tied cases and is given by WR2= #win #all-#tie ; and (3) QS, a quality score that measures the ratio of responses reaching the level of references, formulated as QS= #win+#tie #all . b) Result: The result is shown in Table IX. In addition to the advantage of Alpaca-human on win rates against Alpaca and Alpaca-cleaned, Alpaca-CoachLM further evolves after being trained on the fully revised dataset and outperforms all models in the baseline group, including the Vicuna-7b model [16], which is tuned with 70k high-quality user-shared conversations with ChatGPT. Additionally, despite being smaller in scale and trained with fewer signals, Alpaca-CoachLM achieves impressive results in the group of stronger LLMs, with the highest win rates in five out of the 12 comparisons, and outperforms the 13B Vicuna model in all test sets.\n2) Human Evaluation on Alpaca-CoachLM: In addition to automatic evaluation, human reviewers independently rated the responses generated by Alpaca-CoachLM and the original Alpaca model in the CoachLM150 test set. The reviewers were unaware of the sources of the responses. As shown in Table X, all reviewers consistently gave Alpaca-CoachLM a higher average score (ranging from 58.6 to 64.3) compared with the original Alpaca model. This improved performance of Alpaca-CoachLM further confirms the effectiveness of the revisions made by CoachLM, which successfully enhance the instruction-following ability of subsequently tuned LLMs by optimizing the quality of the underlying instruction dataset. Although the introduction of less-modified human input samples hindered the performance of Alpaca-CoachLM, the win rate of Alpaca-human steadily increases as more humanrevised samples replace the original ones in the training dataset (Fig. 5(b)). This suggests that even minor human revisions improve the quality of revised instruction pairs compared to the original counterparts, thereby enhancing the dataset used to train Alpaca-human. Based on linear fitting (R 2 = 0.9799), the win rate of Alpaca-human increases at a rate of 3.07%/k and is estimated to surpass Alpaca-CoachLM with 7.3k humanrevised samples. Notably, Alpaca-CoachLM only requires around 0.7k human-revised samples, highlighting the costsaving advantage of CoachLM in expert labor, as it achieves the same model performance with only 9.45% human input. Given that their human annotation guidelines also encompass dimensions such as the feasibility of instructions, as well as the correctness, richness, and helpfulness of responses, the integration of CoachLM can serve as an improved precursor for human revisions, thus mitigating the manual workload." 
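To make the α-based subset selection concrete, the following sketch ranks the expert-revision examples by Levenshtein distance between the original and revised pair text and keeps the top-α fraction, mirroring the construction of C_α in Section II-F2. It is a minimal illustration under our own simplifications (character-level distance over the flattened pair text, a toy dataset); the paper does not publish its exact selection code.

```python
def edit_distance(a, b):
    """Levenshtein distance between two sequences (dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def select_top_alpha(revision_pairs, alpha=0.3):
    """Keep the alpha fraction of (original, revised) pairs with the largest edits."""
    ranked = sorted(revision_pairs,
                    key=lambda pair: edit_distance(pair[0], pair[1]),
                    reverse=True)
    keep = max(1, int(round(alpha * len(ranked))))
    return ranked[:keep]

# Toy usage: the heavily revised pair is kept, the untouched one is dropped.
examples = [
    ("Name three colors. Red, blue, yellow.",
     "Name three primary colors and explain what makes them primary. Red, blue and yellow are primary because they cannot be mixed from other colors."),
    ("What is 2+2? 4.", "What is 2+2? 4."),
]
print(len(select_top_alpha(examples, alpha=0.5)))  # -> 1
```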
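The win-rate metrics used in this evaluation, together with the order-swapping reconciliation rule of Section III-A1, can also be restated compactly in code. The sketch below is our own minimal re-implementation of the rules as described (WR1 counts a tie as half a win, WR2 excludes tied cases, and QS is the share of responses reaching the reference level); it is not the evaluation script used in the paper.

```python
def reconcile(verdict_ab, verdict_ba):
    """Combine two verdicts for candidate A obtained with swapped candidate order.

    Both verdicts are expressed from A's point of view. Conflicting results are
    downgraded to a tie, while "win"+"tie" (or "lose"+"tie") keeps the decision.
    """
    pair = {verdict_ab, verdict_ba}
    if pair == {"win", "lose"}:
        return "tie"
    if "win" in pair:
        return "win"
    if "lose" in pair:
        return "lose"
    return "tie"

def win_rates(verdicts):
    """Compute WR1, WR2 and QS from a list of reconciled verdicts."""
    n_all = len(verdicts)
    n_win = verdicts.count("win")
    n_tie = verdicts.count("tie")
    wr1 = (n_win + 0.5 * n_tie) / n_all
    wr2 = n_win / (n_all - n_tie) if n_all > n_tie else 0.0
    qs = (n_win + n_tie) / n_all
    return wr1, wr2, qs

# Toy usage: 3 wins, 1 tie and 1 loss out of 5 comparisons.
verdicts = ["win", "win", "win", "tie", "lose"]
print(win_rates(verdicts))  # (0.7, 0.75, 0.8)
```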
}, { "figure_ref": [], "heading": "E. Different Backbone Models of CoachLM", "publication_ref": [], "table_ref": [], "text": "As of the time of writing this paper, the deployed CoachLM has successfully involved in the production of an entire batch of high-quality instruction pairs (approximately 40k). The inference process of CoachLM was executed on 1 NVIDIA A100 GPU with an inference batch size of 32, achieving an average speed of 1.19 samples per second. A comparative analysis between the current batch of data cleaning and the previous batch (with online models unchanged) reveals that the integration of CoachLM, with its revised instruction pairs serving as a precursor for human annotators, has resulted in an increase in the production efficiency of high-quality instruction pairs from around 80 per person-day to nearly 100 per person-day, while adhering to the same acceptance criteria as the previous batch. After deducting the improvement of efficiency brought by enhanced proficiency of human experts in annotation, the net improvement brought by CoachLM is estimated to be around 15-20%, which is a significant cost saving since the inference of CoachLM on 100 samples only costs around two minutes." }, { "figure_ref": [], "heading": "B. Feedbacks of CoachLM from Experts", "publication_ref": [], "table_ref": [], "text": "During the evaluation and practice of CoachLM, comments from the participating experts were actively encouraged and collected. One of the human evaluators provided feedback indicating that the responses revised by CoachLM \"generally provide more pervasive points, especially in mathematics and logical problems\". Moreover, a practitioner commented that \"CoachLM significantly augments the raw instruction pair by generating a more comprehensive structure of content, thereby enhancing the efficiency of subsequent human post-editing tasks in comparison to manual composition of the structure\".\nHowever, there were also some concerns raised. One evaluator described a case where CoachLM did not correct the inclusion of hallucinated content but instead assumed it to be factual and further expanded upon it. Additionally, another evaluator highlighted that for certain straightforward instructions, such as determining the sum of two numbers, the level of detail in the responses revised by CoachLM may be excessive. These valuable feedbacks shed light on potential future directions for enhancing the performance of CoachLM, including refining the evaluation criteria and integrating RL signals to mitigate the occurrence of hallucinations." }, { "figure_ref": [], "heading": "V. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Instruction-following LLMs", "publication_ref": [ "b14", "b15", "b7", "b41", "b20", "b21", "b30", "b26", "b22", "b23" ], "table_ref": [], "text": "The initial investigation of the instruction-following ability of LLMs involves fine-tuning the models on a combination of multiple verbalized Natural Language Processing (NLP) datasets [8], [9], demonstrating impressive generalization capabilities across various unseen tasks. Subsequently, instead of fine-tuning on a single task-related dataset, the mainstream LLMs have shifted towards being fine-tuned on complex human-curated instruction datasets [1], [32], [34], [35]. 
Due to the expertise requirement and high cost associated with this approach, Alpaca [14], [15] provides an automated method to create instruction datasets by distilling the knowledge of a teacher LLM (e.g., GPT-3.5). Various variants of Alpaca have been developed, including hyper-parameter optimization (Alpaca-PandaLM [24]), subset filtering (AlpaGasus [20]), and noise cleaning (Alpaca-cleaned). Additionally, studies have explored the use of real-world user dialogue data with ChatGPT to perform instruction tuning [16], [17]." }, { "figure_ref": [], "heading": "B. Data Quality in LLM", "publication_ref": [ "b42", "b44", "b23", "b25", "b36", "b26", "b25", "b27" ], "table_ref": [], "text": "Over the past decade, efforts have been made to improve the data quality within the AI/ML lifecycle [36]- [38]. When creating training datasets for LLMs, it is widely recognized that the quality of the data is more important than the quantity [17]- [19], [30], [34]. In fact, the introduction of low-quality data can harm the performance of the models. This issue is particularly pronounced in machine-generated instruction datasets, as evidenced by AlpaGasus [20], which found that out of the 52k instruction pairs in the ALPACA52K dataset, only 9k were of high quality. In addition to filtering-based approaches [19]- [21], the Alpaca-cleaned project explored an improvement-based approach with rule-based cleaning on a small subset of the dataset." }, { "figure_ref": [], "heading": "C. LLMs for Data Engineering in Industry", "publication_ref": [ "b45", "b46", "b47", "b48", "b50", "b51" ], "table_ref": [], "text": "LLM-based approaches have been increasingly utilized in various real-world data engineering tasks. For instance, Ahmed et al. [39] employed fine-tuned GPT-3.x models to facilitate cloud incident management at Microsoft. Chen et al. [40] leveraged the semantic matching capabilities of LLMs to develop a multi-vendor configuration management tool at Huawei. LLM-based programming assistants, such as Copilot [41], have been successfully deployed in code data analysis applications, providing accurate code understanding and recommendations [42]- [44]. Additionally, Liu et al. utilized LLMs to automate high-precision data analysis on tabular datasets, implementing their approach in an LCD factory and a solar cell factory [45].\nIn comparison to existing studies, our work focuses on improving data quality in LLM training and thereby can be integrated into industrial LLM applications to improve data engineering performance. We validates the feasibility of expertaligned revisions on instruction pairs from the entire instruction dataset. Compared with filtering-based approaches, our approach maintains the integrity of the dataset and increases the proportion of high-quality samples, thereby resulting in better performance improvements of LLMs." }, { "figure_ref": [], "heading": "VI. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this study, we propose CoachLM, a novel approach to tackle the issue of unguaranteed data quality in LLM instruction tuning. Owing to the ability of automatic revisions aligned with language experts, CoachLM effectively enhances the proportion of high-quality samples in the ALPACA52K dataset, resulting in notable performance improvements in instruction-tuned LLMs. 
Additionally, the successful deployment of CoachLM in an industrial-level data management system highlights its potential advantages in the operation and maintenance lifecycle of LLMs, reducing costs associated with manual data cleaning and labeling. Future work includes training CoachLM on a larger scale of parameters, integrating RL pipelines to mitigate hallucination and validating it using a more diverse range of instruction datasets." } ]
Instruction tuning is crucial for enabling large language models (LLMs) to respond to human instructions. The quality of instruction pairs used for tuning greatly affects the performance of LLMs. However, the manual creation of high-quality instruction datasets is costly, leading to the adoption of automatic generation of instruction pairs by LLMs as a popular alternative. To ensure the high quality of LLM-generated instruction datasets, several approaches have been proposed. Nevertheless, existing methods either compromise dataset integrity by filtering a large proportion of samples, or are unsuitable for industrial applications. In this paper, instead of discarding low-quality samples, we propose CoachLM, a novel approach to enhance the quality of instruction datasets through automatic revisions on samples in the dataset. CoachLM is trained on samples revised by human experts and significantly increases the proportion of high-quality samples in the dataset from 17.7% to 78.9%. The effectiveness of CoachLM is further assessed on various real-world instruction test sets. The results show that CoachLM improves the instruction-following capabilities of the instruction-tuned LLM by an average of 29.9%, which even surpasses larger LLMs with nearly twice the number of parameters. Furthermore, CoachLM is successfully deployed in a data management system for LLMs at Huawei, resulting in an efficiency improvement of up to 20% in the cleaning of 40k real-world instruction pairs. We release various assets of CoachLM, including the training data, code and test set.
CoachLM: Automatic Instruction Revisions Improve the Data Quality in LLM Instruction Tuning
[ { "figure_caption": "Fig. 1 .1Fig. 1. Illustration of instruction tuning LLMs on pairs of INSTRUCTION and RESPONSE.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "( a )aThe training process of CoachLM (b) The workflow of CoachLM in boosting LLM instruction tuning Fig. 2. Illustration of CoachLM: (a) in the training stage and (b) in the inference stage. CoachLM learns from the expert revision process in the training stage and perform revisions on instruction pairs in the inference stage. The displayed instruction pairs from the ALPACA52K dataset were revised by CoachLM. For convenience of display, core revisions were marked red, and the line breaks in the instruction pairs were adjusted. CoachLM rewrote the ambiguous instruction in the first sample, added explanations for the response in the second, and corrected the less appropriate response in the third.", "figure_data": "", "figure_id": "fig_1", "figure_label": "a", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Illustration on format of the instruction pairs xc in the coach instruction tuning. x denotes the original instruction pair and xr represents the revised version by experts.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "7 https://github.com/tatsu-lab/stanford alpaca primarily adjusted the logical and linguistic aspects of the INSTRUCTIONS without adding much new content.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "31 Fig. 4 .314(a) Before: Average score is 3.95 (b) After: Average score is 4.Histogram of ratings by ChatGPT on the whole ALPACA52K dataset before and after CoachLM revision.", "figure_data": "", "figure_id": "fig_4", "figure_label": "314", "figure_type": "figure" }, { "figure_caption": "5 .5Win rates of (a) Alpaca-CoachLM and (b) Alpaca-human against reference responses in the CoachLM150 test set with varying human input ratio α, rated by GPT-4 and PandaLM. α represents ratio of human input used for training, with amount of human revision sorted from largest to smallest. α=0 means no human input in training and α=1 means the full human input is used. The displayed win rate is the average of WR1, WR2 and QS. CoachLM models and subsequently tuned Alpaca-CoachLM models. Fig. 5(a) shows the performance of Alpaca-CoachLM for different α values. Both the ratings by PandaLM and GPT-4 demonstrate a similar trend, with the highest win rate observed at α=0.3. The win rate of Alpaca-CoachLM increases as α goes from 0 to 0.3, indicating the importance of high-quality expert knowledge in achieving desirable revision ability for CoachLM. However, as α increases beyond 0.3, the inclusion of samples with fewer modifications introduces noise in aligning CoachLM with experts, potentially lowering the quality of the CoachLM-revised dataset and decreasing the win rates of the tuned Alpaca-CoachLM. Nevertheless, the reduction in win rate caused by this noise is at most around 10%, demonstrating the relative robustness of CoachLM.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "To further assess the robustness of CoachLM, we trained it with three different open-sourced backbone models: LLaMA, ChatGLM, and ChatGLM2. The win rates of the subsequently acquired Alpaca-CoachLM model on the CoachLM150 test set, evaluated by PandaLM, are displayed in TableXI. 
In this experiment, we kept the value of α fixed at 1. Our results show that Alpaca-CoachLM outperforms the original Alpaca under all backbone models, indicating the robustness of CoachLM across different backbones. Notably, we observed improved performance from LLaMA, the foundation LLM, to RL-tuned ChatGLM2, suggesting that more powerful backbones enhance the alignment ability with experts in coach instruction tuning.IV. DISCUSSIONA. CoachLM in Practice", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. Architecture of an LLM data management system at Huawei integrated with CoachLM. CoachLM automatically cleans noisy instruction pairs and mitigates human workload in data cleaning.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "EVALUATION CRITERIA FOR THE QUALITY OF INSTRUCTION PAIRS", "figure_data": "Criteria for INSTRUCTIONLevelDimensionDescriptionMain ChecklistScore RangeAdvanced RequirementContextualizationThe instruction includes a rich context or effective prompt-ing skills to facilitate detailed and accurate responses.Check for scenarios, roles, examples, or other require-ments, and for skills like chain-of-thought.80-100BasicRequirement", "figure_id": "tab_1", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "DISTRIBUTION OF THE 1088 EXCLUDED INSTRUCTION PAIRS", "figure_data": "ReasonExampleRatioInvalid Input: The key contentof the instruction is invalid.", "figure_id": "tab_4", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "SETS ON INSTRUCTION-FOLLOWING ABILITY OF LLMS", "figure_data": "NameSizeNumber of CategoriesReference ResponseCoachLM15015042HumanPandaLM170 [24]17011ChatGPTVicuna80 [16]809BardSelf-Instruct252 [14]25215Human", "figure_id": "tab_7", "figure_label": "VI", "figure_type": "table" }, { "figure_caption": "RATINGS ON A SUBSET OF THE COACHLM-REVISED DATASET", "figure_data": "DatasetINSTRUCTIONRESPONSER1 R2 R3 Avg. R1 R2 R3 Avg.Randomly Sampled 150 Instruction PairsOriginal----71.1 71.2 71.3 71.2CoachLM-revised----73.9 77.2 74.0 75.018 Samples in the Subset with Modified INSTRUCTIONSOriginal76.6 74.7 77.2 76.2 67.9 70.0 68.4 68.8CoachLM-revised 78.3 79.6 79.1 79.0 75.3 81.8 75.6 77.6", "figure_id": "tab_9", "figure_label": "VIII", "figure_type": "table" }, { "figure_caption": "RATES OF LLMS AGAINST REFERENCE RESPONSES ON FOUR INSTRUCTION-FOLLOWING TEST SETS RATED BY PANDALM", "figure_data": "ModelSizeType aCoachLM150PandaLM170Vicuna80Self-instruct252WR1 WR2 QSWR1 WR2QSWR1 WR2QSWR1 WR2QSStronger LLMssamples with modified INSTRUCTIONS compared with the en-tire subset, implying the importance of a feasible and accurateINSTRUCTION in enhancing the quality of RESPONSE.", "figure_id": "tab_10", "figure_label": "IX", "figure_type": "table" }, { "figure_caption": "EVALUATION ON ALPACA-COACHLM AND ALPACAAs is described in Section II-F2, α determines the fraction of human input with high-quality revisions used in training. A higher α implies that a larger proportion of revision examples with highest edit distance is utilized. For Alpaca-CoachLM, when α is set to 1, all 2.3k expert revision examples are used for CoachLM training, while a value of 0 means no training and the backbone model (ChatGLM2) is used directly for revision. By varying α, we obtain different trained", "figure_data": "ModelR1R2R3Avg.Alpaca56.6 58.2 60.958.6Alpaca-CoachLM 61.4 66.9 64.764.3D. 
Impact of Human Input Ratio α", "figure_id": "tab_11", "figure_label": "X", "figure_type": "table" } ]
Yilun Liu; Shimin Tao; Xiaofeng Zhao; Ming Zhu; Wenbing Ma; Junhao Zhu; Chang Su; Yutai Hou; Miao Zhang; Min Zhang; Hongxia Ma; Li Zhang; Hao Yang; Yanfei Jiang
[ { "authors": "", "journal": "LLaMA", "ref_id": "b0", "title": "b-chat", "year": "" }, { "authors": "", "journal": "B RL-tuned", "ref_id": "b1", "title": "", "year": "0195" }, { "authors": "", "journal": "B I", "ref_id": "b2", "title": "", "year": null }, { "authors": "", "journal": "B RL-tuned", "ref_id": "b3", "title": "", "year": "1994" }, { "authors": "", "journal": "B RL-tuned", "ref_id": "b4", "title": "", "year": null }, { "authors": "", "journal": "B RL-tuned", "ref_id": "b5", "title": "% Baseline LLMs Vicuna-7b [] 7B I-tuned 60", "year": null }, { "authors": "", "journal": "B I", "ref_id": "b6", "title": "3% 82.9% Alpaca-human (ours) 7B I-tuned 52", "year": "0491" }, { "authors": "L Ouyang; J Wu; X Jiang; D Almeida; C Wainwright; P Mishkin; C Zhang; S Agarwal; K Slama; A Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b7", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": " Openai", "journal": "", "ref_id": "b8", "title": "Gpt-4 technical report", "year": "2023" }, { "authors": "K S Kalyan", "journal": "Natural Language Processing Journal", "ref_id": "b9", "title": "A survey of gpt-3 family large language models including chatgpt and gpt-4", "year": "2023" }, { "authors": "T H Kung; M Cheatham; A Medenilla; C Sillos; L De Leon; C Elepaño; M Madriaga; R Aggabao; G Diaz-Candido; J Maningo", "journal": "PLoS digital health", "ref_id": "b10", "title": "Performance of chatgpt on usmle: Potential for ai-assisted medical education using large language models", "year": "2023" }, { "authors": "A Askari; M Aliannejadi; E Kanoulas; S Verberne", "journal": "", "ref_id": "b11", "title": "A test collection of synthetic documents for training rankers: Chatgpt vs. 
human experts", "year": "2023" }, { "authors": "T Brown; B Mann; N Ryder; M Subbiah; J D Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell", "journal": "Advances in neural information processing systems", "ref_id": "b12", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "H Touvron; T Lavril; G Izacard; X Martinet; M.-A Lachaux; T Lacroix; B Rozière; N Goyal; E Hambro; F Azhar", "journal": "", "ref_id": "b13", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "C Raffel; N Shazeer; A Roberts; K Lee; S Narang; M Matena; Y Zhou; W Li; P J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b14", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "J Wei; M Bosma; V Zhao; K Guu; A W Yu; B Lester; N Du; A M Dai; Q V Le", "journal": "", "ref_id": "b15", "title": "Finetuned language models are zero-shot learners", "year": "2021" }, { "authors": "S Mishra; D Khashabi; C Baral; H Hajishirzi", "journal": "", "ref_id": "b16", "title": "Cross-task generalization via natural language crowdsourcing instructions", "year": "2022" }, { "authors": "P F Christiano; J Leike; T Brown; M Martic; S Legg; D Amodei", "journal": "Advances in neural information processing systems", "ref_id": "b17", "title": "Deep reinforcement learning from human preferences", "year": "2017" }, { "authors": "A Havrilla; M Zhuravinskyi; D Phung; A Tiwari; J Tow; S Biderman; Q Anthony; L Castricato", "journal": "", "ref_id": "b18", "title": "trlx: A framework for large scale reinforcement learning from human feedback", "year": "2023" }, { "authors": "S Zhang; L Dong; X Li; S Zhang; X Sun; S Wang; J Li; R Hu; T Zhang; F Wu", "journal": "", "ref_id": "b19", "title": "Instruction tuning for large language models: A survey", "year": "2023" }, { "authors": "Y Wang; Y Kordi; S Mishra; A Liu; N A Smith; D Khashabi; H Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Self-instruct: Aligning language models with self-generated instructions", "year": "2023-07" }, { "authors": "R Taori; I Gulrajani; T Zhang; Y Dubois; X Li; C Guestrin; P Liang; T B Hashimoto", "journal": "", "ref_id": "b21", "title": "Stanford alpaca: An instruction-following llama model", "year": "2023" }, { "authors": "W.-L Chiang; Z Li; Z Lin; Y Sheng; Z Wu; H Zhang; L Zheng; S Zhuang; Y Zhuang; J E Gonzalez; I Stoica; E P Xing", "journal": "", "ref_id": "b22", "title": "Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality", "year": "2023-03" }, { "authors": "X Geng; A Gudibande; H Liu; E Wallace; P Abbeel; S Levine; D Song", "journal": "", "ref_id": "b23", "title": "Koala: A dialogue model for academic research", "year": "2023-04" }, { "authors": "C Zhou; P Liu; P Xu; S Iyer; J Sun; Y Mao; X Ma; A Efrat; P Yu; L Yu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b24", "title": "Lima: Less is more for alignment", "year": "2024" }, { "authors": "M Li; Y Zhang; Z Li; J Chen; L Chen; N Cheng; J Wang; T Zhou; J Xiao", "journal": "", "ref_id": "b25", "title": "From quantity to quality: Boosting llm performance with self-guided data selection for instruction tuning", "year": "2023" }, { "authors": "L Chen; S Li; J Yan; H Wang; K Gunaratna; V Yadav; Z Tang; V Srinivasan; T Zhou; H Huang; H Jin", "journal": "", "ref_id": "b26", "title": "Alpagasus: Training a better alpaca model with fewer data", "year": 
"2024" }, { "authors": "Y Cao; Y Kang; L Sun", "journal": "", "ref_id": "b27", "title": "Instruction mining: High-quality instruction data selection for large language models", "year": "2023" }, { "authors": "C Xu; Q Sun; K Zheng; X Geng; P Zhao; J Feng; C Tao; Q Lin; D Jiang", "journal": "", "ref_id": "b28", "title": "WizardLM: Empowering large pre-trained language models to follow complex instructions", "year": "2024" }, { "authors": "N Rajani; N Lambert; S Han; J Wang; O Nitski; E Beeching; L Tunstall", "journal": "", "ref_id": "b29", "title": "Can foundation models label data like humans?", "year": "2023" }, { "authors": "Y Wang; Z Yu; Z Zeng; L Yang; W Yao; C Wang; H Chen; C Jiang; R Xie; J Wang; X Xie; W Ye; S Zhang; Y Zhang", "journal": "", "ref_id": "b30", "title": "PandaLM: An automatic evaluation benchmark for LLM instruction tuning optimization", "year": "2024" }, { "authors": "Z Ji; N Lee; R Frieske; T Yu; D Su; Y Xu; E Ishii; Y J Bang; A Madotto; P Fung", "journal": "ACM Computing Surveys", "ref_id": "b31", "title": "Survey of hallucination in natural language generation", "year": "2023" }, { "authors": "J Wei; X Wang; D Schuurmans; M Bosma; F Xia; E Chi; Q V Le; D Zhou", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b32", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "P Lu; S Mishra; T Xia; L Qiu; K.-W Chang; S.-C Zhu; O Tafjord; P Clark; A Kalyan", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b33", "title": "Learn to explain: Multimodal reasoning via thought chains for science question answering", "year": "2022" }, { "authors": "R Shang; Y Ma; F Ali; C Hu; S Nazir; H Wei; A Khan", "journal": "Scientific Programming", "ref_id": "b34", "title": "Selection of crowd in crowdsourcing for smart intelligent applications: A systematic mapping study", "year": "2021" }, { "authors": "X Fang; S Si; G Sun; Q Z Sheng; W Wu; K Wang; H Lv", "journal": "Future Internet", "ref_id": "b35", "title": "Selecting workers wisely for crowdsourcing when copiers and domain experts co-exist", "year": "2022" }, { "authors": "L Wei; Z Jiang; W Huang; L Sun", "journal": "", "ref_id": "b36", "title": "Instructiongpt-4: A 200-instruction paradigm for fine-tuning minigpt-4", "year": "2023" }, { "authors": "V I Levenshtein", "journal": "Soviet physics doklady", "ref_id": "b37", "title": "Binary codes capable of correcting deletions, insertions, and reversals", "year": "1966" }, { "authors": "Z Du; Y Qian; X Liu; M Ding; J Qiu; Z Yang; J Tang", "journal": "", "ref_id": "b38", "title": "Glm: General language model pretraining with autoregressive blank infilling", "year": "2022" }, { "authors": "E J Hu; P Wallis; Z Allen-Zhu; Y Li; S Wang; L Wang; W Chen", "journal": "", "ref_id": "b39", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "H Touvron; L Martin; K Stone; P Albert; A Almahairi; Y Babaei; N Bashlykov; S Batra; P Bhargava; S Bhosale", "journal": "", "ref_id": "b40", "title": "Llama 2: Open foundation and fine-tuned chat models", "year": "2023" }, { "authors": "M Conover; M Hayes; A Mathur; J Xie; J Wan; S Shah; A Ghodsi; P Wendell; M Zaharia; R Xin", "journal": "", "ref_id": "b41", "title": "Free dolly: Introducing the world's first truly open instruction-tuned llm", "year": "2023" }, { "authors": "X Chu; I F Ilyas; S Krishnan; J Wang", "journal": "", "ref_id": "b42", "title": "Data cleaning: Overview and emerging challenges", "year": 
"2016" }, { "authors": "L Schmarje; M Santarossa; S.-M Schröder; C Zelenka; R Kiko; J Stracke; N Volkmann; R Koch", "journal": "Springer", "ref_id": "b43", "title": "A data-centric approach for improving ambiguous labels with combined semi-supervised classification and clustering", "year": "2022" }, { "authors": "D Sanderson; T Kalganova", "journal": "Springer", "ref_id": "b44", "title": "Maintaining performance with less data: Understanding useful data", "year": "2023" }, { "authors": "T Ahmed; S Ghosh; C Bansal; T Zimmermann; X Zhang; S Rajmohan", "journal": "", "ref_id": "b45", "title": "Recommending root-cause and mitigation steps for cloud incidents using large language models", "year": "2023-05" }, { "authors": "H Chen; Y Miao; L Chen; H Sun; H Xu; L Liu; G Zhang; W Wang", "journal": "", "ref_id": "b46", "title": "Software-defined network assimilation: bridging the last mile towards centralized network configuration management with nassim", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b47", "title": "Github copilot -your ai pair programmer", "year": "2023-03-13" }, { "authors": "F F Xu; U Alon; G Neubig; V J Hellendoorn", "journal": "", "ref_id": "b48", "title": "A systematic evaluation of large language models of code", "year": "2022" }, { "authors": "Y Li; D Choi; J Chung; N Kushman; J Schrittwieser; R Leblond; T Eccles; J Keeling; F Gimeno; A Lago", "journal": "Science", "ref_id": "b49", "title": "Competitionlevel code generation with alphacode", "year": "2022" }, { "authors": "B Yetistiren; I Ozsoy; E Tuzun", "journal": "", "ref_id": "b50", "title": "Assessing the quality of github copilot's code generation", "year": "2022" }, { "authors": "S.-C Liu; S Wang; T Chang; W Lin; C.-W Hsiung; Y.-C Hsieh; Y.-P Cheng; S.-H Luo; J Zhang", "journal": "", "ref_id": "b51", "title": "Jarvix: A llm no code platform for tabular data analysis and optimization", "year": "2023" } ]
[ { "formula_coordinates": [ 7, 51.57, 545.37, 245.85, 20.06 ], "formula_id": "formula_0", "formula_text": "θ c = arg max θ xc∈C log P (RESPONSE | INSTRUCTION; θ, x c )." }, { "formula_coordinates": [ 7, 383.73, 622.1, 179.3, 12.69 ], "formula_id": "formula_1", "formula_text": "D c = {θ c (x ′ c ) | x ′ c ∈ D ′ },(2)" } ]
2023-11-22
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [], "table_ref": [], "text": "The development of generative AI signifies a radical change in the capabilities of technology. With little more than a text input, powerful machine learning models such as GPT-3 and DALL-E can now produce amazingly convincing text, graphics, music, video, and more. Future uses of this cutting-edge technology have the potential to transform a wide range of sectors, from expediting scientific research to building whole virtual worlds for entertainment. But generative AI also raises important issues that society is only now starting to address, including as security, intellectual property, ethics, and more. This study will examine the immense potential as well as the many problems that this quickly developing sector presents. The paper will look at significant advancements in the history of generative AI, ranging from early attempts at creating procedurally in video games to more current deep learning advances that allow for remarkably human-like creative output. Following this development, the paper will address cuttingedge businesses and research facilities that are presently setting new standards in this field, as well as technologies that have enhanced capabilities like text and picture synthesis. Following an overview of the history and state-of-the-art methods in generative AI, the paper will present specific instances of its application in a variety of industries, including the creative arts, healthcare, and education. Generic artificial intelligence (AI) has the potential to transform various industries, such as drug discovery and tutoring, provided it is carefully incorporated and controlled. The risks posed by the widespread use of these systems, including algorithmic bias, disinformation, and intellectual property infringement, will also be critically examined in this paper. The investigation of the pressing issues regarding ethics and security raised by generative AI will be informed by recent controversies surrounding defective outputs and the risks of widespread media manipulation. The overall goal of this paper is to provide a thorough overview of this quickly developing technology's past, present, and potential future. We can try to improve generative AI's advantages while proactively reducing its risks by carefully examining its potential and highlighting areas that call for caution. This thoughtful examination of the great potential as well as the many hazards will add depth to important conversations about getting ready for the responsible development and application of one of the most revolutionary technologies to emerge in decades." }, { "figure_ref": [], "heading": "II. EXTENT AND IMPACT OF GENERATIVE AI", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. AI Transforming Industries and Consumers", "publication_ref": [], "table_ref": [], "text": "Generative AI is already being used in a variety of realworld applications that are transforming industries and im- Using generative AI, text, photos, audio, and video content can all be produced automatically. Businesses use this capability to produce music, art, blog posts, articles, social media posts, and marketing materials. The potential for generative AI to enable rapid, personalised, and low-cost content creation could upend entire industries, such as graphic design, music composition, and journalism. 
Product design and development can be completed more quickly and effectively thanks to generative AI. It is employed in the design of new consumer goods, including furniture, clothes, medications, and other items. Businesses can accelerate the release of innovative products onto the market by automating certain aspects of the design process. Workflows and economics in product development could be greatly impacted by this technology. Scientists benefit from generative AI's ability to suggest novel theories and research directions. It has already been applied to the creation of fresh drug candidates and substances. Generative AI may quicken the pace of innovation across all scientific fields by supporting human researchers. These generative AI capabilities are being leveraged by both startups and major corporations. Prominent instances comprise OpenAI's DALL-E 2 for producing lifelike images from text, Anthropic's Claude for engaging in natural language conversations, and Alphabet's AlphaFold for protein structure prediction. Rapid consumer adoption of generative AI is also being made possible by the introduction of tools like OpenAI's GPT-3 language model. -The generation of text and images is led by incumbents.\n-Code generation is becoming more popular among new startups.\n-Over 200 percent more traffic has been coming to these products on average over the last 12 months. According to this analysis, there is a sizable demand from users for intuitive generative AI applications. The next \"big winner\" in the generative AI market might be the business that can effectively blend functionality and power in a single product." }, { "figure_ref": [ "fig_1" ], "heading": "III. INSIGHTS FROM TOP GENERATIVE AI COMPANIES", "publication_ref": [], "table_ref": [], "text": "Rapid advancements in generative AI have occurred in recent years, leading to the emergence of new products and capabilities that are changing markets and affecting consumers. Several significant trends are revealed by analysing the top 50 consumer-facing generative AI companies:Eighty percent of the top 50 companies are new, having launched within the last year. This demonstrates how quickly generative AI is developing. 48 percent of the companies are fully self-funded startups, and only five have ties to large tech companies. This raises the prospect of developing innovative AI products with little outside funding.\nChatGPT Has Early Dominance Currently, ChatGPT completely rules the consumer AI market. It is the 24th most visited website worldwide as of June 2023, receiving 1.6 billion visits each month. CharacterAI, the second-biggest player, only accounts for 21 percent of ChatGPT's traffic. ChatGPT is still smaller than most mainstream websites, comparable to sites like Reddit and LinkedIn. However, it has grown extremely quickly.checkout the \"Fig. 2\".\nText Generation Leads, Creative Tools Rising Chatbots like ChatGPT account for 68 percent of consumer traffic, making text generation the most popular application. However, creative tools like image, music, and video generation are rapidly gaining traction. Image generation has 41 percent of creative tool traffic, writing tools have 26 percent, and video generation 8 percent as shown in the \"Fig. 3\"." }, { "figure_ref": [], "heading": "IV. RISKS OF GENERATIVE AI", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. 
Balancing Progress and Responsibility", "publication_ref": [], "table_ref": [], "text": "Though a revolutionary technology, generative AI is not without risk. The possibility of false information and deepfakes being created is one major worry. Images, videos, and audio recordings can all be produced by generative AI that will look very real to the viewer. Misinformation and malicious impersonation could be disseminated through this. The automation of tasks currently performed by human professionals like journalists, copywriters, and graphic designers poses another risk of job displacement. This is due to the development of generative AI.\nFurthermore, prejudice and discrimination can still affect generative AI systems. Given that they receive training on large-scale datasets, any biases inherent in the data may find their way into the content they produce, which may serve to reinforce negative stereotypes. Developing techniques to recognise and stop the spread of false content is essential to mitigating these risks. Furthermore, it is crucial to guarantee that generative AI systems are trained on objective data and to encourage their ethical and responsible use. Although generative AI has the potential to revolutionise a number of industries and aspects of our lives, we must constantly be aware of the risks it entails and take proactive steps to address and mitigate them. With its rapid advancement and potential to transform industries and societal aspects, generative AI demands a thorough investigation of its causes and effects in order to ensure responsible and ethical application. The proliferation of large datasets, which enable models to understand Significant effects of generative AI are already being seen in a variety of industries. Creative domains give rise to avant-garde literature, music, and art because they produce realistic images, melodies, and literary works. Within the business domain, it streamlines processes through the automation of tasks, improvement of efficiency, and provision of insightful data. Customer service, product innovation, and marketing content are the main winners. Healthcare, meanwhile, is using generative AI to diagnose diseases, develop new drugs, and customise treatments. This approach promises individualised treatment regimens and inventive targets for new drugs.Benefits of generative AI include increased productivity and creativity, easier problem solving, and better quality of life. Contrarily, its more sinister potentialities include the spread of false infor-mation and disinformation, raising issues with democracy and public confidence. Due to task automation, job displacement is a significant concern. Another unsettling possibility is the production of dangerous or harmful content, such as deepfakes and autonomous weapons. In summary, the complex causes and consequences of generative AI highlight its revolutionary potential, necessitating close supervision to maximise its advantages and minimise its hazards.\nIn conclusion, generative AI's complex causes and effects highlight both its potential for positive change and its potential to pose significant societal challenges. Enabling its advantages while reducing its risks depends critically on its responsible development and application." }, { "figure_ref": [], "heading": "B. Mitigating Risks and Facing Oppositions", "publication_ref": [], "table_ref": [], "text": "When discussing the dangers of generative AI, a number of viable remedies and counterarguments surface. 
A multifaceted strategy is required to effectively mitigate these risks. First and foremost, it is crucial to establish and follow ethical standards for the development and use of generative artificial intelligence. To ensure responsible AI development, these guidelines should address issues of bias, privacy, secu-rity, and transparency. Furthermore, regulation is important. In delicate fields like politics, where deepfakes could be extremely dangerous, governments can enact regulations to regulate the application of generative AI. Additionally vital is public education. People can make wise decisions, spot false information, and effectively combat disinformation by developing a better understanding of generative AI. In addition, there is a keen pursuit of technical solutions. Researchers are working to create watermarks to distinguish genuine content from deepfakes as well as novel techniques for identifying and preventing fake news and disinformation. In addition to these general tactics, solutions specific to the industry must be taken into account. For instance, generative AI has the potential to completely transform the development of individualised treatment plans in the healthcare industry. However, in order to prevent bias, models must be trained on high-quality, diverse, and representative data. Similar to this, generative AI can be an effective tool in the financial industry for identifying fraud and money laundering, which calls for the installation of strong security measures to guard against abuse. It's important not to undervalue how ethical generative AI is. It is imperative to take into account factors like prejudice, privacy, security, and openness. Preventing discrimination and negative consequences requires addressing bias in training data. When realistic images and videos are created, privacy concerns surface, and precautionary steps like gaining consent are required. Transparency is key to building trust, with complete disclosure of data sources, algorithms, and potential hazards. Security measures are necessary to prevent the production of malicious content, such as deepfakes. A spectrum of opposing perspectives on generative AI exists amid these factors. It is argued by some that this technology is dangerous because it can produce hate speech, fake news, and autonomous weapons. It can also threaten jobs by automating jobs. Another issue to be concerned about is the possibility of bias in AI models. On the other hand, generative AI advocates view it as a potent instrument that can boost productivity and creativity by automating processes, resolving challenging issues, and enhancing general quality of life. It is capable of coming up with fresh concepts, producing imaginative writing, and providing original answers to pressing societal issues like creating novel medications and therapies, planning effective transit networks, and completely revamping the educational system. In conclusion, industry-specific precautions, laws, regulations, technical advancements, and ethical standards are all necessary to reduce the risks associated with generative AI. Though there are legitimate worries about its abuse and bias, generative AI has enormous potential to improve society overall by increasing creativity and problem-solving skills as well as general quality of life. To fully utilise this revolutionary technology, it will be necessary to strike a balance between these divergent viewpoints." }, { "figure_ref": [], "heading": "V. 
ADDITIONAL THOUGHTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Personal Observations", "publication_ref": [], "table_ref": [], "text": "The world is embracing AI and seeing its potential in what it can accomplish thanks to generative AI, which is revolutionising the field. For instance:\n1. Can be used to build lifelike digital twins of individuals that can be used for training in a variety of fields, including education, healthcare, and customer service.\n2. Make artificial data that can be used to train other AI models and enhance their performance.\n3. Customise learning experiences for each student by creating activities and content that are specific to their needs. This is how the renowned Khan Academy achieved it by using GPT 4 behind the scenes to assign each student to a personal tutor, which lowers costs and improves student learning. Noting that generative AI is still a relatively new technology and that its full potential is still unrealized is important. We can anticipate much greater effects on society as generative AI develops and becomes more advanced. which may have a significant impact on how society functions." }, { "figure_ref": [], "heading": "B. Call To Action", "publication_ref": [], "table_ref": [], "text": "With the potential to impact every aspect of our lives, from the trivial to the significant, generative AI has the power to drastically alter our world.A thorough and ongoing conversation must be had about its implications as its capabilities continue to advance at an unprecedented rate. This study has acknowledged the risks and difficulties associated with generative AI while also highlighting its enormous potential. Still, we only have a limited grasp of this emerging technology. Much more study is required to fully understand its implications and make sure it is used for the good of humanity. For this reason, we are putting out a call to action to researchers, decisionmakers in government, business executives, and the general public to join us in this vital project. In particular, we implore:\n• Scholars investigate the advantages, disadvantages, and biases of generative AI by delving deeper into its technical foundations. Additionally, we must work to create strategies for guaranteeing the explainability, safety, and security of generative AI systems.\n• Legislators should think about the societal, legal, and ethical ramifications of generative AI. Creating frameworks to handle matters like liability, ownership, and intellectual property is part of this.\n• Responsible practises in the development and application of generative AI should be adopted by industry leaders. This entails speaking openly with stakeholders about the application of generative AI and addressing any concerns they may have.\n• The general public should learn about generative AI and take part in conversations about its future. This entails exercising critical thought when utilising generative AI and being aware of its possible advantages and disadvantages.\nTogether, we can make sure that generative AI is put to good use and contributes to the development of a more fair, just, and prosperous future for all. Let's not watch as this technological revolution unfolds as spectators. Let's design a future in which generative AI drives progress and positive change." }, { "figure_ref": [], "heading": "VI. 
CONCLUSION", "publication_ref": [], "table_ref": [], "text": "Generative AI is poised to redefine creativity, reshape industries, and change the very foundation of society as we know it. It stands at the cusp of a transformative era. It has broad and far-reaching implications that include both tremendous potential and significant challenges. Generative AI has the potential to unleash previously unheard-of levels of creativity and productivity. It can free up human ingenuity for more creative and strategic endeavours by automating tasks and producing new ideas. In disciplines like drug discovery, materials science, and design, generative AI can quicken the pace of research and result in discoveries that would be unthinkable otherwise. Furthermore, generative AI holds promise for democratising creativity and opening it up to a larger audience. Generative AI can enable people to express themselves in captivating new ways by offering tools to help with ideation, execution, and refinement. This might result in an explosion of artistic expression and a more inclusive and diverse creative community. But the emergence of generative AI also brings up a number of issues. The possibility of losing one's job is among the most urgent. Work that is currently done by humans will probably be automated by generative AI as it develops. Social unrest and widespread unemployment could result from this. The potential for abuse is another issue. Disinformation such as deepfakes and fake news can be produced using generative AI. This could sow discord in society and erode trust in institutions. It is crucial to develop ethical standards for the creation and application of generative AI in order to reduce these risks. These rules ought to cover things like accountability, transparency, and bias. In the end, generative AI has significant and far-reaching effects. It could be advantageous to society or detrimental. The secret is to maximise its positive effects while reducing its negative ones.\nGenerative AI is a two-edged sword, to sum up. While it also presents risks to jobs and social stability, it has the potential to bring about a more creative and prosperous world. Our use of generative AI will determine how it develops in the future." } ]
Generative artificial intelligence (AI) is revolutionizing technology by automatically producing highly tailored, lifelike content across a variety of media. Although this technology has the power to transform entire industries, it also poses social, legal, ethical, and security risks. This paper offers a thorough analysis of the field's current state and future potential, exploring practical uses of generative AI in research, product development, and marketing. It addresses important developments in the industry, such as the emergence of new players, the rapid expansion of text-generation platforms, and the growing acceptance of creative generative AI, and it highlights the rising demand for user-friendly generative AI tools. The paper then discusses key ethical concerns around misinformation, bias, job displacement, and malicious use of generative AI, and proposes mitigating measures such as ethical guidelines, legal frameworks, public awareness campaigns, technological safeguards, and industry-specific interventions. It offers a balanced assessment that weighs both the advantages and the disadvantages of generative AI. The paper concludes by emphasizing the importance of responsible development, calling for ongoing study and stakeholder dialogue to ensure that generative AI has a beneficial social impact while its negative effects are minimized.
The Rise of Creative Machines: Exploring the Impact of Generative AI
[ { "figure_caption": "Fig. 1. Generative AI Products (a16z.com); Fig. 2. [caption not recovered]", "figure_data": "", "figure_id": "fig_0", "figure_label": "1-2", "figure_type": "figure" }, { "figure_caption": "Fig. 3. Popularity Via Usage (a16z.com)", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4. Risks Of Generative AI (towardsdatascience.com)", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" } ]
Saad Shaikh; Sakshi Mhaske; Rajat Bendre; Ankita Aggarwal; Ajeenkya D Y Patil
[ { "authors": "Chris Stokel-Walker; Richard Van Noorden", "journal": "Springer Nature Limited", "ref_id": "b0", "title": "The Promise And The Peril of Generative AI", "year": "2023-02" }, { "authors": "David Baidoo-Anu; Leticia Owusu Ansah", "journal": "Journal of AI", "ref_id": "b1", "title": "Education in the Era of Generative Artificial Intelligence (AI): Understanding the Potential Benefits of ChatGPT in Promoting Teaching and Learning", "year": "2023-12" }, { "authors": "Weng Marc Lim; Asanka Gunasekara; Jessica Leigh Pallant; Jason Ian Pallant; Ekaterina Pechenkina", "journal": "The International Journal of Management Education", "ref_id": "b2", "title": "Generative AI and the Future of Education: Ragnarök or Reformation? A Paradoxical Perspective from Management Educators", "year": "2023-07" }, { "authors": "Yogesh K. Dwivedi; Nir Kshetri; Laurie Hughes; Emma Louise Slade; Anand Jeyaraj; Arpan Kumar Kar; Abdullah M. Baabdullah; Alex Koohang; Vishnupriya Raghavan; Manju Ahuja; Hanaa Albanna; Mousa Ahmad Albashrawi; Adil S. Al-Busaidi; Janarthanan Balakrishnan; Yves Barlette; Sriparna Basu; Indranil Bose; Laurence Brooks; Dimitrios Buhalis; Lemuria Carter; Ryan Wright", "journal": "International Journal of Information Management", "ref_id": "b3", "title": "So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice, and policy", "year": "2023-08" }, { "authors": "Mladan Jovanovic; Mark Campbell", "journal": "IEEE", "ref_id": "b4", "title": "Generative Artificial Intelligence: Trends and Prospects", "year": "2022-10" }, { "authors": "Jonas Oppenlaender; Aku Visuri; Ville Paananen; Rhema Linder; Johanna Silvennoinen", "journal": "", "ref_id": "b5", "title": "Text-to-Image Generation: Perceptions and Realities", "year": "2023-05-02" }, { "authors": "Chen Chen; Jie Fu; Lingjuan Lyu", "journal": "", "ref_id": "b6", "title": "A Pathway Towards Responsible AI Generated Content", "year": "2023-03-17" } ]
[]
2024-03-10
[ { "figure_ref": [ "fig_0" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b2", "b3", "b6", "b3", "b4", "b5", "b6", "b7", "b9", "b10", "b12", "b13", "b16", "b17", "b18", "b19", "b13", "b1", "b14" ], "table_ref": [], "text": "D IGITAL image forensics has been drawing more and more attention in the scientific and industrial communities for the urgent needs to detect forged digital images [1]. Forged digital images are not only used for innocent purposes, but also used for disrupting public opinion, political order or other criminal aims by ill-disposed forgers [2]. Copy-move forgery is one common manipulation among various digital image forgeries, and it duplicates regions in the same images in order to hide or reinforce objects of interest. Copy-move This work was supported in part by NSFC under Grant 62102010, and in part by the Fundamental Research Funds for the Central Universities under Grant 3282023016.\nY. Liu, C. Xia, S. Xiao, Y. Zhang are with Beijing Electronic Science and Technology Institute, Beijing 100070, China (e-mail: liuyaqi@besti.edu.cn; xiachao@besti.edu.cn; xiaosong@mail.xidian.edu.cn).\nQ. Guan is with the Computer Engineering College, Jimei University, Xiamen 361021, China (e-mail: 258817567@qq.com).\nW. Dong is with the State Key Laboratory of Integrated Service Network, Xidian University, Xi'an 710071, China (e-mail: wqdong@xidian.edu.cn).\nN. Yu is with the CAS Key Laboratory of Electro-magnetic Space Information, University of Science and Technology of China, Hefei 230026, China (e-mail: ynh@ustc.edu.cn). forgery detection, which aims to identify duplicated regions, has always been a hot topic in digital image forensics [3].\nConventional copy-move forgery detection methods, which are designed based on hand-crafted features, have dominated this field in the past [3]. While in recent years, deep learning based methods have been in the ascendant [4]- [7]. As the pioneering approach [4], ButsterNet builds an end-to-end trainable deep neural network which features a two-branch architecture followed by a fusion module. It not only detects duplicated regions but also distinguishes source/target regions. In [5], Dense-InceptionNet was constructed for copy-move forgery detection by combining pyramid feature extractor blocks to extract the multi-dimensional and multi-scale densefeatures. In [6], Chen et al. proposed two serially constructed subnetworks: one for copy-move similarity detection, and the other for source/target region distinguishment based on similarity detection results. In [7], Liu et al. concentrated on the similarity detection problem, and proposed a twostage framework which combines self deep matching and keypoint matching through a proposal selection module. All these methods are constructed based on Convolutional Neural Networks (CNN). Transformer-style [8]- [10] and MLP-style (Multi-Layer Perceptron) [11]- [13] networks recently attract an ever increasing attention for many computer vision tasks. In the field of copy-move forgery detection, the feasibility of constructing Transformer-style and MLP-style backbones is still an open issue. We construct three styles (i.e., CNN, Transformer, MLP styles) of networks with a novel pluggable hybrid decoder, making a comparative analysis. 
Besides, deep learning based copy-move forgery detection methods face a major problem: they tend to rely on the synthetic training datasets and have a poor generalization ability in realistic testing datasets which may have different distributions with training datasets. We propose to use continual learning mechanisms [14]- [17] to alleviate this problem.\nIn this paper, we propose a Transformer-style copy-move forgery detection network, i.e., CMFDFormer, and a novel PCSD (Pooled Cube and Strip Distillation) continual learning framework for copy-move forgery detection. The main architecture is shown in Fig. 1. CMFDFormer mainly consists of a MiT (Mix Transformer) feature extractor and a PHD (Pluggable Hybrid Decoder) mask prediction network. Our motivation of adopting Transformer is that copy-move forgery detection needs to compare all pairs of blocks or regions in one image, the accumulated affinity matrix computation using keyquery multiplication in Transformer is helpful for capturing visual similarity features. And the MiT backbone is selected based on comprehensive analyses among ResNet (CNN-style) [18], CycleMLP (MLP-style) [19], and MiT (Transformerstyle) [20]. Then, we propose a PHD network to generate the predicted mask making use of backbone features. In PHD, we utilize self-correlation computation to detect similar features, and construct a hierarchical feature integration block to get a feature map C with rich hierarchical information. And, we propose a multi-scale cycle fully-connected block making use of Cycle FCs with different dilation rates to investigate multi-scale information from C. Finally, we construct a mask reconstruction block to get the predicted mask. This network is named as Pluggable Hybrid Decoder (PHD) for that it takes advantage of step-by-step hybrid blocks and is adaptable for different style backbones to achieve comparable performance.\nBesides, we propose a PCSD continual learning framework for copy-move forgery detection. Deep learning based copymove forgery detection methods are confronted with a severe domain shift problem. Specifically, the deep learning based methods heavily rely on the training datasets, and can not achieve satisfied results on different testing datasets. While simply finetuning on different testing datasets can cause catastrophic forgetting [14]. In another word, the finetuned model can not simultaneously achieve good performance on former data and finetuning data. (Finetuning is commonly seen in the task of image forgery localization which also faces the catastrophic forgetting problem [2].) We propose a PCSD continual learning framework to make sure our model can achieve comparable performance on both old tasks and new tasks. Our PCSD continual learning framework is different from other continual learning methods in which they adopt intermediate features from feature extractors for knowledge distillation [15]. We find that if we use backbone intermediate features, it is difficult to converge for continual learning in copy-move forgery detection. Thus, our framework adopts features in PHD after self-correlation computation. Besides, we design a PCSD loss in which cube pooling and strip pooling are simultaneously conducted to capture features from both multi-scale square regions and long-range banded regions. In summary the main contributions of our work are three-fold: " }, { "figure_ref": [], "heading": "II. 
RELATED WORK", "publication_ref": [], "table_ref": [], "text": "In this section, we briefly review the state-of-the-art copymove forgery detection methods, Transformer-style networks, MLP-style networks and continual learning mechanisms which are the key techniques researched in our work." }, { "figure_ref": [], "heading": "A. Copy-Move Forgery Detection", "publication_ref": [ "b20", "b21", "b23", "b24", "b27", "b2", "b28", "b32", "b2", "b28", "b29", "b30", "b31", "b3", "b6" ], "table_ref": [], "text": "In recent decades, copy-move forgery detection has been a hot topic in digital image forensics. Different from other image forgery detection and localization tasks which need to detect high-level [21] or low-level [22]- [24] inconsistencies, copy-move forgery detection detects visual similar regions in candidate images. Conventional copy-move forgery detection methods can be categorized into two groups: 1) dense-field (or block-based) methods [25]- [28] and 2) sparse-field (or keypoint-based) methods [3], [29]- [33]. Dense-field methods divide investigated images into regular and overlapped blocks, and adopt numerous hand-crafted block features. Although dense-field methods are more accurate, the robustness against distortions still needs to be improved, and densefield methods also have higher complexity. In sparse-field methods, SIFT (Scale Invariant Feature Transform) [3], [29], [30] and SURF (Speeded-Up Robust Features) [31], [32] are commonly adopted for sparse feature extraction. Sparse-field methods are more robust against geometric transformations than dense-field methods. The performance of sparse-field methods may drop when detecting small or smooth copymove forged regions. Although tremendous progress has been made in the field of conventional copy-move forgery detection methods, their hand-crafted features may not be optimal for downstream tasks, and there are many heuristics or manually tuned thresholds [4], which may limit their performance. Thus, deep learning based copy-move forgery detection has drawn more attention recently, and some representative works are discussed in Section I [4]- [7]." }, { "figure_ref": [], "heading": "B. Transformer-Style Networks", "publication_ref": [ "b0", "b6", "b33", "b34", "b35", "b8", "b7", "b19", "b36", "b37", "b38", "b39", "b40", "b1", "b23" ], "table_ref": [], "text": "Convolutional Neural Networks (CNNs) have been the mainstream in computer vision and other downstream tasks for years [1], [7]. Inspired by the major successes in natural language processing, Transformers [34] are adopted into the computer vision community. As the pioneer work, ViT [35] splits the input image into sequences of image patches, and builds pure Transformer blocks for image classification. ViT is also adopted for dense field prediction, e.g., SETR adopts ViT to extract features for semantic segmentation and incorporates a CNN decoder [36]. ViT has two inevitable limitations: single-scale low-resolution feature maps and high computational complexity for large images. Thus, researchers proposed different solutions to address these limitations. Wang et al. [9] extended ViT with pyramid structures named as Pyramid Vision Transformer (PVT). Liu et al. [8] proposed a hierarchical Transformer, i.e., Swin Transformer, whose representation is computed with shifted windows. Xie et al. [20] presented a hierarchically structured Transformer encoder without positional encoding, and a lightweight multi-layer MLP decoder. 
Transformers have been adopted for various downstream tasks, e.g., object detection [37], semantic segmentation [38], object tracking [39], super-resolution [40], object re-identification [41], and splicing detection [2], [24]. The application of Transformer for copy-move forgery detection still needs further research." }, { "figure_ref": [], "heading": "C. MLP-Style Networks", "publication_ref": [ "b10", "b12", "b11", "b41", "b42", "b43", "b18" ], "table_ref": [], "text": "In MLP-style networks, almost all network parameters are learned from MLP, and these networks can achieve comparable performance. MLP-Mixer [11] shows that neither convolution nor attention is necessary, the pure combination of MLPs applied independently to image patches and MLPs applied across patches can achieve promising results. Subsequently, Res-MLP [13] is constructed with residual MLP, gMLP [12] is designed based solely on MLPs with gating, S 2 -MLP [42] uses spatial-shift MLP for feature exchange, ViP [43] builds a Permute-MLP layer for spatial information encoding to capture long-range dependencies. AS-MLP [44] pays more attention to capture local dependencies by axially shifting channels of feature maps. CycleMLP [19] utilizes the Cycle Fully-Connected Layer (Cycle FC) which has linear complexity the same as channel FC and a larger receptive field than Channel FC. Their experimental results indicate an interesting issue that an attention-free architecture can also serve as a general vision backbone. In this paper, we verify the applicability of the MLP-style network on copy-move forgery detection." }, { "figure_ref": [], "heading": "D. Continual Learning", "publication_ref": [ "b44", "b14", "b45", "b47", "b13", "b48", "b49", "b50", "b51", "b53", "b54", "b56", "b57" ], "table_ref": [], "text": "Continual learning is also referred to as lifelong learning, sequential learning, and incremental learning [45]. The starting point of continual learning which is also the core of continual learning is to learn without catastrophic forgetting: performance on a previously learned task or domain should not significantly degrade as new tasks or domains are added [15]. According to how task specific information is stored and used throughout the learning process, continual learning methods can be broadly divided into three categories: replay methods, regularization-based methods, parameter isolation methods. Replay methods store previous task samples or generate pseudo-samples, and replay these samples while learning a new task to alleviate forgetting [46]- [48]. Regularizationbased methods introduce an extra regularization term in the loss function, maintaining previous knowledge when learning new tasks [14], [49]. Parameter isolation methods dedicate different model parameters to each task, to prevent any possible forgetting [50], [51]. Besides image classification, continual learning is adopted for numerous downstream tasks, e.g., object detection [52]- [54], semantic segmentation [55]- [57], instance segmentation [58]. Copy-move forgery detection aiming for pixel-level binary classification also faces a severe domain shift problem when handling different tasks. We attempt to alleviate this problem by continual learning." }, { "figure_ref": [ "fig_0" ], "heading": "III. METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose CMFDFormer for copy-move forgery detection and a PCSD continual learning framework. As shown in Fig. 
1, CMFDFormer mainly consists of a backbone feature extractor, i.e., MiT, and a mask prediction network, i.e., PHD. In section III-A, MiT is introduced, and in section III-B, PHD is presented. In section III-C, we introduce our PCSD continual learning framework." }, { "figure_ref": [ "fig_0" ], "heading": "A. Mix Transformer Encoder", "publication_ref": [ "b19", "b4" ], "table_ref": [], "text": "Copy-move forgery detection tries to compare all pairs of regions in one image, and find suspected duplicated regions. The basic computation procedure in self attention of Transformer is affinity matrix computation using key-query multiplication. The affinity matrix computation procedures are accumulated layer by layer which may be helpful to find visual similar regions. Mix Transformer (MiT) is a kind of Transformer network [20] which can provide multi-scale hierarchical feature maps. As shown in Fig. 1, MiT is built based on a hierarchical architecture with four Transformer modules and four corresponding output feature maps. Each Transformer module is composed of an overlap patch merging layer and several Transformer blocks. Each Transformer block is constituted by efficient multi-head self-attention and positional-encoding-free Mix-FFN (Feed-Forward Network).\n1) Hierarchical Architecture: MiT constructs a hierarchical architecture which can generate multi-level multi-scale features. These features contain both high-resolution low-order features and low-resolution high-order features. Specifically, with an H ×W ×3 input image, we can generate a feature map F i with a resolution of H 2 i+1 × W 2 i+1 × C i , and i ∈ {1, 2, 3, 4}. Each feature map is output by a Transformer module.\n2) Overlap Patch Merging: Overlap patch merging is designed to preserve the local continuity around splitted patches, and it gradually degrades the resolution of feature maps. In another word, the adjacent patches have overlapped regions which can avoid information fragmentation caused by nonoverlap patch splitting. Let K denote the patch size, S denote the stride between two adjacent patches, and P a denote the padding size. MiT sets K = 7, S = 4, P a = 3 in \"Transformer module 1\", and sets K = 3, S = 2, P a = 1 in \"Transformer module 2 -4\" to perform overlap patch merging which can be implemented by convolution operations.\n3) Efficient Multi-head Self-attention: The large spatial scales of query, key and value in multi-head self-attention can increase computational burden of Transformer-style networks. Efficient multi-head self-attention reduces the spatial scale of key and value before the attention operation. The efficient multi-head self-attention in the jth stage is formulated as:\nf ems (Q j , K j , V j ) = ∪(head 1 , • • • , head Nj )W O j (1)\nhead nj = Atten(Q j W Q nj , f sr (K j )W K nj , f sr (V j )W V nj )(2\n) where Q j , K j , V j are the input query, key and value at the jth stage (i.e., the jth Transformer block), ∪(• • • ) denotes the concatenation operation along the channel dimension,\nW O j ∈ R Cj ×Cj , W Q nj ∈ R Cj ×dj , W K nj ∈ R Cj ×dj and W V nj ∈ R Cj ×dj\nare parameters of linear transformation. C j denotes the channel of feature maps at the jth stage, N j is the head number of the attention layer, n j is the corresponding head index, the dimension of each head is d j = Cj Nj . The spatial reduction is computed as:\nf sr (x) = Norm(Reshape(x, R j )W S j )(3)\nwhere x ∈ R (hj ×wj )×Cj is the input sequence, R j denotes the reduction ratio of the attention layers at stage j. 
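The Reshape and projection terms of Eq. (3) are unpacked in the next paragraph. As a concrete reference point, the sketch below is a minimal, literal PyTorch-style reading of Eq. (3) under the paper's (batch, h_j·w_j, C_j) token layout; the module and variable names are ours rather than from any released code, and practical MiT implementations often realize the same reduction with a strided convolution over the reshaped 2-D feature map.

```python
import torch
import torch.nn as nn

class SpatialReduction(nn.Module):
    """Literal sketch of f_sr in Eq. (3): Reshape -> linear W_S -> LayerNorm."""
    def __init__(self, dim: int, reduction_ratio: int):
        super().__init__()
        self.r = reduction_ratio                          # R_j in the paper
        self.w_s = nn.Linear(dim * reduction_ratio, dim)  # W_S: (R_j * C_j) -> C_j
        self.norm = nn.LayerNorm(dim)                     # Norm(.) in Eq. (3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, h_j * w_j, C_j); merge every R_j consecutive tokens into one.
        b, n, c = x.shape
        assert n % self.r == 0, "sequence length must be divisible by R_j"
        x = x.reshape(b, n // self.r, self.r * c)         # Reshape(x, R_j)
        return self.norm(self.w_s(x))                     # (B, h_j * w_j / R_j, C_j)
```

Only the key and value pass through this reduction (Eq. (2)), so the query-key affinity matrix shrinks from (h_j w_j)^2 entries to (h_j w_j)^2 / R_j, which is where the efficiency gain comes from.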
The function Reshape(x, R j ) reshapes x to the size of\nhj ×wj Rj × (R j × C j ), and W S j ∈ R (Rj ×Cj )×Cj is a linear projection which reduces x to the dimension of C j , Norm(•) denotes layer normalization. With transformed query q nj ∈ R (hj ×wj )×dj , key\nk nj ∈ R h j ×w j R j ×dj , value v nj ∈ R h j ×w j R j\n×dj at hands, the self-attention can be computed as:\nAtten(q nj , k nj , v nj ) = Softmax( q nj k T nj d j )v nj(4)\n4) Positional-Encoding-Free Design: MiT provides a kind of positional-encoding-free design by introducing Mix-FFN in which the effect of zero padding is considered to leak location information by directly using a 3 × 3 convolution operation in the feed-forward network (FFN). Mix-FFN is computed as: (5) where x in is the feature map from efficient multi-head selfattention, MLP(•) denotes channel-wise multi-layer perceptron, GELU(•) is the GELU (Gaussian Error Linear Unit) activation function.\nx out = MLP(GELU(Conv 3×3 (MLP(x in )))) + x in" }, { "figure_ref": [ "fig_0", "fig_0", "fig_1", "fig_1", "fig_1" ], "heading": "B. Pluggable Hybrid Decoder", "publication_ref": [ "b58", "b59", "b58", "b59", "b18" ], "table_ref": [], "text": "With multi-scale hierarchical features F i (i = 1, 2, 3, 4) extracted by the backbone feature extractor at hand, we propose a Pluggable Hybrid Decoder (PHD) network to find matched features and reconstruct suspected regions. \"Pluggable\" means our PHD can be assembled with different backbones, and \"Hybrid\" means PHD integrates multiple architectures with different flavors. As shown in Fig. 1, four groups of feature maps are passed through self-correlation computation to compute correlation maps; then, Feature Pyramid Network (FPN) [59] and Pyramid Pooling Module (PPM) [60] are constructed for hierarchical feature integration to get a concatenated tensor C with rich hierarchical information; C is further passed through a multi-scale Cycle fully-connected block for further multiscale information investigation; finally, a mask reconstruction block is constructed to get the predicted mask. 1) Self-correlation Computation: F 2 , F 3 , F 4 are computed under the same self-correlation computation procedure. F 1 is input for self-correlation computation after being downsampled by an overlap patch merging layer with K = 3, S = 2, P a = 1 for the tradeoff between memory costs and the localization performance. The self-correlation computation procedure aims to compute the similarity between every two locations in the feature maps. Firstly, L2 normalization is conducted at each location m of F i :\nFi(m) = f L2 (F i(m) ) = F i(m) ||F i(m) || 2(6)\nwhere || • || 2 denotes L2 norm. Then, the scalar product is computed among every pair of locations:\nC i(m,n) = ( Fi(m) ) T Fi(n)(7)\nby accumulating the computed results, we can get the correlation map hi×wi) . In this paper, we use subscripts with brackets to index the feature map. In fact, a subset of C i contains sufficient information to decide which feature is matched, while the majority of scores in C i are weak-correlated. C i is reshaped to the scale of h i × w i × (h i × w i ), and is sorted along the (h i × w i ) channels, and top-T values are selected:\nC i = {C i(m,n) } ∈ R (hi×wi)×(\nCi(m,n,1:T ) = Top T(Sort(C i(m,n,:) ))(8)\nA monotonic decreasing curve with an abrupt drop at some point should be observed along the T channels, as long as Ci(m,n) is matched and a proper T is selected. 
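To make the self-correlation block concrete, the snippet below is a minimal PyTorch-style sketch of Eqs. (6)-(8) for a single feature map, assuming a (B, C, H, W) layout; the function and argument names are illustrative and not taken from the authors' code. The zero-out and renormalization step described next would then be applied to the returned top-T tensor.

```python
import torch
import torch.nn.functional as F

def self_correlation_topk(feat: torch.Tensor, top_t: int) -> torch.Tensor:
    """Eqs. (6)-(8): all-pairs cosine similarity, then per-location top-T scores.

    feat: (B, C, H, W) backbone feature map F_i; requires top_t <= H * W.
    Returns a (B, H, W, top_t) tensor of sorted correlation scores.
    """
    b, c, h, w = feat.shape
    x = feat.flatten(2)                          # (B, C, H*W)
    x = F.normalize(x, p=2, dim=1)               # Eq. (6): L2-normalize each location
    corr = torch.einsum('bcm,bcn->bmn', x, x)    # Eq. (7): scalar products, (B, HW, HW)
    # Eq. (8): torch.topk returns values already sorted in descending order,
    # matching Sort followed by Top-T; the trivial self-match (n == m) is kept.
    topk = torch.topk(corr, k=top_t, dim=-1).values
    return topk.reshape(b, h, w, top_t)
```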
Zero-out and normalization operations are conducted on Ci to limit correlation values to certain ranges and filter redundant values:\nCi = f L2 (Max( Ci , 0))(9)\n2) Hierarchical Feature Integration: After self-correlation computation, we can get a set of correlation maps Ci (i = 1, 2, 3, 4) with different scales at different levels. How to integrate these correlation maps becomes a key problem. Here we adopt FPN [59] and PPM [60] for hierarchical multi-scale information investigation. As shown in Fig. 1, C1 , C2 , C3 are respectively recalibrated by 1 × 1 convolution operations. C4 is recalibrated by a PPM module. In the PPM module, four parallel average pooling operations are conducted on C4 with pooling scales as {1, 2, 3, 6}, and four parallel 1 × 1 convolution operations are followed. Then, the computed four sets of feature maps in PPM are resized to the same size as C4 , and integrated by a 3 × 3 convolutional layer.\nLet C ′ i denote the recalibrated correlation maps, low-level features are further integrated with high-level features by sequential upsampling operations f ×2 , i.e., 3) Multi-Scale Cycle Fully-Connected Block: Cycle Fully-Connected layer (Cycle FC) is proposed in CycleMLP [19] to introduce larger receptive fields for MLP. As shown in Fig. 2, Channel FC which is commonly seen in MLP-like networks applies a weighting matrix along the channel dimension on a fixed position (m, n), while Cycle FC introduces a receptive field of (S H , S W ), S H is the stepsize along with the height dimension, S W is the stepsize along with the width dimension. In Fig. 2, we show a simple example of Cycle FC whose S H is 3 and S W is 1. In the previous step, we can get the concatenated feature map C ∈ R Hc×Wc×Cin , where H c = H/8 and W c = W/8. The Cycle FC operator can be formulated as below:\nC ′′ 3 = C ′ 3 +f ×2 (C ′ 4 ), C ′′ 2 = C ′ 2 + f ×2 (C ′′ 3 ), C ′′ 1 = C ′ 1 + C ′′ 2 . Then, C ′′ 1 , C ′′ 2 , C ′′\nCycleFC( C) (m,n,:) = Cin c=0 C (m+δm(c),n+δn(c),c) • W mlp (c,:) + b(10)\nwhere W mlp ∈ R Cin×Cout and b ∈ R Cout are parameters of Cycle FC. δ m (c) and δ n (c) are the spatial offset values of the two axes on the cth channel, which are defined as below:\nδ m (c) = (c modS H ) -1(11)\nδ n (c) = (⌊ c S H ⌋modS W ) -1(12)\nMaking use of Cycle FC, we design a multi-scale Cycle FC block. The receptive field of Cycle FC can be enlarged by setting a larger dilation rate with a small kernel. As shown in Fig. 2, a small kernel S H × S W of 3 × 1 with the dilation rate as 2 has an obviously larger receptive field along H × W . Multi-scale Cycle FC block consists of nine parallel multi-scale Cycle FCs, which have stepsizes S H ×S W \nC = Conv 3×3 ( C + f linear ( 9 r=1 β r CycleFC r ( C))) (13)\nwhere β r is a learnable parameter, f linear is a channelwise linear transform, and C is the output correlation map reinforced by multi-scale Cycle FC. Conv 3×3 denotes a 3 × 3 convolution operation followed by a ReLU function.\n4) Mask Reconstruction: In order to reconstruct the final predicted mask from C, we construct a simple mask reconstruction network:\nM = f convseg (f upscale ( C))(14)\nf upscale ( C) = Conv 1×1 (f ×2 (Conv 1×1 (f ×2 (Conv 1×1 (f ×2 ( C))))))(15)\nwhere M ∈ R H×W ×2 is the predicted mask, f convseg is a 1 × 1 convolutional layer with softmax, f upscale consists of three 1 × 1 convolutional layers Conv 1×1 and three bilinear upsampling layers f ×2 . Conv 1×1 is followed by an activation function of ReLU." }, { "figure_ref": [ "fig_3" ], "heading": "C. 
PCSD Continual Learning", "publication_ref": [ "b16", "b60" ], "table_ref": [], "text": "Deep learning based copy-move forgery detection faces the performance drop when processing new data which has different distributions with the training data. We propose a PCSD continual learning framework for CMFDFormer to keep comparable performance on both new data and former data. In continual learning, a distillation loss is commonly formulated between the predictions of the previous and current models to alleviate catastrophic forgetting. The pooling operation plays a key role in designing the distillation loss to transfer knowledge. We design a PCSD loss which integrates cube pooling and strip pooling [17], [61], to capture information from both square regions and long-range banded regions. As shown in Fig. 3, the cube pooling mainly contains multi-scale average pooling on the spatial dimension and average pooling on the channel dimension, and strip pooling conducts long narrow pooling along the row and column. Besides, continual learning methods based on knowledge distillation in other computer vision tasks often adopt intermediate features from feature extractors for knowledge distillation. The same setting in the copy-move forgery detection network is difficult to converge. In our PCSD continual learning, we leverage the predicted mask and the feature maps of intermediate layers after selfcorrelation computation in PHD for knowledge distillation.\nThe predicted mask and PHD intermediate feature maps\n{X k |k = 1, • • • , 6} = {M, C, C ′ 1 , C ′ 2 , C ′ 3 , C ′ 4 }\nare selected for distillation. Let K denote the total number of distilled feature maps {X k }, and K = 6 in our formulation. M is the predicted mask, C is the output feature map of multiscale Cycle FC block in Eq. ( 13), {C ′ i } are the recalibrated correlation maps in the hierarchical feature integration module. We conduct cube pooling on X k , i.e., multi-scale average pooling on the spatial dimension and average pooling on the channel dimension. The pooled feature XT,k,p , XS,k,p of the teacher model and the student model can be calculated by the average pooling operation ⊙:\nXT,k,p = P p ⊙ X T,k(16)\nXS,k,p = P p ⊙ X S,k\nwhere P p denotes the pth average pooling kernel, the stride is set to 1, the superscript T indicates the teacher model, and S indicates the student model. For multi-scale average pooling on the spatial dimension, the size of kernel P p belongs to P s = {4, 8, 12, 16, 20, 24}. For average pooling on the channel dimension, P c = {3}. The knowledge distillation loss of cube pooling can be formulated as follows:\nL cpkd = 1 K 1 P K k=1 P p=1 || XT,k,p -XS,k,p || 2(18)\nwhere P = |P s | + |P c | denotes the number of average pooling kernels.\nMulti-scale average pooling on the spatial domain belongs to conventional spatial pooling which has a square shape, and it probes the input feature maps within square windows which limit their flexibility in capturing anisotropy context. Especially, in copy-move forgeries, the duplicated regions may distribute discretely or have a long-range banded structure. Although the large square pooling window can contain longrange information, it inevitably incorporates contaminating information from irrelevant regions. 
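Before turning to strip pooling, the cube-pooling half of the distillation objective (Eqs. (16)-(18)) can be sketched as below. This is an illustrative PyTorch-style reading with assumed tensor layouts and helper names rather than the authors' implementation: mean-squared error stands in for the written L2 distance, the teacher is detached, and spatial kernels larger than a given feature map are simply skipped.

```python
import torch
import torch.nn.functional as F

SPATIAL_KERNELS = (4, 8, 12, 16, 20, 24)   # P_s in the paper
CHANNEL_KERNEL = 3                         # P_c

def cube_pool_distill_loss(teacher_feats, student_feats):
    """Sketch of L_cpkd in Eqs. (16)-(18) over the K distilled PHD feature maps."""
    total, count = 0.0, 0
    for x_t, x_s in zip(teacher_feats, student_feats):      # each is (B, C, H, W)
        pooled = []
        for k in SPATIAL_KERNELS:                            # cube pooling, spatial part
            if min(x_t.shape[-2:]) >= k:
                pooled.append((F.avg_pool2d(x_t, k, stride=1),
                               F.avg_pool2d(x_s, k, stride=1)))
        # cube pooling, channel part: average over groups of CHANNEL_KERNEL channels
        pooled.append((F.avg_pool3d(x_t.unsqueeze(1), (CHANNEL_KERNEL, 1, 1), stride=1),
                       F.avg_pool3d(x_s.unsqueeze(1), (CHANNEL_KERNEL, 1, 1), stride=1)))
        for p_t, p_s in pooled:
            total = total + F.mse_loss(p_s, p_t.detach())    # || X_T - X_S || term
            count += 1
    return total / max(count, 1)                             # approximates the 1/(K*P) average
```

The strip-pooling term L_spkd introduced next is computed in the same teacher-student fashion, but each feature map is first divided into blocks that are pooled with long, narrow row and column kernels.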
Strip pooling considers a long but narrow kernel, and can capture long-range information with less contaminating information.\nWe adopt a multi-scale strip pooling architecture which divides the input feature map X k into multi-scale blocks and conduct strip pooling on each block. The feature maps after multi-scale strip pooling can be formulated as:\nXT,k = ∪({Ψ q (X T,k )})(19)\nXS,k = ∪({Ψ q (X S,k )})(20)\nwhere ∪(• • • ) denotes the concatenation operation along the channel dimension, and q = 1, • • • , Q denotes how many blocks we divide, e.g., there is only 12 block when q = 1, there are 2 2 blocks when q = 2. In our implementation,\nQ = 2 for k = 2, • • • , 6; Q = 4 for k = 1, i.e., q = 1, 2, 3, 4 for M.\nThe qth strip pooled feature map can be formulated as:\nΨ q (X k ) = ⊔(Φ(X k,0,0 ), • • • , Φ(X k,q-1,q-1 ))(21)\nwhere X k,m,n = X k (mH k /q:(m+1)H k /q,nW k /q:(n+1)W k /q,:) is a sub-region of X k with the scale as\nH k q × W k q × C k , ∀m = 0, • • • , q -1, ∀n = 0, • • • , q -1. ⊔(• • • ) denotes concatenation over the channel axis, e.g., Ψ q (X k ) ∈ R (H k +W k )×q×C k , Φ(X k,m,n ) ∈ R ( H k q + W k q )×C k .\nThe embedded feature of the (m, n) block can be computed as:\nΦ(X k,m,n ) = ⊔(Q w ⊙ X k,m,n , Q h ⊙ X k,m,n ) (22\n)\nwhere Q w denotes the width-pooled kernel and Q h denotes the height-pooled kernel. The knowledge distillation loss of strip pooling can be formulated as follows:\nL spkd = 1 K K k=1 || XT,k -XS,k || 2(23)\nThus, the final loss in the pooled cube and strip distillation stage is formulated as follows:\nL = L ce + λ(L cpkd + L spkd ) (24\n)\nwhere λ is the hyper parameter of distillation loss weight, and L ce is the cross-entropy loss:\nL ce = H m=1 W n=1 C M c=1 G (m,n,c) log(M (m,n,c) )(25)\nwhere G (m,n,c) denotes the ground-truth value at position (m, n, c), M (m,n,c) denotes the predicted value at (m, n, c), C M denotes the channel number of the predicted mask." }, { "figure_ref": [], "heading": "IV. EXPERIMENTAL EVALUATION", "publication_ref": [], "table_ref": [], "text": "In this section, we first introduce the training and evaluation details in section IV-A, then ablation study is conducted in section IV-B to select an appropriate network architecture and a continual learning scheme, and the proposed framework is compared with the state-of-the-art methods in section IV-C." }, { "figure_ref": [ "fig_4" ], "heading": "A. Training and Evaluation Details", "publication_ref": [ "b3", "b6" ], "table_ref": [ "tab_2" ], "text": "The proposed network, training/continual learning/testing scripts and all compared backbones are implemented based on MMSegmentation1 . All the backbone networks are initialized by parameters pretrained on ImageNet 2 . As for the ablation study of continual learning, we mainly adopt two synthetic copy-move forgery datasets, i.e., USCISI [4], and BESTI3 [7], which have sufficient training images. USCISI has 80, 000 training images, 10, 000 validation images and 10, 000 testing images. BESTI has 120, 000 training images and 1, 000 testing images. In order to demonstrate the effectiveness of our continual learning framework with unbalanced pretraining and continual learning datasets, we split three publicly available datasets:\n• CoMoFoD: There are 200 base forged images and 24 other categories which are made by applying postprocessing/attacks to the base forged images to hide forgery clues. 
We randomly divide the base forged images into two groups: CoMoFoD-subset1 with 100 base forged images and 24 other categories (total 2, 500 images) for training, CoMoFoD-subset2 with other 100 base forged images and 24 categories for testing. • CASIA: There are 1, 313 copy-move forged images in total. We randomly divide it into two groups: 1, 000 images for training (CASIA-subset1), and 313 images for testing (CASIA-subset2). As for the evaluation metrics, we compute the pixel-level F1-score of each image, and compute their average F1-score of all evaluated images in the testing dataset. Since there is no ground-truth mask for original images in COVERAGE-subset3, we compute the image-level FAR. I, we adopt three different style backbones, i.e., ResNet50 (CNNstyle), CycleMLP-B3 (MLP-style), and MiT-B3 (Transformerstyle). As for PHD, there are three variants, i.e., \"MR\" which only has the mask reconstruction block with concatenated feature maps after self-correlation computation as the input, \"HFI+MR\" which adds the hierarchical feature integration module, and the final \"PHD\" which further adds the Multi-Scale Cycle FC (MSCFC) block. It can be clearly seen from TABLE I that MiT-B3 can achieve better performance than CycleMLP-B3, and CycleMLP-B3 can achieve higher scores than ResNet50. Besides, each component in PHD is helpful to improve the performance. Considering that \"PHD\" is fit for different backbones, which is why it is called \"Pluggable\", we finally select \"PHD\" for the mask prediction. Besides, in Fig. 4, F1-scores along each epoch of the three variants on the BESTI testing set and CoMoFoD subset2 are provided. Ten-epoch training is conducted on the combined datasets of USCISI and BESTI. \"MiT-B3+PHD\" achieves the highest score on BESTI while its scores are lower than \"CycleMLP-B3+PHD\" on CoMoFoD. It indicates that \"MiT-B3+PHD\" has better learning ability, and \"CycleMLP-B3+PHD\" has better generalization ability. Among the three compared variants, \"MiT-B3+PHD\" has more parameters with fewer FLOPs." }, { "figure_ref": [ "fig_4" ], "heading": "B. Ablation Study", "publication_ref": [], "table_ref": [ "tab_4", "tab_3", "tab_4" ], "text": "2) Continual Learning Analyses: PCSD continual learning framework is analysed from two aspects: PCSD continual learning on the synthetic datasets with three different backbones in TABLE II, and the strip/cube pooling evaluation in TABLE III.\nIn TABLE II, \"Train dataset\" denotes the dataset for training or pretraining, \"CL dataset\" denotes the dataset for continual learning, and \"Test dataset\" is the dataset for testing. For some variants, there is no \"CL dataset\", and we use \"-\" to denote null. From TABLE II, we can see that there are clear score drops when the training dataset and the testing dataset are different, e.g., ResNet50 with a drop of 0.384 ((0.698 -0.276 + 0.863 -0.518)/2), CycleMLP-B3 with 0.458, MiT-B3 with 0.475. In fact, the other state-of-theart deep learning based copy-move forgery detection methods (e.g., CMSDNet, SelfDM-SA+PS+CRF) also face this poor generalization ability problem. With the help of PCSD continual learning, the three variants with different backbones can achieve comparable performance on both USCISI and BESTI. The MiT-B3 backbone can achieve the best performance, and all its F1-scores are higher than 0.8 after PCSD continual learning. 
Especially, the score decreases on former datasets are less than 0.05, and score increases on continual learning datasets are more than 0.30.\nDifficult tradeoffs are made according to the experiments of TABLE I, TABLE II and Fig. 4. \"MiT-B3+PHD\" can achieve excellent performance with different training datasets, and its continual learning performance is even more excellent with less than 0.05 decrease and larger than 0.30 increase. Although \"CycleMLP-B3+PHD\" can achieve good performance on Co-MoFoD with F1-score larger than 0.52, its learning ability on training datasets and its continual learning performance are worse than \"MiT-B3+PHD\". Thus, we select \"MiT-B3+PHD\" as our final solution. In our view, powerful learning ability and stable continual learning are more important. When facing new tasks with sufficient training datasets and appropriate continual learning datasets, \"MiT-B3+PHD\" shows more promising performance. In the following, \"MiT-B3+PHD\" is written as \"CMFDFormer\".\nIn TABLE III, we demonstrate the effectiveness of both strip pooling and cube pooling. \"USCISI→BESTI\" denotes that USCISI is used for pretraining and BESTI is used for continual learning. \"BESTI→USCISI\" denotes that BESTI is used for pretraining and USCISI is used for continual learning. We compute the average F1-scores on USCISI and BESTI after continual learning for comparison. The motivation of combining cube pooling and strip pooling is that the duplicated regions may distribute discretely or have long-range banded structures in copy-move forgeries. Multi-scale square pooling windows in cube pooling are critical to capturing multi-scale local features. When processing discretely distributed regions or long-range banded regions, large square pooling windows would inevitably incorporate contaminating information from irrelevant regions. Strip pooling considers a long but narrow kernel, and can capture long-range information with less contaminating information. In fact, both strip pooling and cube pooling over our CMFDFormer's continual learning framework can achieve comparable performance, while PCSD has better performance." }, { "figure_ref": [ "fig_5", "fig_6", "fig_7" ], "heading": "C. Comparison with Other Methods", "publication_ref": [ "b28", "b25", "b2", "b3", "b5", "b6" ], "table_ref": [], "text": "In this section, CMFDFormer is compared with three conventional copy-move forgery detection methods (LiJ [29], Cozzolino [26], LiY [3]) and three deep learning based copymove forgery detection methods (BusterNet [4], CMSDNet [6], SelfDM-SA+PS+CRF [7]) on five datasets, i.e., USCISI, BESTI, CoMoFoD, CASIA, COVERAGE. All the scores are generated based on the released codes of the original papers. The score increases on finetuned datasets are even smaller than the score decreases on former datasets. Furthermore, it is even difficult to increase the score on COVER-AGE by finetuning. To simultaneously compare with CMS-DNet and SelfDM-SA+PS+CRF, both USCISI and BESTI are adopted as the pretraining datasets, and the subsets of CoMoFoD/CASIA/COVERAGE are respectively adopted for continual learning. After PCSD continual learning, the scores on the testing subsets can be dramatically improved, while the score decreases on USCISI and BESTI are acceptable. This experiment demonstrates that when we only have a small number of images for continual learning, our PCSD continual learning framework is still helpful. 
In the following, the models after continual learning are denoted as \"CMFDFormer++\".\nIn TABLE VI, the scores on the CoMoFoD-subset2, CASIA-subset2, and COVERAGE subset2/subset3 are provided. On CASIA-subset2, we find that deep learning based methods can achieve better performance than conventional methods. While on CoMoFoD-subset2 and COVERAGE-subset2, deep learning based methods have no obvious advantage than conventional methods. Especially, deep learning based methods have high false alarm rates on COVERAGE-subset3. CMFDFormer++ can achieve higher F1-scores on all 5 https://github.com/yaqiliu-cs/SelfDM-TIP testing datasets, while it also has a high false alarm rate of 0.64 on COVERAGE-subset3. COVERAGE is designed to evaluate the ability of distinguishing copy-move forged regions and similar-but-genuine regions, and duplicated regions suffer copy-move forgeries without complicated transforms. Conventional methods can handle these cases well, while the robustness of deep learning based methods against different transforms cause that they are difficult to distinguish copymove forged regions and similar-but-genuine regions. Fig. 5 further provides the F1-scores on the CoMoFoD-subset2 images under attacks, CMFDFormer++ shows good robustness against different attacks.\nVisual comparisons on CoMoFoD-subset2 are provided in Fig. 6, six challenging examples are listed. The first three columns are large-area duplicated regions, and the last three columns are small duplicated objects. In the first three columns, we find conventional methods even can achieve better performance, especially Cozzolino can generate almost perfect results. The results of compared deep learning based methods are unsatisfied, while CMFDFormer++ can achieve good performance. In the last three columns, it is difficult for conventional methods to detect copy-move forged regions, and the detected regions of compared deep learning based methods are not accurate enough. While CMFDFormer++ can generate more accurate results.\nIn Fig. 7 conventional methods can not detect any suspected regions, while deep learning based methods can detect duplicated regions. CMSDNet detects many false-alarm regions. In the 2nd column, all methods can detect duplicated regions, while CMFDFormer++ is more accurate. In the 3rd and 4th columns, there are many disturbance terms in the backgrounds, there are even multiple duplicated pairs in the 4th column. We find that only CMFDFormer++ can detect meaningful regions. In the 5th column, we find that the compared deep learning based methods are difficult to detect intact and accurate regions, while the results of conventional methods and CMFD-Former++ have high quality. In the last column, there are many black tags which disturb the detection, and Cozzolino, LiY and CMFDFormer++ are more accurate." }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a Transformer-style copy-move forgery detection network named as CMFDFormer, and pro- In our work, we make a preliminary attempt in the field of continual learning for copy-move forgery detection. A possible solution is first put forward in the field of copy-move forgery detection: a powerful copy-move forgery detection network which can handle numerous cases may be difficult to get at once, while it can be gradually raised by continual learning when facing new cases. 
In the future, two main issues still need to be addressed: robust backbones should be thoroughly investigated to balance generalization ability and learning ability, and customized continual learning models for copy-move forgery detection should be thoroughly studied to achieve smaller performance drops on former data and better performance on new data." } ]
Copy-move forgery detection aims at detecting duplicated regions in a suspected forged image, and deep learning based copy-move forgery detection methods are in the ascendant. These deep learning based methods heavily rely on synthetic training data, and the performance will degrade when facing new tasks. In this paper, we propose a Transformer-style copy-move forgery detection network named as CMFDFormer, and provide a novel PCSD (Pooled Cube and Strip Distillation) continual learning framework to help CMFDFormer handle new tasks. CMFDFormer consists of a MiT (Mix Transformer) backbone network and a PHD (Pluggable Hybrid Decoder) mask prediction network. The MiT backbone network is a Transformer-style network which is adopted on the basis of comprehensive analyses with CNN-style and MLP-style backbones. The PHD network is constructed based on self-correlation computation, hierarchical feature integration, a multi-scale cycle fully-connected block and a mask reconstruction block. The PHD network is applicable to feature extractors of different styles for hierarchical multi-scale information extraction, achieving comparable performance. Last but not least, we propose a PCSD continual learning framework to improve the forgery detectability and avoid catastrophic forgetting when handling new tasks. Our continual learning framework restricts intermediate features from the PHD network, and takes advantage of both cube pooling and strip pooling. Extensive experiments on publicly available datasets demonstrate the good performance of CMFDFormer and the effectiveness of the PCSD continual learning framework.
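The self-correlation computation mentioned in the abstract above can be pictured with the following minimal sketch; it assumes PyTorch, and the top-T value, the handling of the trivial self-match, and the function name are illustrative choices rather than the exact PHD design.

```python
# Sketch of self-correlation: backbone features are L2-normalized per spatial
# position, all pairwise cosine similarities are computed, and only the top-T
# scores per position are kept as a matching map.
import torch
import torch.nn.functional as F

def self_correlation(feat, top_t=16):
    """feat: (B, C, H, W) backbone feature map -> (B, top_t, H, W) score map."""
    b, c, h, w = feat.shape
    flat = feat.flatten(start_dim=2)                   # (B, C, H*W)
    flat = F.normalize(flat, p=2, dim=1)               # unit-length feature vectors
    corr = torch.einsum("bcm,bcn->bmn", flat, flat)    # cosine similarity (B, HW, HW)
    # Sort similarities per position and keep the top-T (the trivial self-match
    # of 1.0 is usually dropped in practice; kept here for brevity).
    top = torch.sort(corr, dim=2, descending=True).values[:, :, :top_t]
    top = F.normalize(torch.relu(top), p=2, dim=2)     # zero out negatives, renormalize
    return top.permute(0, 2, 1).reshape(b, top_t, h, w)

scores = self_correlation(torch.randn(1, 64, 32, 32))
print(scores.shape)    # torch.Size([1, 16, 32, 32])
```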
CMFDFormer: Transformer-based Copy-Move Forgery Detection with Continual Learning
[ { "figure_caption": "Fig. 1 .1Fig. 1. Overview of CMFDFormer and PCSD continual learning.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Channel FC, Cycle FC (S H × S W of 3 × 1 with the dilation rate as 1) and dilated Cycle FC (S H × S W of 3 × 1 with the dilation rate as 2).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "3 are further processed by 3 × 3 convolution operations respectively and resized to the same size as C ′′ 1 . C ′ 4 is also resized to the same size as C ′′ 1 . All the resized feature maps are concatenated to a tensor C which contains rich hierarchical information.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Cube pooling and strip pooling in PCSD.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. F1-score and model efficiency of ResNet50+PHD, CycleMLP-B3+PHD, MiT-B3+PHD on BESTI testing images and CoMoFoD subset2.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. F1-scores (y-axis) on CoMoFoD subset2 under attacks (x-axis).", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. Visual comparison on CoMoFoD-subset2.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. Visual comparison on CASIA-subset2 and COVERAGE-subset2.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "AND PHD STEP-BY-STEP F1-SCORE ANALYSES ON BESTI.", "figure_data": "DecoderEncoderResNet50 CycleMLP-B3 MiT-B3MR0.6460.8950.939MR+HFI0.8070.8980.945PHD (MR+HFI+MSCFC)0.8630.9100.951ResNet50+PHD CycleMLP-B3+PHD MiT-B3+PHDFLOPs143.43G56.77G50.78GParams49.23M58.43M64.63M", "figure_id": "tab_2", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "CONTINUAL LEARNING ANALYSES ON THE SYNTHETIC DATASETS.", "figure_data": "VariantTrain dataset CL dataset Test datasetF1-scoreUSCISI-USCISI0.698USCISI-BESTI0.518ResNet50BESTI-USCISI0.276+BESTI-BESTI0.863PHDUSCISIBESTIUSCISI0.630 (-0.068)USCISIBESTIBESTI0.692 (+0.174)BESTIUSCISIUSCISI0.576 (+0.300)BESTIUSCISIBESTI0.767 (-0.096)USCISI-USCISI0.885USCISI-BESTI0.541CycleMLP-B3BESTI-USCISI0.338+BESTI-BESTI0.910PHDUSCISIBESTIUSCISI0.678 (-0.207)USCISIBESTIBESTI0.734 (+0.193)BESTIUSCISIUSCISI0.648 (+0.330)BESTIUSCISIBESTI0.852 (-0.058)USCISI-USCISI0.944USCISI-BESTI0.526MiT-B3BESTI-USCISI0.420+BESTI-BESTI0.951PHDUSCISIBESTIUSCISI0.900 (-0.044)USCISIBESTIBESTI0.844 (+0.318)BESTIUSCISIUSCISI0.832 (+0.412)BESTIUSCISIBESTI0.924 (-0.027)", "figure_id": "tab_3", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "F1-SCORE OF CMFDFORMER AFTER CONTINUAL LEARNING ON USCISI AND BESTI DATASETS.", "figure_data": "Distillation methodUSCISI→BESTI BESTI→USCISI Mean F1-score Mean F1-scoreStrip pooling0.8680.875Cube pooling0.8700.874PCSD0.8720.878", "figure_id": "tab_4", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "BESTI is generated from MS COCO), CMSDNet which is trained on USCISI has clear score decreases on BESTI, and the score of SelfDM-SA+PS+CRF trained on BESTI also decreases on USCISI. 
CMFDFormer also faces this problem as we discussed in Section IV-B2. While CMFDFormer trained on a single training dataset can achieve the highest score on the corresponding testing set.Both USCISI and BESTI have sufficient training images, while the training images are difficult to obtain in other tasks under the practical circumstances. The training subsets of CoMoFoD/CASIA/COVERAGE are much smaller than the training sets of USCISI and BESTI. With unbalanced pretraining datasets and continual learning datasets, the effectiveness of PCSD continual learning still needs to be verified. Besides, the proposed framework is also compared with the state-of-theart deep learning based copy-move forgery detection methods with publicly available training codes, i.e., CMSDNet 4 which", "figure_data": "TABLE IVCOMPARISONS ON USCISI AND BESTI.MethodUSCISI F1-score BESTI F1-scoreLiJ0.3990.360Cozzolino0.1690.273LiY0.2100.395BusterNet0.4640.421CMSDNet0.6920.429SelfDM-SA+PS+CRF0.3460.831CMFDFormer0.9440.951In TABLE IV, the F1-scores on USCISI and BESTI ofcompared methods are listed. Both USCISI and BESTI aresynthetic datasets in which duplicated regions suffer fromdifferent transformations, e.g., rotation, scale, deformationchanges. It is difficult for conventional methods to handlethese changes, and these methods have lower scores than deeplearning based methods. While deep learning based copy-moveforgery detection methods are confronted with the robustnessproblem against the domain shift. Even both USCISI andBESTI have synthetic images generated from MS COCO [63](USCISI is generated from MS COCO and MIT SUN2012[64],", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "-SCORE ANALYSES OF DEEP LEARNING BASED METHOD FINETUNING OR CONTINUAL LEARNING.", "figure_data": "MethodTrain dataset Finetune/CL datasetUSCISIBESTICoMoFoD-subset2 CASIA-subset2 COVERAGE-subset2-0.6920.4830.4750.772CMSDNetUSCISICoMoFoD-subset1 CASIA-subset10.193 (-0.499) 0.194 (-0.498)0.677 (+0.194)0.555 (+0.080)COVERAGE-subset1 0.331 (-0.361)0.743 (-0.029)-0.8310.5060.4750.803SelfDM-SA +PS+CRFBESTICoMoFoD-subset1 CASIA-subset10.495 (-0.336) 0.683 (-0.148)0.678 (+0.172)0.606 (+0.131)COVERAGE-subset10.731 (-0.100)0.698 (-0.105)-0.9410.9440.4190.3160.526CMFDFormerUSCISI +BESTICoMoFoD-subset1 CASIA-subset10.803 (-0.138) 0.856 (-0.088) 0.823 (-0.118) 0.845 (-0.099)0.684 (+0.265)0.578 (+0.262)COVERAGE-subset1 0.694 (-0.247) 0.808 (-0.136)0.820 (+0.294)", "figure_id": "tab_6", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "ON COMOFOD SUBSET2, CASIA SUBSET2, COVERAGE SUBSET2 AND SUBSET3.", "figure_data": "MethodCoMoFoD-subset2 F1-score CASIA-subset2 F1-score COVERAGE subset2 F1-score COVERAGE-subset3 FARLiJ0.4370.0520.7120.760Cozzolino0.4240.2030.6290.360LiY0.5090.2790.6800.520BusterNet0.5250.4200.7210.880CMSDNet0.4830.4750.7721.000SelfDM-SA+PS+CRF0.5060.4750.8030.960CMFDFormer++0.6840.5780.8200.640is trained on USCISI, and SelfDM-SA+PS+CRF 5 which istrained on BESTI. For fair comparison, we adopt the subsetsof CoMoFoD/CASIA/COVERAGE to finetune the releasedpretrained CMSDNet and SelfDM-SA+PS+CRF models. Asshown in TABLE V, simply finetuning can cause catastrophicforgetting.", "figure_id": "tab_7", "figure_label": "VI", "figure_type": "table" } ]
Yaqi Liu; Chao Xia; Song Xiao; Qingxiao Guan; Wenqian Dong; Yifan Zhang; Nenghai Yu
[ { "authors": "Y Liu; X Zhu; X Zhao; Y Cao", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b0", "title": "Adversarial learning for constrained image splicing detection and localization based on atrous convolution", "year": "2019" }, { "authors": "Y Liu; B Lv; X Jin; X Chen; X Zhang", "journal": "IEEE Signal Processing Letters", "ref_id": "b1", "title": "Tbformer: Two-branch transformer for image forgery localization", "year": "2023" }, { "authors": "Y Li; J Zhou", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b2", "title": "Fast and effective image copy-move forgery detection via hierarchical feature point matching", "year": "2019" }, { "authors": "Y Wu; W Abd-Almageed; P Natarajan", "journal": "", "ref_id": "b3", "title": "Busternet: Detecting copymove image forgery with source/target localization", "year": "2018" }, { "authors": "J.-L Zhong; C.-M Pun", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b4", "title": "An end-to-end dense-inceptionnet for image copy-move forgery detection", "year": "2020" }, { "authors": "B Chen; W Tan; G Coatrieux; Y Zheng; Y Q Shi", "journal": "IEEE Transactions on Multimedia", "ref_id": "b5", "title": "A serial image copy-move forgery localization scheme with source/target distinguishment", "year": "2020" }, { "authors": "Y Liu; C Xia; X Zhu; S Xu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b6", "title": "Two-stage copy-move forgery detection with self deep matching and proposal superglue", "year": "2022" }, { "authors": "Z Liu; Y Lin; Y Cao; H Hu; Y Wei; Z Zhang; S Lin; B Guo", "journal": "", "ref_id": "b7", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "W Wang; E Xie; X Li; D.-P Fan; K Song; D Liang; T Lu; P Luo; L Shao", "journal": "", "ref_id": "b8", "title": "Pyramid vision transformer: A versatile backbone for dense prediction without convolutions", "year": "2021" }, { "authors": "", "journal": "Computational Visual Media", "ref_id": "b9", "title": "Pvt v2: Improved baselines with pyramid vision transformer", "year": "2022" }, { "authors": "I O Tolstikhin; N Houlsby; A Kolesnikov; L Beyer; X Zhai; T Unterthiner; J Yung; A Steiner; D Keysers; J Uszkoreit", "journal": "Advances in neural information processing systems", "ref_id": "b10", "title": "Mlp-mixer: An all-mlp architecture for vision", "year": "2021" }, { "authors": "H Liu; Z Dai; D So; Q V Le", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b11", "title": "Pay attention to mlps", "year": "2021" }, { "authors": "H Touvron; P Bojanowski; M Caron; M Cord; A El-Nouby; E Grave; G Izacard; A Joulin; G Synnaeve; J Verbeek", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b12", "title": "Resmlp: Feedforward networks for image classification with data-efficient training", "year": "2022" }, { "authors": "J Kirkpatrick; R Pascanu; N Rabinowitz; J Veness; G Desjardins; A A Rusu; K Milan; J Quan; T Ramalho; A Grabska-Barwinska", "journal": "Proceedings of the national academy of sciences", "ref_id": "b13", "title": "Overcoming catastrophic forgetting in neural networks", "year": "2017" }, { "authors": "M Masana; X Liu; B Twardowski; M Menta; A D Bagdanov; J Van De Weijer", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b14", "title": "Class-incremental learning: survey and performance evaluation on image 
classification", "year": "2023" }, { "authors": "A Douillard; Y Chen; A Dapogny; M Cord", "journal": "", "ref_id": "b15", "title": "Plop: Learning without forgetting for continual semantic segmentation", "year": "2021" }, { "authors": "C.-B Zhang; J.-W Xiao; X Liu; Y.-C Chen; M.-M Cheng", "journal": "", "ref_id": "b16", "title": "Representation compensation networks for continual semantic segmentation", "year": "2022" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b17", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "S Chen; E Xie; G Chongjian; R Chen; D Liang; P Luo", "journal": "", "ref_id": "b18", "title": "Cyclemlp: A mlp-like architecture for dense prediction", "year": "2022" }, { "authors": "E Xie; W Wang; Z Yu; A Anandkumar; J M Alvarez; P Luo", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b19", "title": "Segformer: Simple and efficient design for semantic segmentation with transformers", "year": "2021" }, { "authors": "B Peng; W Wang; J Dong; T Tan", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b20", "title": "Optimized 3d lighting environment estimation for image forgery detection", "year": "2017" }, { "authors": "Y Liu; Q Guan; X Zhao; Y Cao", "journal": "", "ref_id": "b21", "title": "Image forgery localization based on multi-scale convolutional neural networks", "year": "2018" }, { "authors": "D Cozzolino; L Verdoliva", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b22", "title": "Noiseprint: A cnn-based camera model fingerprint", "year": "2019" }, { "authors": "J Wang; Z Wu; J Chen; X Han; A Shrivastava; S.-N Lim; Y.-G Jiang", "journal": "", "ref_id": "b23", "title": "Objectformer for image manipulation detection and localization", "year": "2022" }, { "authors": "S.-J Ryu; M Kirchner; M.-J Lee; H.-K Lee", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b24", "title": "Rotation invariant localization of duplicated image regions based on zernike moments", "year": "2013" }, { "authors": "D Cozzolino; G Poggi; L Verdoliva", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b25", "title": "Efficient dense-field copymove forgery detection", "year": "2015" }, { "authors": "X Bi; C.-M Pun", "journal": "Information Sciences", "ref_id": "b26", "title": "Fast reflective offset-guided searching method for copy-move forgery detection", "year": "2017" }, { "authors": "", "journal": "Pattern Recognition", "ref_id": "b27", "title": "Fast copy-move forgery detection using local bidirectional coherency error refinement", "year": "2018" }, { "authors": "J Li; X Li; B Yang; X Sun", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b28", "title": "Segmentation-based image copymove forgery detection scheme", "year": "2015" }, { "authors": "C.-M Pun; X.-C Yuan; X.-L Bi", "journal": "ieee transactions on information forensics and security", "ref_id": "b29", "title": "Image forgery detection using adaptive oversegmentation and feature point matching", "year": "2015" }, { "authors": "E Ardizzone; A Bruno; G Mazzola", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b30", "title": "Copy-move forgery detection by matching triangles of keypoints", "year": "2015" }, { "authors": "E Silva; T Carvalho; A Ferreira; A Rocha", "journal": "Journal of Visual Communication and Image Representation", "ref_id": "b31", "title": 
"Going deeper into copy-move forgery detection: Exploring image telltales via multi-scale analysis and voting processes", "year": "2015" }, { "authors": "C Wang; Z Huang; S Qi; Y Yu; G Shen; Y Zhang", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b32", "title": "Shrinking the semantic gap: spatial pooling of local moment invariants for copymove forgery detection", "year": "2023" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b33", "title": "Attention is all you need", "year": "2017" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov; D Weissenborn; X Zhai; T Unterthiner; M Dehghani; M Minderer; G Heigold; S Gelly", "journal": "", "ref_id": "b34", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "" }, { "authors": "S Zheng; J Lu; H Zhao; X Zhu; Z Luo; Y Wang; Y Fu; J Feng; T Xiang; P H Torr", "journal": "", "ref_id": "b35", "title": "Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers", "year": "2021" }, { "authors": "N Carion; F Massa; G Synnaeve; N Usunier; A Kirillov; S Zagoruyko", "journal": "Springer", "ref_id": "b36", "title": "End-to-end object detection with transformers", "year": "2020" }, { "authors": "R Strudel; R Garcia; I Laptev; C Schmid", "journal": "", "ref_id": "b37", "title": "Segmenter: Transformer for semantic segmentation", "year": "2021" }, { "authors": "T Meinhardt; A Kirillov; L Leal-Taixe; C Feichtenhofer", "journal": "", "ref_id": "b38", "title": "Trackformer: Multi-object tracking with transformers", "year": "2022" }, { "authors": "H Chen; Y Wang; T Guo; C Xu; Y Deng; Z Liu; S Ma; C Xu; C Xu; W Gao", "journal": "", "ref_id": "b39", "title": "Pre-trained image processing transformer", "year": "2021" }, { "authors": "S He; H Luo; P Wang; F Wang; H Li; W Jiang", "journal": "", "ref_id": "b40", "title": "Transreid: Transformer-based object re-identification", "year": "2021" }, { "authors": "T Yu; X Li; Y Cai; M Sun; P Li", "journal": "", "ref_id": "b41", "title": "S2-mlp: Spatial-shift mlp architecture for vision", "year": "2022" }, { "authors": "Q Hou; Z Jiang; L Yuan; M.-M Cheng; S Yan; J Feng", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b42", "title": "Vision permutator: A permutable mlp-like architecture for visual recognition", "year": "2022" }, { "authors": "D Lian; Z Yu; X Sun; S Gao", "journal": "", "ref_id": "b43", "title": "As-mlp: An axial shifted mlp architecture for vision", "year": "" }, { "authors": "M De Lange; R Aljundi; M Masana; S Parisot; X Jia; A Leonardis; G Slabaugh; T Tuytelaars", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b44", "title": "A continual learning survey: Defying forgetting in classification tasks", "year": "2021" }, { "authors": "S.-A Rebuffi; A Kolesnikov; G Sperl; C H Lampert", "journal": "", "ref_id": "b45", "title": "icarl: Incremental classifier and representation learning", "year": "2017" }, { "authors": "H Shin; J K Lee; J Kim; J Kim", "journal": "Advances in neural information processing systems", "ref_id": "b46", "title": "Continual learning with deep generative replay", "year": "2017" }, { "authors": "A Chaudhry; M Ranzato; M Rohrbach; M Elhoseiny", "journal": "", "ref_id": "b47", "title": "Efficient lifelong learning with a-gem", "year": "2018" }, { "authors": "Z Li; D Hoiem", 
"journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b48", "title": "Learning without forgetting", "year": "2017" }, { "authors": "A Mallya; S Lazebnik", "journal": "", "ref_id": "b49", "title": "Packnet: Adding multiple tasks to a single network by iterative pruning", "year": "2018" }, { "authors": "A A Rusu; N C Rabinowitz; G Desjardins; H Soyer; J Kirkpatrick; K Kavukcuoglu; R Pascanu; R Hadsell", "journal": "", "ref_id": "b50", "title": "Progressive neural networks", "year": "2016" }, { "authors": "T Feng; M Wang; H Yuan", "journal": "", "ref_id": "b51", "title": "Overcoming catastrophic forgetting in incremental object detection via elastic response distillation", "year": "2022" }, { "authors": "B Yang; X Deng; H Shi; C Li; G Zhang; H Xu; S Zhao; L Lin; X Liang", "journal": "", "ref_id": "b52", "title": "Continual object detection via prototypical task correlation guided gating mechanism", "year": "2022" }, { "authors": "L Yin; J M Perez-Rua; K J Liang", "journal": "", "ref_id": "b53", "title": "Sylph: A hypernetwork framework for incremental few-shot object detection", "year": "2022" }, { "authors": "F Cermelli; M Mancini; S R Bulo; E Ricci; B Caputo", "journal": "", "ref_id": "b54", "title": "Modeling the background for incremental learning in semantic segmentation", "year": "2020" }, { "authors": "C Shang; H Li; F Meng; Q Wu; H Qiu; L Wang", "journal": "", "ref_id": "b55", "title": "Incrementer: Transformer for class-incremental semantic segmentation with knowledge distillation focusing on old class", "year": "2023" }, { "authors": "T Kalb; J Beyerer", "journal": "", "ref_id": "b56", "title": "Principles of forgetting in domain-incremental semantic segmentation in adverse weather conditions", "year": "2023" }, { "authors": "K Nguyen; S Todorovic", "journal": "", "ref_id": "b57", "title": "ifs-rcnn: An incremental few-shot instance segmenter", "year": "2022" }, { "authors": "T.-Y Lin; P Dollár; R Girshick; K He; B Hariharan; S Belongie", "journal": "", "ref_id": "b58", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "H Zhao; J Shi; X Qi; X Wang; J Jia", "journal": "", "ref_id": "b59", "title": "Pyramid scene parsing network", "year": "2017" }, { "authors": "Q Hou; L Zhang; M.-M Cheng; J Feng", "journal": "", "ref_id": "b60", "title": "Strip pooling: Rethinking spatial pooling for scene parsing", "year": "2020" }, { "authors": "B Wen; Y Zhu; R Subramanian; T.-T Ng; X Shen; S Winkler", "journal": "IEEE", "ref_id": "b61", "title": "Coverage-a novel database for copy-move forgery detection", "year": "2016" }, { "authors": "T.-Y Lin; M Maire; S Belongie; J Hays; P Perona; D Ramanan; P Dollár; C L Zitnick", "journal": "Springer", "ref_id": "b62", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "J Xiao; J Hays; K A Ehinger; A Oliva; A Torralba", "journal": "IEEE", "ref_id": "b63", "title": "Sun database: Large-scale scene recognition from abbey to zoo", "year": "2010" } ]
[ { "formula_coordinates": [ 4, 74.36, 230.04, 225.67, 12.69 ], "formula_id": "formula_0", "formula_text": "f ems (Q j , K j , V j ) = ∪(head 1 , • • • , head Nj )W O j (1)" }, { "formula_coordinates": [ 4, 54.77, 249.46, 241.38, 12.69 ], "formula_id": "formula_1", "formula_text": "head nj = Atten(Q j W Q nj , f sr (K j )W K nj , f sr (V j )W V nj )(2" }, { "formula_coordinates": [ 4, 48.96, 304.5, 251.06, 25.08 ], "formula_id": "formula_2", "formula_text": "W O j ∈ R Cj ×Cj , W Q nj ∈ R Cj ×dj , W K nj ∈ R Cj ×dj and W V nj ∈ R Cj ×dj" }, { "formula_coordinates": [ 4, 96.12, 384.35, 203.9, 12.69 ], "formula_id": "formula_3", "formula_text": "f sr (x) = Norm(Reshape(x, R j )W S j )(3)" }, { "formula_coordinates": [ 4, 48.96, 478.29, 174.65, 17.12 ], "formula_id": "formula_4", "formula_text": "k nj ∈ R h j ×w j R j ×dj , value v nj ∈ R h j ×w j R j" }, { "formula_coordinates": [ 4, 78.46, 513.4, 221.56, 27.3 ], "formula_id": "formula_5", "formula_text": "Atten(q nj , k nj , v nj ) = Softmax( q nj k T nj d j )v nj(4)" }, { "formula_coordinates": [ 4, 62.03, 614.82, 212.8, 9.68 ], "formula_id": "formula_6", "formula_text": "x out = MLP(GELU(Conv 3×3 (MLP(x in )))) + x in" }, { "formula_coordinates": [ 4, 369.36, 316.23, 193.68, 24.25 ], "formula_id": "formula_7", "formula_text": "Fi(m) = f L2 (F i(m) ) = F i(m) ||F i(m) || 2(6)" }, { "formula_coordinates": [ 4, 385.01, 376.35, 178.02, 12.5 ], "formula_id": "formula_8", "formula_text": "C i(m,n) = ( Fi(m) ) T Fi(n)(7)" }, { "formula_coordinates": [ 4, 368.79, 407.96, 133.49, 11.53 ], "formula_id": "formula_9", "formula_text": "C i = {C i(m,n) } ∈ R (hi×wi)×(" }, { "formula_coordinates": [ 4, 359.41, 497.4, 203.62, 12.5 ], "formula_id": "formula_10", "formula_text": "Ci(m,n,1:T ) = Top T(Sort(C i(m,n,:) ))(8)" }, { "formula_coordinates": [ 4, 391.38, 583.37, 171.66, 12.2 ], "formula_id": "formula_11", "formula_text": "Ci = f L2 (Max( Ci , 0))(9)" }, { "formula_coordinates": [ 5, 48.96, 296.47, 251.06, 24.16 ], "formula_id": "formula_12", "formula_text": "C ′′ 3 = C ′ 3 +f ×2 (C ′ 4 ), C ′′ 2 = C ′ 2 + f ×2 (C ′′ 3 ), C ′′ 1 = C ′ 1 + C ′′ 2 . 
Then, C ′′ 1 , C ′′ 2 , C ′′" }, { "formula_coordinates": [ 5, 69.43, 529.97, 230.59, 44.72 ], "formula_id": "formula_13", "formula_text": "CycleFC( C) (m,n,:) = Cin c=0 C (m+δm(c),n+δn(c),c) • W mlp (c,:) + b(10)" }, { "formula_coordinates": [ 5, 123.09, 625.31, 176.93, 9.65 ], "formula_id": "formula_14", "formula_text": "δ m (c) = (c modS H ) -1(11)" }, { "formula_coordinates": [ 5, 114.85, 638.81, 185.17, 23.23 ], "formula_id": "formula_15", "formula_text": "δ n (c) = (⌊ c S H ⌋modS W ) -1(12)" }, { "formula_coordinates": [ 5, 325.91, 261.93, 237.12, 30.2 ], "formula_id": "formula_16", "formula_text": "C = Conv 3×3 ( C + f linear ( 9 r=1 β r CycleFC r ( C))) (13)" }, { "formula_coordinates": [ 5, 382.2, 389.21, 180.84, 9.68 ], "formula_id": "formula_17", "formula_text": "M = f convseg (f upscale ( C))(14)" }, { "formula_coordinates": [ 5, 319.45, 408.53, 243.59, 26 ], "formula_id": "formula_18", "formula_text": "f upscale ( C) = Conv 1×1 (f ×2 (Conv 1×1 (f ×2 (Conv 1×1 (f ×2 ( C))))))(15)" }, { "formula_coordinates": [ 6, 48.96, 131.61, 198.67, 12.2 ], "formula_id": "formula_19", "formula_text": "{X k |k = 1, • • • , 6} = {M, C, C ′ 1 , C ′ 2 , C ′ 3 , C ′ 4 }" }, { "formula_coordinates": [ 6, 132.5, 269.26, 167.53, 12.09 ], "formula_id": "formula_20", "formula_text": "XT,k,p = P p ⊙ X T,k(16)" }, { "formula_coordinates": [ 6, 86.9, 397.69, 213.12, 30.55 ], "formula_id": "formula_22", "formula_text": "L cpkd = 1 K 1 P K k=1 P p=1 || XT,k,p -XS,k,p || 2(18)" }, { "formula_coordinates": [ 6, 125.91, 651.46, 174.11, 11.39 ], "formula_id": "formula_23", "formula_text": "XT,k = ∪({Ψ q (X T,k )})(19)" }, { "formula_coordinates": [ 6, 126.42, 670.14, 173.6, 11.39 ], "formula_id": "formula_24", "formula_text": "XS,k = ∪({Ψ q (X S,k )})(20)" }, { "formula_coordinates": [ 6, 48.96, 727.12, 251.06, 20.91 ], "formula_id": "formula_25", "formula_text": "Q = 2 for k = 2, • • • , 6; Q = 4 for k = 1, i.e., q = 1, 2, 3, 4 for M." }, { "formula_coordinates": [ 6, 335.43, 74.24, 227.61, 11.03 ], "formula_id": "formula_26", "formula_text": "Ψ q (X k ) = ⊔(Φ(X k,0,0 ), • • • , Φ(X k,q-1,q-1 ))(21)" }, { "formula_coordinates": [ 6, 311.98, 106.75, 251.06, 49.95 ], "formula_id": "formula_27", "formula_text": "H k q × W k q × C k , ∀m = 0, • • • , q -1, ∀n = 0, • • • , q -1. ⊔(• • • ) denotes concatenation over the channel axis, e.g., Ψ q (X k ) ∈ R (H k +W k )×q×C k , Φ(X k,m,n ) ∈ R ( H k q + W k q )×C k ." }, { "formula_coordinates": [ 6, 333.66, 175.97, 225.23, 11.72 ], "formula_id": "formula_28", "formula_text": "Φ(X k,m,n ) = ⊔(Q w ⊙ X k,m,n , Q h ⊙ X k,m,n ) (22" }, { "formula_coordinates": [ 6, 558.89, 178.36, 4.15, 8.64 ], "formula_id": "formula_29", "formula_text": ")" }, { "formula_coordinates": [ 6, 369.4, 237.32, 193.64, 30.55 ], "formula_id": "formula_30", "formula_text": "L spkd = 1 K K k=1 || XT,k -XS,k || 2(23)" }, { "formula_coordinates": [ 6, 377.43, 305.29, 181.46, 9.65 ], "formula_id": "formula_31", "formula_text": "L = L ce + λ(L cpkd + L spkd ) (24" }, { "formula_coordinates": [ 6, 558.89, 305.61, 4.15, 8.64 ], "formula_id": "formula_32", "formula_text": ")" }, { "formula_coordinates": [ 6, 350.48, 352.61, 212.56, 30.32 ], "formula_id": "formula_33", "formula_text": "L ce = H m=1 W n=1 C M c=1 G (m,n,c) log(M (m,n,c) )(25)" } ]
10.1162/tacl_a_00373
2024-01-08
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b10", "b26", "b29", "b17", "b0", "b34", "b7", "b9", "b33", "b38", "b11", "b36", "b43", "b16", "b20", "b23", "b44", "b2", "b24" ], "table_ref": [], "text": "\"Administrative burden is real, widespread and has serious consequences (Heuer, 2022).\"\nAlthough the Electronic Health Record (EHR) has its benefits, the consequences regarding time and effort are increasingly noticed by medical personnel (Moy et al., 2021;Olivares Bøgeskov & Grimshaw-Aagaard, 2019). In addition to less direct patient care (Lavander et al., 2016), documentation sometimes shifts to after hours: studies of Anderson et al., 2020 and Saag et al., 2019 show that physicians spend hours on documentation at home. Working after hours, a poor work/life balance, stress, and using Health Information Technology such as EHRs are associated with less work-life satisfaction and the risk of professional burnout (Gardner et al., 2018;Hauer et al., 2018;Robertson et al., 2017;Shanafelt et al., 2016).\nThe notation used for documentation of General Practitioner (GP) consultations is the Subjective, Objective, Assessment and Plan (SOAP) notation, which has been widely used for clinical documentation and dates back to 1968 (Heun et al., 1998;Sapkota et al., 2022;Weed, 1968). Based on a consultation, the SOAP note has to be written by the clinician, to update the EHR. To reduce the administrative burden, generative Artificial Intelligence (AI) can be used to summarize transcripts of medical consultations into automated reports, which is also the purpose of the Care2Report program to which this study belongs (Kwint et al., 2023;Maas et al., 2020;Molenaar et al., 2020;Wegstapel et al., 2023). Care2Report (C2R) aims at automated medical reporting based on multimodal recording of a consultation and the generation and uploading of the report in the electronic medical record system (Brinkkemper, 2022). However, for these reports to be useful, the accuracy of the generated report has to be determined. Several metrics exist to compare the accuracy of generated text (Moramarco et al., 2022), but their application in the Dutch medical domain has not been researched.\nTherefore, this paper proposes research towards metrics in the Dutch medical domain, resulting in the following research question:\nRQ What is the preferred metric for measuring the difference between an automatically generated medical report and a general practitioner's report?\nThis study contributes to the field of AI generated medical reports by providing a case-level look and adds to the larger field of Natural Language Generation (NLG). Furthermore, this work has societal relevance by providing the preferred accuracy measure for AI-generated Dutch medical reports. When the accuracy of a medical report can be determined, research towards the generation of the reports can be extended to ensure high accuracy. Namely, since reports play a crucial role in patient care, diagnosis, and treatment decisions, it is vital that generated reports are correct and complete. Reports with high accuracy would prevent the medical staff from spending a lot of time writing the report themselves or correcting the generated reports, which reduces the administrative burden.\nFirst, a literature review will be presented in Section 2. The method and findings will be presented in Section 3 and Section 4 respectively, after which they will be discussed in Section 5.
Conclusions will be drawn and directions for future work will be given in Section 6." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "Different accuracy metrics for NLG exist, which can be compared in various ways." }, { "figure_ref": [], "heading": "Accuracy Metrics", "publication_ref": [ "b30", "b19", "b1", "b3", "b5", "b24", "b3", "b35", "b35", "b3", "b24", "b18", "b40", "b25", "b25", "b27", "b13", "b6", "b39", "b4", "b15", "b45", "b46", "b30", "b19", "b1", "b32", "b21", "b24", "b3" ], "table_ref": [ "tab_0" ], "text": "Over the years, different evaluation metrics for measuring the accuracy of NLG systems have been developed, such as BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and METEOR (Banerjee & Lavie, 2005). All these metrics compare a generated text with a reference text. In recent years, a number of studies have provided an overview of these metrics and divided them into different categories (Celikyilmaz et al., 2020;Fabbri et al., 2020;Moramarco et al., 2022;Sai et al., 2020). Our study adopts most of the metrics and categories of Moramarco et al., 2022, which was inspired by the categories stated by Sai et al., 2020 and Celikyilmaz et al., 2020. Metrics that are not specifically developed for summarization are also included, to ensure that the study does not become too narrow. Moramarco et al., 2022 also introduce a new metric with its own category, the Stanza+Snomed metric, which is not included in this current study since there is no other known work or use of this metric. The remaining three groups of metrics are:\n• Edit distance metrics count how many characters or words are required to convert the output of the system into the reference text. They include Levenshtein (Levenshtein et al., 1966), Word Error Rate (WER) (Su et al., 1992), Match Error Rate (MER) (Morris et al., 2004), and Word Information Lost (WIL) (Morris et al., 2004).\n• Embedding metrics encode units of text with pre-trained models and compute cosine similarity between the resulting embeddings. For this, they use word-level, byte-level, and sentence-level embeddings. The metrics include: ROUGE-WE (Ng & Abrecht, 2015), Skipthoughts (Kiros et al., 2015), VectorExtrema (Forgues et al., 2014), GreedyMatching (Sharma et al., 2017), Universal Sentence Encoder (USE) (Cer et al., 2018), Word Mover's Distance (WMD) (Kusner et al., 2015), BertScore (Zhang et al., 2019), and MoverScore (Zhao et al., 2019).\n• Text overlap metrics rely on string matching, and measure the number of characters, words, or n-grams that match between the generated text and the reference. These metrics include BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), METEOR (Banerjee & Lavie, 2005), and Character n-gram F-score (CHRF) (Popović, 2015). The F-measure, based on precision and recall (Maroengsit et al., 2019), also falls under this category.\nTable 1 provides an overview of all the metrics, along with their category, property, and common use (abbreviations used in the table: MT: Machine Translation, IC: Image Captioning, SR: Speech Recognition, SUM: Summarization, DG: Document or Story Generation / Visual-Story Generation, QG: Question Generation, RG: Dialog Response Generation; EMD = Earth Mover's Distance). This table extends data from Moramarco et al., 2022 and Celikyilmaz et al., 2020. Information that was not available in these studies was derived from the original papers of the metrics."
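As a concrete illustration of two of the metric families described above, the following is a small, self-contained sketch (not taken from the paper): a word-level edit distance yielding WER, and ROUGE-1 computed as unigram precision, recall and F1 with a naive whitespace tokenizer. Production evaluations would rely on the reference implementations of these metrics.

```python
# Illustrative re-implementation sketch of WER (edit distance family) and
# ROUGE-1 (text overlap family); tokenization is a naive whitespace split.
from collections import Counter

def word_error_rate(hypothesis: str, reference: str) -> float:
    hyp, ref = hypothesis.split(), reference.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

def rouge1(hypothesis: str, reference: str) -> dict:
    hyp, ref = Counter(hypothesis.split()), Counter(reference.split())
    overlap = sum((hyp & ref).values())
    recall = overlap / max(sum(ref.values()), 1)
    precision = overlap / max(sum(hyp.values()), 1)
    f1 = 0.0 if overlap == 0 else 2 * precision * recall / (precision + recall)
    return {"recall": recall, "precision": precision, "f1": f1}

# Made-up Dutch snippets purely for demonstration.
print(word_error_rate("patient heeft oorpijn links", "patient heeft al dagen oorpijn links"))
print(rouge1("patient heeft oorpijn links", "patient heeft al dagen oorpijn links"))
```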
}, { "figure_ref": [], "heading": "Comparison of Metrics", "publication_ref": [ "b8", "b5", "b14", "b41", "b5", "b8", "b24" ], "table_ref": [], "text": "In order to perform an evaluation of the accuracy metrics, a reference accuracy score is necessary to compare the calculated scores. There are different ways to determine these reference scores, which could involve a human evaluation. One could simply ask hu-man evaluators to compare the generated text with a reference text and rate the general factual accuracy on a scale of 1 to 5 (Goodrich et al., 2019). However, this is a very broad measure that is heavily influenced by subjectivity. Alternatively, other studies used different dimensions to compare generated texts, such as Adequacy, Coherence, Fluency, Consistency, and Relevance (Fabbri et al., 2020;Kryściński et al., 2019;Turian et al., 2003). Moramarco et al., 2022, use Omissions, Incorrect statements, and Postedit times to evaluate automatically generated medical reports.\nThe ratings of the human evaluators can be compared with the results of the metrics, using correlation measures such as the Spearman, Pearson, or Kendall's τ coefficient (Fabbri et al., 2020;Goodrich et al., 2019;Moramarco et al., 2022)." }, { "figure_ref": [], "heading": "SOEP Reporting for GPs", "publication_ref": [ "b42", "b31", "b37" ], "table_ref": [], "text": "In the Netherlands, the SOEP convention is used by GPs for medical reporting, which is the Dutch alternative to SOAP ( Van der Werf, 1996). Subjective (S) represents the state, medical history and symptoms of the patient. Objective (O) contains measurable data obtained from past records or results of the examination and Evaluation (E) (or Assessment (A)) offers the opportunity to note the assessment of the health problem and diagnosis. Finally, Plan (P) contains the consultant's plan for the patient. Close attention should be paid to the division between the symptoms and signs (subjective descriptions and objective findings) since this is a common pitfall while writing SOEP notes (Podder et al., 2022;Seo et al., 2016)." }, { "figure_ref": [ "fig_0" ], "heading": "RESEARCH METHOD", "publication_ref": [], "table_ref": [], "text": "An overview of the research method of our study is shown in Figure 1, which will be explained in the subsections. The blue outline shows our research focus." }, { "figure_ref": [], "heading": "Materials", "publication_ref": [ "b20", "b12", "b22", "b20" ], "table_ref": [], "text": "The C2R program provides data from seven transcripts of medical consultations between GPs and their patients concerning ear infections, namely Otitis Externa (n = 4) and Otitis Media Acuta (n = 3) (Maas et al., 2020). These transcripts are derived from video recordings, for which both the patients and GPs provided informed consent. The recordings were made as part of a study by Nivel (Netherlands institute for health services research) and Radboudumc to improve GP communication (Houwen et al., 2017;Meijers et al., 2019).\nBased on the transcripts, GPs wrote a SOEP report (referred to as GP report), which is considered the ground truth for this study. These GPs did not perform the consultation but wrote the report solely based on the transcripts. Furthermore, software of the C2R program which runs on GPT 4.0 was used. The temperature was set to 0 to limit the diversity of the generated text. Based on the formulated prompt and transcript, the GPT generates a SOEP report (referred to as AI report) (Maas et al., 2020)." 
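To illustrate the correlation-based comparison of metric scores with human reference ratings described earlier in this section, here is a brief sketch assuming SciPy is available; the metric scores and omission counts are invented purely for demonstration.

```python
# Correlating per-report metric scores with per-report human reference scores
# using Pearson, Spearman and Kendall's tau.
from scipy import stats

metric_scores = [0.62, 0.55, 0.71, 0.48, 0.66, 0.59, 0.52]   # e.g. one metric per report (made up)
missing_counts = [12, 5, 8, 7, 9, 8, 7]                      # human-counted omissions (made up)

print(stats.pearsonr(metric_scores, missing_counts))
print(stats.spearmanr(metric_scores, missing_counts))
print(stats.kendalltau(metric_scores, missing_counts))
```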
}, { "figure_ref": [], "heading": "Pre-study", "publication_ref": [ "b31", "b37", "b0", "b17", "b26", "b29", "b34" ], "table_ref": [], "text": "Upon first inspection of the GP reports, it was noticed that abbreviations such as \"pcm\" meaning \"paracetamol\" are frequently used. To gain more insights into the experience and preferences of medical staff regarding the formulation of SOEP reports, a prestudy was conducted among Dutch medical staff (n = 5; 1 physiotherapist, 1 paediatrician, 1 junior doc-tor / medical director, 1 nurse and 1 nursing student). The participants were asked about their experience with (SOEP) medical reporting, important factors of SOEP, the use of abbreviations, and general feedback on the Care2Report program. All participants indicated to have knowledge of SOEP reporting, and have experience with writing medical reports, using SOEP or similar methods. Distinguishing between Subjective and Objective information was indicated to be a common mistake, which is in line with research (Podder et al., 2022;Seo et al., 2016). In addition, the notation of the Evaluation is important since this is \"the essence of the consult\", but is sometimes not filled in completely. Regarding abbreviations, the participants were divided. Some of them indicated that they preferred using abbreviations, to enable faster reading, but discouraged the use of difficult abbreviations. The other participants indicated always favouring written terms since this improves readability. All participants favoured using written terms when multiple staff members (from different backgrounds) were involved, for example when it came to patient transfer, with the exception of general abbreviations.\nIn general, the medical staff agreed that an AI report would be \"a great solution\" that \"saves time, which enables more consultation time\". In addition, two of the medical staff indicated writing reports after the consult due to time limits, which can cause a loss of information. This insight is in line with previous research (Anderson et al., 2020;Lavander et al., 2016;Moy et al., 2021;Olivares Bøgeskov & Grimshaw-Aagaard, 2019;Saag et al., 2019)." }, { "figure_ref": [], "heading": "Prompt for Report Generation", "publication_ref": [], "table_ref": [], "text": "The GPT software does not have the knowledge or capability to use medical abbreviations like GPs use in their SOEP report. This can result in the metrics falsely identifying differences between written terms and their abbreviations. That, in combination with the fact that using written terms was preferred by half of the medical staff of the pre-study and since the reports will be read by staff members from different disciplines, led to the decision to change abbreviations in the GP's report to the full expression.\nThe GPT was given a Dutch prompt, of which the translated text is given in Listing 1. The formulation of the prompt was based on existing research within the C2R program (line 1, 2, 3, 4, 5, 7, 9), and has been adapted to incorporate the input of the medical staff (line 3, 4, 5, 6, 8) and literature (line 3, 4, 5). Mainly, the division between symptoms and signs (Subjective and Objective) and the definition of the Evaluation category have been added.\n1 Write a medical s.o.e.p report based on a conversation between a gp and a patient and use short and concise sentences . 2 Report in the categories of subjective , objective , evaluation , and plan . 3 Make sure that for subjective the description of the complaints of the patient is noted . 
4 Also , possible pain medication which is used by the patient and the information that emerges from the anamnesis may be noted here . 5 At objective , the observation of the symptoms by the gp and the results of the physical examination must be noted . 6 The evaluation contains the judgement of the examination and the diagnosis . 7 The treatment plan must be clear from the plan . 8 Make sure that the medical terms , such as the name of the medication are noted . 9 The content of the report must be derived from the given transcript .\nListing 1: Prompt used as input for the GPT." }, { "figure_ref": [], "heading": "Metric Selection and Execution", "publication_ref": [ "b28" ], "table_ref": [], "text": "For this study, a spread of metrics between categories (see subsection 2.1) was chosen. More popular or common metrics were preferred due to frequent application and public availability. The following 10 metrics are part of the selection: Levenshtein, WER, ROUGE-1, ROUGE-2, ROUGE-L, BLEU, F-Measure, METEOR, BertScore, WMD.\nEach accuracy metric is applied to the AI report with the GP report as reference. Five of the metrics could be run via an online application and the F-Measure was calculated using an R function. In addition, the embedding metrics, BertScore and WMD, required running Python code. For these metrics, Dutch embeddings were used. BertScore supports more than 100 languages via multilingual BERT, including Dutch, and for WMD the \"dutch-word-embeddings\" Word2Vec model was used (Nieuwenhuijse, 2018). METEOR uses n-gram comparison with synonym match. At the time of writing, there is no alternative for Dutch texts. Therefore, METEOR will mostly rely on n-gram comparison." }, { "figure_ref": [], "heading": "Human evaluation", "publication_ref": [ "b24" ], "table_ref": [ "tab_1" ], "text": "Concurrently, the AI reports are compared with the GP reports by the first authors, i.e., the human eval-uation. This is inspired by the work of Moramarco et al., 2022. This method of evaluation is adopted because it is a domain-specific method that includes the accuracy of the report and provides insight into the amount of work needed by the GP as well. For each AI report, seven aspects will be counted, as can be seen in Table 2. Firstly, the number of Missing statements and Incorrect statements, which include wrongly stated information. Next, the Additional statements, which are divided into On-topic and Off-topic. Added On-topic statements contain information that is not present in the GP report but relates to the content, e.g \"There is no pus visible, but there is blood leaking from the ear\". Added Off-topic statements contain information that is not present in the GP report and does not relate to the content, e.g. \"The patient called in sick for work\". In addition, the Number of characters and the number of words will be counted, to calculate the Word length. An independent samples t-test will be performed on the Number of characters and the Word length between the AI report and the GP report to gain more insight into a potential difference in report length. Lastly, the Post-edit time describes the time it takes to correct the AI report, i.e., adding Missing, changing Incorrect and removing Additional statements. 
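The embedding metrics singled out above (BertScore via multilingual BERT and WMD over Dutch Word2Vec vectors) can be run for Dutch roughly as follows. This is a hedged sketch assuming the bert-score and gensim packages, with an illustrative file name for the Dutch embedding model and made-up example sentences, not the authors' exact pipeline.

```python
# Sketch of running BERTScore and Word Mover's Distance on Dutch text.
from bert_score import score as bert_score
from gensim.models import KeyedVectors

ai_report = "Patiënt heeft sinds drie dagen oorpijn links, trommelvlies rood."   # made up
gp_report = "Sinds 3 dagen oorpijn links. Otoscopie: rood trommelvlies."         # made up

# BERTScore with multilingual support selected via the language code.
P, R, F1 = bert_score([ai_report], [gp_report], lang="nl")
print("BERTScore F1:", float(F1[0]))

# Word Mover's Distance over Dutch word embeddings (lower = more similar);
# the model path is illustrative, and out-of-vocabulary words are ignored by gensim.
w2v = KeyedVectors.load_word2vec_format("dutch-word-embeddings.bin", binary=True)
print("WMD:", w2v.wmdistance(ai_report.lower().split(), gp_report.lower().split()))
```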
This is interesting to consider since the goal of AI reporting is to reduce the time spent on reporting by GPs.\nAfter performing the human evaluation, Pearson correlation coefficients are calculated between the aspects of the human evaluation and every metric, excluding the Word length and Number of characters. In theory, the stronger the negative correlation, the more effective the metric is in the domain of medical reporting. Namely, an AI report ideally has few Missing, few Incorrect, and few Additional statements.\nTo compare the metrics, a single Composite Accuracy Score (CAS) is calculated for each metric. For this, the correlations per Missing, Incorrect and Added statements with the metric are normalised on a scale from 0 to 1, where 0 is the lowest (negative) correlation and 1 is the highest. Based on the normalised correlations with the Missing (MIS), Incorrect (INC), Added On-topic (ADD ON ), and Added Off-topic (ADD OFF ) statements, the Composite Accuracy Score (CAS) is calculated using Formula 1.\nCAS = (MIS + INC + ADD OFF + 0.5 × ADD ON ) / 3.5 (1)\nEvery score has a weight of 1.0, except the Added On-topic statements, which have been attributed a weight of 0.5 since their presence in the AI report is deemed less severe than the other aspects. The Post-edit time is not part of the Composite Accuracy Score because it is dependent on the other aspects of the human evaluation.\nWith respect to editing, a metric can be considered preferred if it has a low Composite Accuracy Score as well as a strong negative correlation with the Post-edit time. If a metric fulfils these requirements, it is an adequate tool to measure the accuracy of the report itself as well as the administrative burden." }, { "figure_ref": [], "heading": "FINDINGS", "publication_ref": [], "table_ref": [], "text": "Performing the method resulted in measured human evaluation aspects, correlations and the Composite Accuracy Scores." }, { "figure_ref": [], "heading": "Human Evaluation", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "As mentioned in subsection 3.5, the seven human evaluation aspects were counted for each AI report, of which the first five can be seen in Table 3. Often, the ear in question was Missing in the AI report. Incorrect statements were statements which were wrongly stated or wrongly attributed to the GP. Of the statements that were not in the GP report (Added), the distinction between On-topic and Off-topic was less direct. Mainly, On-topic statements contained additional information regarding the medical history, complaints or treatment. Statements regarding other topics than those discussed in the GP report, as well as explanations to the patient, were classified as Off-topic because these would not be of any relevance to the SOEP report written by the GP.\nThe Number of characters and the number of words were used to calculate the Word length for both the GP report and the AI report. The results of the independent samples t-test show that the AI reports are significantly longer in terms of characters (1199.29 ± 197) than the GP reports (410.71 ± 94.32), t(12) = -9.520, p < 0.001. The words used in the AI reports (6.04 ± 0.33) are significantly shorter than in the GP reports (7.62 ± 0.29), t(12) = 9.555, p < 0.001." }, { "figure_ref": [], "heading": "Correlation between Metrics and Human Evaluation Aspects", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Using the metric scores and the human evaluation aspects, the mutual correlation has been calculated.
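The following sketch works through Formula 1 under one reading of the normalisation step (min-max over the ten metrics per aspect); the Missing-column correlations are copied from Table 4, while the other normalised inputs and the helper names are illustrative placeholders.

```python
# Worked sketch of the Composite Accuracy Score (CAS) from Formula 1.
def normalise(value, all_values):
    """Min-max normalise one metric's correlation against all metrics' correlations."""
    lo, hi = min(all_values), max(all_values)
    return (value - lo) / (hi - lo) if hi > lo else 0.0

def composite_accuracy_score(mis, inc, add_on, add_off):
    # Inputs are already normalised to [0, 1]; Added On-topic is down-weighted to 0.5.
    return (mis + inc + add_off + 0.5 * add_on) / 3.5

# Missing-statement correlations of the ten metrics, taken from Table 4.
all_mis = [0.122, 0.673, -0.272, -0.564, -0.063, -0.153, -0.233, 0.109, 0.698, 0.315]
mis_norm = normalise(-0.233, all_mis)        # ROUGE-L's Missing correlation, normalised
print(round(mis_norm, 3))
print(composite_accuracy_score(mis_norm, 0.3, 0.4, 0.2))   # other inputs made up
```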
In contrast to the other metrics, for the edit distance metrics (Levenshtein and WER) a low score corresponds to good accuracy. To enable easier comparison between the metrics, the correlations of the edit distance metrics have been inverted by multiplying them by -1. All correlations between the metrics and human evaluation aspects are shown in Table 4. Ideally, these correlations are strongly negative, since the medical report should be concise and contain all, and only, relevant information. The three strongest negative correlations with the Post-edit time and the three lowest Composite Accuracy Scores have been indicated in bold in Table 4." }, { "figure_ref": [], "heading": "DISCUSSION", "publication_ref": [], "table_ref": [], "text": "Based on the findings in Section 4, several notable observations can be made." }, { "figure_ref": [], "heading": "Human Evaluation", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "The results of the human evaluation (Table 3) show that the AI reports contain on average 9 Added statements compared to the GP reports. Consequently, the AI reports are longer than the GP reports. Most Added statements are On-topic. Despite the additional information and length of the AI reports, each report misses on average 8 statements." }, { "figure_ref": [], "heading": "Added statements", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "A first noticeable pattern in Table 4 is that more than half of the metrics correlate moderately (-0.5 < r < -0.3) or strongly (r < -0.5) negatively with Added Off-topic statements. Interestingly, only three metrics have a moderate negative correlation with Added On-topic statements. This can be explained by the On-topic statements adding extra, relevant information to the content of the SOEP report. Even though these statements are added, the metrics may treat them as relevant information and therefore do not correlate strongly negatively.\nComparison with Moramarco et al., " }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "AI generated medical reports could provide support for medical staff. These reports should be as accurate as possible, to limit the time the medical staff needs to make corrections. To determine the accuracy of a text, metrics can be used. This research investigated the performance of 10 accuracy metrics by calculating the correlation between the metric score and the following human evaluation aspects: Missing statements, Incorrect statements, Additional statements and Post-edit time.\nFor each metric, the Composite Accuracy Score has been calculated, indicating its performance. Based on the CAS and the correlation with the Post-edit time, the ROUGE-L and Word Mover's Distance (WMD) metrics are preferred in the context of medical reporting, answering the research question:\nWhat is the preferred metric for measuring the difference between an automatically generated medical report and a general practitioner's report?\nBased on the results, we see that there is considerable diversity among the different metrics. Both strong positive and negative correlations with the human evaluation aspects are found, which can be explained by the different methods used by the metrics. The preferred metric depends heavily on the context of use and on which aspect is deemed more important. Therefore, no unambiguous answer can be given.
However, we created the CAS based on our context of use, identifying the preferred metrics in the context of medical reporting." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The outcome of this study is not in line with previous research, which could be due to its limitations. There are three main limitations to this study.\nFirstly, the data set used for running the accuracy metrics consists of just seven AI reports. Additionally, the transcripts used are all from GP consultations on Otitis Externa and Otitis Media Acuta, making the data limited in its medical diversity. These factors make it difficult to draw general conclusions on accuracy metrics that work for all AI generated medical reports.\nAdding to that, the GP reports were written solely based on the transcripts, and not by the GP who performed the consultation, which is not standard practice.\nLastly, the human evaluation was performed by researchers who have no prior experience in writing medical reports. Even though medical staff was consulted in this study, it would be preferred if the human evaluation was done by people with medical expertise. That way, those with a deeper comprehension of what should be included in a report could handle the more challenging cases of evaluating the generated statements' relevance." }, { "figure_ref": [], "heading": "Future Work", "publication_ref": [], "table_ref": [], "text": "The main limitations should be addressed in future work. Mainly, the study should be repeated with more medical reports, covering other pathologies. Besides, it would improve the quality of the study if the human evaluation were executed by healthcare professionals. Furthermore, the current AI reports result in low accuracy scores for each metric. Therefore, it would be beneficial if further research was done into optimising the prompt formulation, resulting in more accurate AI reports. Additionally, the human evaluation of this study does not take statements that are wrongly assigned to the SOEP categories into account, which could be addressed in future work. Finally, the use of abbreviations in generated reports could be further explored, since this was taken out of the equation for this study." }, { "figure_ref": [], "heading": "ACKNOWLEDGEMENTS", "publication_ref": [], "table_ref": [], "text": "Our thanks go to the medical staff who helped us with the pre-study. In addition, the icons of Flaticon.com enabled us to create Figure 1. Finally, many thanks go to Bakkenist for the support of this research project." } ]
Generative Artificial Intelligence (AI) can be used to automatically generate medical reports based on transcripts of medical consultations. The aim is to reduce the administrative burden that healthcare professionals face. The accuracy of the generated reports needs to be established to ensure their correctness and usefulness. There are several metrics for measuring the accuracy of AI generated reports, but little work has been done towards the application of these metrics in medical reporting. A comparative experimentation of 10 accuracy metrics has been performed on AI generated medical reports against their corresponding General Practitioner's (GP) medical reports concerning Otitis consultations. The number of missing, incorrect, and additional statements of the generated reports have been correlated with the metric scores. In addition, we introduce and define a Composite Accuracy Score which produces a single score for comparing the metrics within the field of automated medical reporting. Findings show that based on the correlation study and the Composite Accuracy Score, the ROUGE-L and Word Mover's Distance metrics are the preferred metrics, which is not in line with previous work. These findings help determine the accuracy of an AI generated medical report, which aids the development of systems that generate medical reports for GPs to reduce the administrative burden.
Comparative Experimentation of Accuracy Metrics in Automated Medical Reporting: The Case of Otitis Consultations
[ { "figure_caption": "Figure 1 .1Figure 1. Research Method Diagram, showing the method along with its input and intended output.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Overview of existing accuracy metrics for Natural Language Generation.", "figure_data": "CategoryMetricsPropertyCommon use aLevenshteinCosine similarityMT, IC, SR, SUM, DG & RGEdit distanceWER MER% of insert, delete, and replace Proportion word matches errorsSR SRWILProportion of word information lost SRROUGE-WEROUGE + word embeddingsSUMSkipthoughtsVector based similarityMTVectorExtremaVector based similarityMTEmbeddingGreedyMatching Cosine similarity of embeddingsRGUSESentence level embeddingsMTWMDEMD b on wordsIC & SUMBertScoreSimilarity with context embeddings DGMoverScoreContext embeddings + EMD bDGPrecision% relevant of all textMT, IC, SR, SUM, DG, QG, & RGRecall% relevant of all relevantMT, IC, SR, SUM, DG, QG, & RGF-ScorePrecision and recallMT, IC, SR, SUM, DG, QG, & RGText overlapBLEUn-gram precisionMT, IC, DG, QG, & RGROUGE-nn-gram recallSUM & DGROUGE-LLongest common subsequenceSUM & DGMETEORn-gram with synonym matchMT, IC, & DGCHRFn-gram F-scoreMTa Abbreviations for the subfield, as introduced by", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Human evaluation aspects along with their descriptions and abbreviations.", "figure_data": "AspectDescriptionAbr.MissingMissing in AI reportMISIncorrectIncorrect in AI reportINCAdded On-topicNot in GP report, on-topicADD ONAdded Off-topic Not in GP report, off-topicADD OFFPost-edit timeTime (s) to correct AI reportPETNr. of characters Nr. of characters in AI reportNRCWord lengthAvg. word length in AI report WLE", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Human evaluation aspects per AI report. The averages of the aspects are rounded to whole numbers.", "figure_data": "Human Evaluation Aspects R1R2R3R4R5R6R7AverageMissing statements125879878Incorrect statements21121512Added statements -On-topic69578376Added statements -Off-topic55021503Post-edit time (s)378 196 170 213 193 186 169215", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Pearson correlation between human evaluation aspects and metrics, along with the Composite Accuracy Score. The negative correlations are indicated by different intensities of orange, and the positive correlations are indicated by different intensities of blue. The three lowest Composite Accuracy Scores and PET correlations are in bold.", "figure_data": "MetricMiss.Incorr.Additional On-topic Off-topicCASPETLevenshtein0.122-0.178-0.011-0.7980.229-0.320WER0.673-0.042-0.409-0.3150.4340.385BertScore-0.2720.1260.3190.7590.6180.329WMD-0.564-0.1680.381-0.2890.241-0.591ROUGE-1-0.0630.123-0.394-0.6340.284-0.483ROUGE-2-0.1530.131-0.021-0.2590.401-0.201ROUGE-L-0.233-0.109-0.056-0.5970.209-0.461BLEU0.109-0.0130.002-0.4620.364-0.258F-Measure0.6980.119-0.434-0.3390.5010.333METEOR0.3150.467-0.2200.0450.6770.103", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "2022 Except for the WMD metric, none of the metrics strongly correlate negatively with Missing statements and Postedit time, which is not in line with the findings ofMoramarco et al., 2022. In their findings, METEOR and BLEU scored good on detecting Missing statements and Levenshtein and METEOR rank highly on the Post-edit time. 
Additionally, none of the metrics moderately or strongly correlate negatively with Incorrect statements, which is also not in line with the findings of Moramarco et al., 2022, where ROUGE scored well on identifying these statements. Post-edit time: Six metrics have a negative correlation with the Post-edit time. WMD, ROUGE-1, and ROUGE-L have the strongest negative correlation, meaning that they are the preferred metrics concerning the correlation with Post-edit time. WMD scores better than ROUGE-L when looking at the Post-edit time. Moramarco et al., 2022 found that Levenshtein, BertScore, and METEOR are the most suitable metrics, which does not correspond with the findings of our work.", "figure_data": "Opposite of preferred correlations: The WER and F-Measure strongly correlate (r > 0.5) Missing statements with better accuracy, and BertScore correlates a high number of Off-topic statements with better accuracy. These results indicate exactly the opposite of what is preferred, and these metrics therefore seem less suitable for the evaluation of automatically generated reports. Composite Accuracy Score: When looking at the CAS, the WER, BertScore, F-Measure, and METEOR metrics score high (> 0.5), indicating that these metrics are not suitable for the current application. The high CAS of BertScore is remarkable, since this metric performed as one of the best in the study of Moramarco et al., 2022. The high CAS of METEOR could be explained by the fact that the transcripts used are Dutch, a language unsupported by the metric. Therefore, it cannot use synonym matching, which is the added benefit of the METEOR metric compared to other text overlap metrics. There is no consensus within the categories of edit distance, embedding, and text overlap metrics. Consequently, no conclusions can be drawn regarding a preferred category. Preferred metrics: Based on the CAS and the Post-edit time correlations, ROUGE-L and WMD are the preferred metrics, since they are in the top 3 for both. The WMD scores slightly worse in terms of CAS, which can be explained by the fact that it has a positive correlation (0.381) with the Added On-topic statements, whereas ROUGE-L has only negative correlations with the human evaluation metrics. However,", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" } ]
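The correlation analysis above reduces to a standard Pearson computation. The following is a minimal sketch, assuming the per-report metric scores and human evaluation aspects are available as plain Python lists; the ROUGE-L values shown are hypothetical placeholders, while the post-edit times are the per-report values from Table 3.

from scipy.stats import pearsonr

# Post-edit times (s) for reports R1..R7, taken from Table 3.
post_edit_time = [378, 196, 170, 213, 193, 186, 169]
# Hypothetical ROUGE-L scores for the same seven reports (placeholder values).
rouge_l = [0.31, 0.28, 0.35, 0.26, 0.30, 0.33, 0.29]

# Pearson's r between the automatic metric and the human evaluation aspect;
# a strong negative r means higher metric scores go with less correction effort.
r, p = pearsonr(rouge_l, post_edit_time)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")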
Wouter Faber; Renske Eline Bootsma; Tom Huibers; Sandra Van Dulmen; Sjaak Brinkkemper
[ { "authors": "J Anderson; J Leubner; S Brown", "journal": "Family Medicine", "ref_id": "b0", "title": "Ehr overtime: An analysis of time spent after hours by family physicians", "year": "2020" }, { "authors": "S Banerjee; A Lavie", "journal": "", "ref_id": "b1", "title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments", "year": "2005" }, { "authors": "S Brinkkemper", "journal": "", "ref_id": "b2", "title": "Reducing the administrative burden in healthcare: Speech and action recognition for automated medical reporting", "year": "2022" }, { "authors": "A Celikyilmaz; E Clark; J Gao", "journal": "", "ref_id": "b3", "title": "Evaluation of text generation: A survey", "year": "2020" }, { "authors": "D Cer; Y Yang; S.-Y Kong; N Hua; N Limtiaco; R S John; N Constant; M Guajardo-Cespedes; S Yuan; C Tar", "journal": "", "ref_id": "b4", "title": "Universal sentence encoder", "year": "2018" }, { "authors": "A R Fabbri; W Kryściński; B Mccann; C Xiong; R Socher; D Radev", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b5", "title": "Summeval: Reevaluating summarization evaluation", "year": "2020" }, { "authors": "G Forgues; J Pineau; J.-M Larchevêque; R Tremblay", "journal": "", "ref_id": "b6", "title": "Bootstrapping dialog systems with word embeddings. Nips, modern machine learning and natural language processing workshop", "year": "2014" }, { "authors": "R L Gardner; E Cooper; J Haskell; D A Harris; S Poplau; P J Kroth; M Linzer", "journal": "Journal of the American Medical Informatics Association", "ref_id": "b7", "title": "Physician stress and burnout: the impact of health information technology", "year": "2018" }, { "authors": "B Goodrich; V Rao; P J Liu; M Saleh", "journal": "", "ref_id": "b8", "title": "Assessing the factual accuracy of generated text", "year": "2019" }, { "authors": "A Hauer; H Waukau; P Welch", "journal": "Wmj", "ref_id": "b9", "title": "Physician burnout in wisconsin: An alarming trend affecting physician wellness", "year": "2018" }, { "authors": "A J Heuer", "journal": "International Journal of Health Policy and Management", "ref_id": "b10", "title": "More evidence that the healthcare administrative burden is real, widespread and has serious consequences comment on\" perceived burden due to registrations for quality monitoring and improvement in hospitals: A mixed methods study", "year": "2022" }, { "authors": "L Heun; D T Brandau; X Chi; P Wang; J Kangas", "journal": "International Journal of Medical Informatics", "ref_id": "b11", "title": "Validation of computer-mediated open-ended standardized patient assessments", "year": "1998" }, { "authors": "J Houwen; P L Lucassen; H W Stappers; W J Assendelft; S Van Dulmen; T C Olde Hartman", "journal": "British Journal of General Practice", "ref_id": "b12", "title": "Improving gp communication in consultations on medically unexplained symptoms: A qualitative interview study with patients in primary care", "year": "2017" }, { "authors": "R Kiros; Y Zhu; R R Salakhutdinov; R Zemel; R Urtasun; A Torralba; S Fidler", "journal": "", "ref_id": "b13", "title": "Skip-thought vectors", "year": "2015" }, { "authors": "W Kryściński; N S Keskar; B Mccann; C Xiong; R Socher", "journal": "", "ref_id": "b14", "title": "Neural text summarization: A critical evaluation", "year": "2019" }, { "authors": "M Kusner; Y Sun; N Kolkin; K Weinberger", "journal": "", "ref_id": "b15", "title": "From word embeddings to document distances", "year": "2015" }, { "authors": 
"E Kwint; A Zoet; K Labunets; S Brinkkemper", "journal": "Proceedings of BIOSTEC", "ref_id": "b16", "title": "How different elements of audio affect the word error rate of transcripts in automated medical reporting", "year": "2023" }, { "authors": "P Lavander; M Meriläinen; L Turkki", "journal": "Journal of Nursing Management", "ref_id": "b17", "title": "Working time use and division of labour among nurses and health-care workers in hospitals-a systematic review", "year": "2016" }, { "authors": "V I Levenshtein", "journal": "Soviet physics doklady", "ref_id": "b18", "title": "Binary codes capable of correcting deletions, insertions, and reversals", "year": "1966" }, { "authors": "C.-Y Lin", "journal": "Text summarization branches out", "ref_id": "b19", "title": "Rouge: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "L Maas; M Geurtsen; F Nouwt; S Schouten; R Van De Water; S Van Dulmen; F Dalpiaz; K Van Deemter; S Brinkkemper", "journal": "HICSS", "ref_id": "b20", "title": "The care2report system: Automated medical reporting as an integrated solution to reduce administrative burden in healthcare", "year": "2020" }, { "authors": "W Maroengsit; T Piyakulpinyo; K Phonyiam; S Pongnumkul; P Chaovalit; T Theeramunkong", "journal": "", "ref_id": "b21", "title": "A survey on evaluation methods for chatbots", "year": "2019" }, { "authors": "M C Meijers; J Noordman; P Spreeuwenberg; T C Olde Hartman; S Van Dulmen", "journal": "Family practice", "ref_id": "b22", "title": "Shared decision-making in general practice: An observational study comparing 2007 with 2015", "year": "2019" }, { "authors": "S Molenaar; L Maas; V Burriel; F Dalpiaz; S Brinkkemper", "journal": "Springer International Publishing", "ref_id": "b23", "title": "Medical dialogue summarization for automated reporting in healthcare", "year": "2020" }, { "authors": "F Moramarco; A P Korfiatis; M Perera; D Juric; J Flann; E Reiter; A Savkov; A Belz", "journal": "", "ref_id": "b24", "title": "Human evaluation and correlation with automatic metrics in consultation note generation", "year": "2022" }, { "authors": "A C Morris; V Maier; P Green", "journal": "", "ref_id": "b25", "title": "From wer and ril to mer and wil: Improved evaluation measures for connected speech recognition", "year": "2004" }, { "authors": "A J Moy; J M Schwartz; R Chen; S Sadri; E Lucas; K D Cato; S C Rossetti", "journal": "Journal of the American Medical Informatics Association", "ref_id": "b26", "title": "Measurement of clinical documentation burden among physicians and nurses using electronic health records: a scoping review", "year": "2021" }, { "authors": "J P Ng; V Abrecht", "journal": "", "ref_id": "b27", "title": "Better summarization evaluation with word embeddings for rouge", "year": "1925" }, { "authors": "A Nieuwenhuijse", "journal": "", "ref_id": "b28", "title": "Coosto -Dutch Word Embeddings", "year": "2018-05" }, { "authors": "B Olivares Bøgeskov; S L S Grimshaw-Aagaard", "journal": "Nordic Journal of Nursing Research", "ref_id": "b29", "title": "Essential task or meaningless burden? 
nurses' perceptions of the value of documentation", "year": "2019" }, { "authors": "K Papineni; S Roukos; T Ward; W.-J Zhu", "journal": "", "ref_id": "b30", "title": "Bleu: A method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "V Podder; V Lew; S Ghassemzadeh", "journal": "StatPearls Publishing", "ref_id": "b31", "title": "Soap records", "year": "2022-08" }, { "authors": "M Popović", "journal": "", "ref_id": "b32", "title": "Chrf: Character n-gram f-score for automatic mt evaluation", "year": "2015" }, { "authors": "S L Robertson; M D Robinson; A Reid", "journal": "Journal of graduate medical education", "ref_id": "b33", "title": "Electronic health record effects on work-life balance and burnout within the i3 population collaborative", "year": "2017" }, { "authors": "H S Saag; K Shah; S A Jones; P A Testa; L I Horwitz", "journal": "Journal of general internal medicine", "ref_id": "b34", "title": "Pajama time: Working after work in the electronic health record", "year": "2019" }, { "authors": "A B Sai; A K Mohankumar; M M Khapra", "journal": "ACM Computing Surveys", "ref_id": "b35", "title": "A survey of evaluation metrics used for nlg systems", "year": "2020" }, { "authors": "B Sapkota; R Shrestha; S Giri", "journal": "Medicine", "ref_id": "b36", "title": "Community pharmacy-based soap notes documentation", "year": "2022" }, { "authors": "J.-H Seo; H.-H Kong; S.-J Im; H Roh; D.-K Kim; H.-O Bae; Y.-R Oh", "journal": "Korean journal of medical education", "ref_id": "b37", "title": "A pilot study on the evaluation of medical student documentation: Assessment of soap notes", "year": "2016" }, { "authors": "T D Shanafelt; L N Dyrbye; C Sinsky; O Hasan; D Satele; J Sloan; C P West", "journal": "Mayo Clinic Proceedings", "ref_id": "b38", "title": "Relationship between clerical burden and characteristics of the electronic environment with physician burnout and professional satisfaction", "year": "2016" }, { "authors": "S Sharma; L El Asri; H Schulz; J Zumer", "journal": "", "ref_id": "b39", "title": "Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation", "year": "2017" }, { "authors": "K.-Y Su; M.-W Wu; J.-S Chang", "journal": "", "ref_id": "b40", "title": "A new quantitative quality measure for machine translation systems", "year": "1992" }, { "authors": "J P Turian; L Shen; I D Melamed", "journal": "", "ref_id": "b41", "title": "Evaluation of machine translation and its evaluation", "year": "2003" }, { "authors": "G T Van Der Werf", "journal": "Huisarts Wet", "ref_id": "b42", "title": "Probleemlijst, soep en icpc", "year": "1996" }, { "authors": "L Weed", "journal": "New England Journal of Medicine", "ref_id": "b43", "title": "Medical records that guide and teach", "year": "1968" }, { "authors": "J Wegstapel; T Den Hartog; B S Mick Sneekes; E Van Der Scheer-Horst; S Van Dulmen; S Brinkkemper", "journal": "BIOSTEC", "ref_id": "b44", "title": "Automated identification of yellow flags and their signal terms in physiotherapeutic consultation transcripts", "year": "2023" }, { "authors": "T Zhang; V Kishore; F Wu; K Q Weinberger; Y Artzi", "journal": "", "ref_id": "b45", "title": "Bertscore: Evaluating text generation with bert", "year": "2019" }, { "authors": "W Zhao; M Peyrard; F Liu; Y Gao; C M Meyer; S Eger", "journal": "", "ref_id": "b46", "title": "Moverscore: Text generation evaluating with contextualized embeddings and earth mover distance", "year": "2019" } ]
[ { "formula_coordinates": [ 6, 78.18, 181.35, 208.13, 22.54 ], "formula_id": "formula_0", "formula_text": "CAS = MIS + INC + ADD OFF + 0.5 × ADD ON 3.5 (1)" } ]
2024-01-19
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b14", "b42", "b42", "b1", "b7", "b32", "b12", "b10", "b23", "b4", "b13", "b35", "b4", "b35" ], "table_ref": [], "text": "The application of Artificial Intelligence (AI), notably Machine Learning (ML), to enhance healthcare and assist medical decision-making is a rapidly growing field (Hicks et al., 2022). Large Language Models (LLM) are effectively tackling challenging healthcare tasks, such as disease diagnosis, treatment planning, and medical reporting, using personalized medical prompts, even with limited data (Wang et al., 2023). Prompt engineering in the medical domain, including classification, data generation, anomaly detection, content augmentation, question answering, and medical inference, is crucial in improving these healthcare outcomes (Wang et al., 2023). Ensuring high levels of accuracy and reliability in these AI-driven healthcare applications is essential for their successful integra-tion into medical support systems (Balagurunathan et al., 2021).\nExpanding on the role of AI and ML in healthcare, Electronic Health Records (EHRs) have become a pivotal focus, revolutionizing medical data management and communication (Coorevits et al., 2013). EHR documentation has led to significant changes in medical practice with an increase in data access and communication among medical professionals compared to paper records (Overhage & McCallie Jr, 2020). However, one significant challenge has been the time-consuming data input and hindrances to inperson patient care, resulting in professional dissatisfaction (Friedberg et al., 2014). In response, to lessen this administrative burden, automation of this process was developed by several research initiatives, as demonstrated with the Systematic Literature Review of van Buchem et al., 2021. Care2Report (C2R) is the only scientific initiative that focuses on the Dutch medical field and automates medical reporting by utilizing multimodal consultation recordings (audio, video, and Bluetooth), enabling knowledge rep-resentation, ontological dialogue interpretation, report production, and seamless integration with electronic medical record systems (ElAssy et al., 2022;Maas et al., 2020). This automated medical reporting serves as a prime example of prompt engineering, specifically in the domain of medical dialogue summarization (MDS), illustrating how technology can streamline healthcare processes.\nIn automated MDS, the generation of automated medical reporting relies on utilizing state-of-theart LLMs like Generative Pre-trained Transformers (GPT). The level of detail and specificity in the prompts directly influences the model's comprehension and its ability to produce the expected results (Bigelow, 2023;Heston, 2023;Robinson, 2023). Several articles on the effective crafting of prompts, emphasize the significance of context and clarity in the prompts, including the provision of additional relevant information for optimal results (Bigelow, 2023;Robinson, 2023).\nAlthough prompt engineering has a substantial impact on the performance of LLMs, its full potential in the domain of medical problem-solving remains largely unexplored. Thus, this research aims to answer the following research question: RQ: Which prompt formulation detail yields high performance in automated medical reporting?\nTo answer the research question we focus on prompt engineering related to automated medical reporting. 
First, we reviewed existing literature for research within prompt engineering, automatic text summarization, and medical dialogue summarization (Section 2). Subsequently, Section 3 reports on prompt formulation, execution, and analysis. The findings are presented and discussed (Section 4). Finally, the work is summarized and suggestions are provided for future work (Section 5)." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "This study builds on prior research in the realm of prompt engineering, aiming to employ diverse prompting methodologies for generating automated medical reports within MDS, a subset of Automatic Text Summarization (ATS)." }, { "figure_ref": [], "heading": "Prompt Engineering", "publication_ref": [ "b43", "b13" ], "table_ref": [], "text": "A human-initiated prompt serves as the initial step for GPT in comprehending the context and meeting user expectations by producing the desired output (White et al., 2023). This process includes designing, implementing, and refining prompts to optimize their efficacy in eliciting this intended result (Heston, 2023). An example prompt in the context of this work is shown in Listing 1. Based on literature, we decided to use the shot prompting and pattern prompting methods to achieve the highest-performing output since these provide an opportunity to demonstrate an example of the expected output and to delineate the context." }, { "figure_ref": [], "heading": "Shot Prompting", "publication_ref": [ "b8", "b0", "b0", "b0", "b34", "b6", "b8", "b21", "b0", "b9", "b39" ], "table_ref": [], "text": "In-context learning is a method where language models learn tasks through a few examples provided as demonstrations (Dong et al., 2022). Shot prompting employs in-context learning to guide the model's output. There are three strategies: zero-shot, oneshot, and few-shot prompting (Anil, 2023). Zero-shot prompting, also known as direct prompting, involves giving the model a task without specific examples, relying solely on the knowledge acquired during training (Anil, 2023). In contrast, one-shot and few-shot prompting provide examples or 'shots' to the model at run-time, serving as references for the expected response's structure or context (Anil, 2023;Reynolds & McDonell, 2021). The model then infers from these examples to perform the task. Since examples are presented in natural language, they provide an accessible way to engage with language models and facilitate the incorporation of human knowledge into these models through demonstrations and templates (Brown et al., 2020;Dong et al., 2022;P. Liu et al., 2023). Currently, there is no universally standardized methodology for providing examples in shot-prompting (Anil, 2023;Dragon, 2023;Tam, 2023). For more straightforward tasks, like language translation or classification, a prompt could be formulated as demonstrated in Listing 2. For more complex tasks, like content generation, a prompt can be constructed as demonstrated in Listing 3. " }, { "figure_ref": [], "heading": "Pattern Prompting", "publication_ref": [ "b43", "b43", "b43", "b43", "b43" ], "table_ref": [], "text": "Pattern prompting involves the availability of various patterns that can be chosen and employed as the basis for the formulation of prompts. These patterns facilitate interactions with conversational LLMs across various contexts, extending beyond just discussing interesting examples or domain-specific prompts (White et al., 2023). 
The aim is to codify this knowledge into pattern structures that enhance the ability to apply it in different contexts and domains where users encounter similar challenges, although not necessarily identical ones. This approach promotes greater reuse and adaptability of these patterns for diverse use cases and situations (White et al., 2023).\nThe study of White et al., 2023 introduces, among others, the context control pattern category. Context control captures the context manager pattern, which enables users to specify or remove context from the prompt. \"By focusing on explicit contextual statements or removing irrelevant statements, users can help the LLM better understand the question and generate more accurate responses\" (White et al., 2023). The greater the clarity in the statements, the higher the likelihood that the LLM will respond with the intended action. Possible context statements are: \"within the scope of X\", \"consider Y\", \"ignore Z\"; an example is shown in Listing 4 (White et al., 2023).\n1 Listen to this transcript between doctor and patient and make a EHR entry from it . 2 Consider the medical guidelines . 3 Do not consider irrelevant statements .\nListing 4: Example of a prompt using the context manager pattern." }, { "figure_ref": [], "heading": "Automatic Text Summarization", "publication_ref": [ "b30", "b30", "b33", "b30", "b44", "b44", "b44", "b45" ], "table_ref": [], "text": "Since the introduction of transformer-based methods in ATS, the usage of prompt engineering has been instrumental in enhancing the performance of ATS processes. In ATS, various pragmatic algorithms can be integrated into computers to generate concise summaries of information (Mridha et al., 2021). When used in Natural Language Processing (NLP), ATS is used to evaluate, comprehend, and extract information from human language (Mridha et al., 2021). The introduction of transformer-based models like GPT (Radford et al., 2019) shows improved performance in NLP-tasks (Mridha et al., 2021) which is beneficial for abstractive summarization.\nAbstractive summarization creates summaries by introducing new phrases or words not present in the original text. To achieve accurate abstractive summaries, the model must thoroughly comprehend the document and express that comprehension concisely through new terms or alternative expressions (Widyassari et al., 2019). The opposite of abstractive summarization, is extractive summarization, a method where the summary consists entirely of extracted content (Widyassari et al., 2019). Extractive summarization has been used most frequently because it is easier, but the summaries generated are far from human-made summaries, in contrast to abstractive summarization (Widyassari et al., 2019;Yadav et al., 2022)." }, { "figure_ref": [], "heading": "Medical Dialogue Summarization", "publication_ref": [ "b16", "b16", "b19", "b31", "b26", "b24" ], "table_ref": [], "text": "In MDS, it is important that the summaries are at least partly abstractive. In one respect, the reports are generated from dialogue, so extracting literal (sub-)sentences will not lead to a coherent report; conversely, the summaries must be comparable to the human-made versions of the general practitioners (GP). In MDS, the relevant medical facts, information, symptoms, and diagnosis must be retrieved from the dialogue and presented either in the form of structured notes or unstructured summaries (Jain et al., 2022). 
The most common type of medical notes are SOAP notes: Subjective information reported by the patient, Objective observations, Assessment by medical professional and future Plans (Jain et al., 2022;Krishna et al., 2021).\nPrevious work in MDS has produced the transformer-based approaches of MEDSUM-ENT (Nair et al., 2023), MedicalSum (Michalopoulos et al., 2022), and SummQA (Mathur et al., 2023). " }, { "figure_ref": [], "heading": "GP:", "publication_ref": [ "b31", "b26", "b24" ], "table_ref": [], "text": "We're just going to take a look. There is some fluid. Also, air bubbles behind the eardrum. That is clearly visible.\nP: Advice xylomethazine 1 wk, continue antibiotics, review symptoms in 1 week. Consider prescribing Flixonase, referral to ENT? P: Yes, yes, yes, that's correct. It gurgles and it rattles and it rings. And it's just blocked. GP: Yes, I believe that when I see it like this. It doesn't look red. It doesn't appear to be really inflamed.\n... GP: I think, for now, at least, you should finish the antibiotics. P: That's two more days. GP: Yes, and continue using the nasal spray, or the other nasal spray, for another week and see how it goes. Just come back if it's still not better after a week. And if it persists, well, maybe then you should see the ENT specialist. • \"MEDSUM-ENT is a medical conversation summarization model that takes a multi-stage approach to summarization, using GPT-3 as the backbone\". MEDSUM-ENT first extracts medical entities and their affirmations and then includes these extractions as additional input that informs the final summarization step through prompt chaining. Additionally, MEDSUM-ENT exploits few-shot prompting for medical concept extraction and summarization through in-context example selection. Their study concludes that summaries generated using this approach are clinically accurate and preferable to naive zero-shot summarization with GPT-3 (Nair et al., 2023).\n• MedicalSum is a sequence-to-sequence architecture for summarizing medical conversations by integrating medical domain knowledge from the Unified Medical Language System (UMLS) to increase the likelihood of relevant medical facts being included in the summarized output. Their analysis shows that MedicalSum produces accurate AI-generated medical documentation (Michalopoulos et al., 2022).\n• SummQA is a \"two-stage process of selecting semantically similar dialogues and using the top-k similar dialogues as in-context examples for GPT-4\". They generate section-wise summaries and classify these summaries into appropriate section headers. Their results highlight the effectiveness of few-shot prompting for this task (Mathur et al., 2023).\nThe present study not only builds upon this existing knowledge base by integrating a combination of shot prompting and context patterns into prompt engineering but also includes a crucial human evaluation component, in addition to the accuracy measurement. This human evaluation provides comprehensive insights into prompt performance beyond computerbased metrics. Leveraging GPT-4 for Dutch consultations, we ensure that the resulting medical reports adhere to the widely recognized SOAP guidelines. It is noteworthy that while prior studies have demonstrated the efficacy of shot-prompting, this is the first published study to incorporate both shot prompting and context pattern prompting in the domain of Dutch MDS, thereby making a significant contribution to the Dutch medical field." 
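To make the prompting strategies discussed above concrete, the sketch below assembles a two-shot prompt with scope and domain context statements and sends it to a chat-completion endpoint at temperature 0. It is a minimal illustration under stated assumptions: the Azure OpenAI Python client (openai >= 1.x) is assumed, and the endpoint, API version, deployment name, and example texts are placeholders rather than the exact configuration used in the study.

from openai import AzureOpenAI  # assumed client library; credentials below are placeholders

client = AzureOpenAI(api_key="<key>", api_version="<api-version>", azure_endpoint="<endpoint>")

BASE = ("Create a medical report from the consultation, follow the SOAP guidelines, "
        "and use the information found in the provided transcript as the sole source.")
SCOPE = ("Within the scope of medical dialogue summarization. "
         "Consider that you are a general practitioner who writes the medical report.")
DOMAIN = ("Consider that the report is used for communication between doctors who use "
          "abbreviations and short sentences. Consider that the division between left and right, "
          "and the medication dosage are important.")

def build_two_shot_prompt(transcript: str, example_reports: list[str]) -> str:
    # Shots are example SOAP reports shown to the model as references for the expected output.
    shots = "\n\n".join(f"Example {i + 1}:\n{ex}" for i, ex in enumerate(example_reports))
    return f"{SCOPE} {BASE} {DOMAIN}\nUse the following examples:\n{shots}\n\nTranscript:\n{transcript}"

response = client.chat.completions.create(
    model="<gpt-4-deployment>",  # placeholder deployment name
    messages=[{"role": "user", "content": build_two_shot_prompt("<transcript>", ["<SOAP 1>", "<SOAP 2>"])}],
    temperature=0,  # minimise variability of the generated report
)
print(response.choices[0].message.content)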
}, { "figure_ref": [ "fig_1" ], "heading": "STUDY DESIGN", "publication_ref": [ "b38", "b23", "b25", "b15" ], "table_ref": [ "tab_2" ], "text": "We conducted a causal-comparative study to identify the cause-effect relationship between the formulation detail of the prompt and the performance of the auto-mated medical report (Schenker & Rumrill Jr, 2004). We followed the approach of the C2R program, by using transcripts that were made of the verbal interaction during a series of video-recorded consultations between GPs and their patients (Figure 1) (Maas et al., 2020;Meijers et al., 2019). The recordings, for which patients as well as GPs provided informed consent, were made as part of previous communication projects carried out by researchers at Radboudumc and Nivel (Netherlands institute for health services research) (Houwen et al., 2017). Subsequently, medical professionals examined these transcripts to generate SOAP medical reports, with an illustrative example presented in Table 1. These SOAP reports are used in the study as a human reference for comparison with the automatically generated reports. The automatically generated medical reports were produced by GPT based on various prompt formulations. Using prompt engineering, the prompts were created using the shot prompting and context manager pattern techniques. Each executed prompt resulted in medical reports that were analyzed to determine which prompt yielded the best results." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Formulation of Prompts", "publication_ref": [ "b2", "b17" ], "table_ref": [], "text": "The prompts formulated in this work combine shot prompting and context pattern prompting (Figure 2). First, a base prompt was established upon which all other elements in the prompt could be built. Variability in performance can then be attributed solely to differences in shots or context, rather than possible other factors. The base prompt compels the GPT to solely utilize elements present in the transcript to prevent hallucinations (Banerjee et al., 2023;Ji et al., 2023).\nThis base prompt was initially employed to construct three versions of shot-prompting: zero-shot, one-shot, and two-shot. The most effective shotprompting among these three prompts was selected. Using the context manager pattern, an increase in context was added to the prompt to measure the effect of incorporating more context into the prompt. The context is divided into two types of contexts: scope Each of these statements, as well as various combinations of them, as illustrated in Figure 2, were included to assess their individual, as well as their combined effects." }, { "figure_ref": [], "heading": "Running of Prompts", "publication_ref": [ "b27" ], "table_ref": [], "text": "The crafted prompts served as a means to collect and assess the data concerning their performance in practice. The formulated prompts were run in a selfwritten prompt engineering software supported by the Azure OpenAI Service, which is a fully managed service that allows developers to easily integrate OpenAI models into applications (Mickey, 2023). GPT-4 was used with a temperature of 0; GPT-4 is the current best-performing GPT-version and a temperature of 0 minimizes the creativity and diversity of the text generated by the GPT model (Y. 
Liu et al., 2023).\nAs a data source, seven real-world Dutch consultations between a GP and patients, concerning Otitis Externa and Otitis Media Acuta, were utilized and employed in three distinct manners:\n• Five transcriptions of these consultations served as input data to create automated medical reports.\n• Five manually created SOAP reports (of these five transcriptions) by doctors were employed as a human reference for the automated medical reports.\n• Two manually created SOAP reports by doctors were used as examples in shot-prompting.\nIt has been ensured that both an external and middle ear infection consultation are included in the examples, but the distinction between input and example data has been randomly made. On average, the dialogue transcriptions consisted of 1209 words (SD = 411), ranging between 606 and 1869 words. The manually created SOAP reports consisted, on average, of 60 words (SD = 17), ranging between 37 and 87 words." }, { "figure_ref": [], "heading": "Analysis of Prompts", "publication_ref": [ "b14" ], "table_ref": [], "text": "Despite growing interest in ML as a tool for medicine, there is a lack of knowledge about how these models operate and how to properly assess them using various metrics (Hicks et al., 2022). In this study, the resulting automated reports were evaluated against the human reference reports using accuracy metrics and a human evaluation. By combining these quantitative and qualitative insights, this two-step review approach gives a comprehensive assessment of the automated reports' performance." }, { "figure_ref": [], "heading": "Accuracy Measurement", "publication_ref": [ "b3", "b40", "b20", "b40" ], "table_ref": [], "text": "We used ROUGE as an accuracy metric since this is the most used text summarization evaluation metric to automatically evaluate the quality of a generated summary by comparing it to a human reference and it is suitable for our Dutch reports (Barbella & Tortora, 2022;Tangsali et al., 2022). The ROUGE metric code offered by the HuggingFace library was used to calculate the ROUGE1, and ROUGEL scores of the automated medical reports (Lin, 2004). ROUGE1 assessed the unigram similarities, and ROUGEL the longest common subsequence of words, between the automated report and the human reference reports (Tangsali et al., 2022).\nThe generation of the automated medical report is stochastic because generative AI models frequently display variety in their replies to a given prompt. To account for this variability every prompt was run five times on all transcripts, yielding distinct responses with each run. The prompts were run five times to strike a balance between robustness and computational efficiency, taking the trade-off between thorough analysis and computational costs into account. For every run, ROUGE was calculated. The overall performance and consistency of the automated medical reports are indicated by computing the average ROUGE score per consultation. Finally, an overall mean of these averages, with their standard deviations, was calculated and presented in the findings." }, { "figure_ref": [], "heading": "Human Evaluation", "publication_ref": [ "b11", "b37", "b29" ], "table_ref": [], "text": "It is important to note that none of the automatic evaluation metrics are perfect and that human evaluation is still essential to ensure the quality of generated summaries (Falcão, 2023). For the human evaluation, the generated reports were manually analyzed. 
The words in the reports were categorized into three groups based on whether they were identical, paraphrased, or additional to the human reference. We also identified and classified the additional statements in the automatic reports into distinct categories. The identified categories were: duration of complaints, duration of treatment, previously tried treatments, doctor's observations, specific complaints (all reported symptoms by the patient), refer-ral to which hospital, wait for results, discussed treatment (all specific steps that the GP reports to the patient), expected patient actions, and other complaints that are ultimately not related to the diagnosis made in the human reference. Based on the clinical report idea of Savkov et al., 2022 six medical professionals were asked to evaluate the importance of these classified additions in a SOAP report. Based on the response of the medical professionals, the additions were classified according to an adapted version of the taxonomy of error types by Moramarco et al., 2022. Not all of their errors were observed in our study, besides, we replaced their \"incorrect order of statements error\" with a \"categorization error\", and we identified \"redundant\" statements additionally." }, { "figure_ref": [], "heading": "FINDINGS AND DISCUSSION", "publication_ref": [], "table_ref": [], "text": "Running of the formulated prompts, resulted in automated medical reports with wordcounts shown in Table 2. The automated reports are approximately twice as long as the human references, indicating a significant disparity in length of the generated content. This could be explained by the fact that GPT generates full sentences, providing more detailed descriptions, while GPs tend to use abbreviations and keywords to convey the same information more concisely. Four out of six GPs in the expert panel indicated that they prefer abbreviations and keywords over full sentences, however, one GP preferred full sentences." }, { "figure_ref": [], "heading": "Accuracy Measurement", "publication_ref": [], "table_ref": [], "text": "In the evaluation of the accuracy of the prompts, first, the shot-prompting technique was evaluated, followed by the context manager pattern technique that built on the optimal numbers of shots." }, { "figure_ref": [], "heading": "Shot-prompting", "publication_ref": [ "b34", "b6", "b6", "b47" ], "table_ref": [ "tab_4" ], "text": "Table 3 shows the comparison of the different shotprompting approaches. The comparison shows that the zero-shot prompting approach resulted in the low- est ROUGE scores (0.121 and 0.079). One-shot prompting resulted in slightly higher scores (0.150 and 0.104) and the two-shot prompting approach resulted in the highest ROUGE scores (0.174 and 0.123). This result shows that adding shots to a prompt improves the performance. This can be explained by the fact that the shots serve as a reference for the expected output, enabling the GPT to generate similar outputs, which is in line with earlier research (Reynolds & McDonell, 2021). Adding an increasing number of shots could result in higher performances than two-shots since few-shot prompting is generally meant to include a larger set of examples (Brown et al., 2020). This was, however, not possible due to the limited data set. Controversially, using fewer examples, makes it possible to create more well-crafted examples and comes closer to human performance (Brown et al., 2020). 
Additionally, Zhao et al., 2021 found that few-shot prompting might introduce biases into certain answers.\nIt is also worth considering that the absence of a universally accepted method for applying shot prompting introduces a degree of uncertainty regarding the most effective approach. Including the transcripts with the sample SOAP reports, rather than only presenting the SOAP report as an example could have potentially produced different results. However, it is important to note that the main goal of this study was to teach the GPT how to correctly use the SOAP format and how to describe items in the SOAP categories." }, { "figure_ref": [], "heading": "Context manager pattern", "publication_ref": [ "b5" ], "table_ref": [ "tab_5" ], "text": "The two-shot prompting strategy produced the highest scores, thus the context manager pattern was added to this foundation. Scope context and domain context were evaluated separately as well as the combination of the two types of context. In the assessment of the context manager pattern, a slight variation could be observed in the ROUGE scores based on different contextual additions (Table 4).\nThe combined scope context (0.179 and 0.126) scored lower than the combined domain context (0.220 and 0.167). This would suggest that scope context has little effect on the quality of reports that are generated. However, the combination of scope context and domain context (0.250 and 0.189) resulted in higher ROUGE scores than domain context by itself. A noteworthy finding is the difference between domain contexts c and domain context d, where context d produced lower scores (0.173 and 0.121) than context c (0.242 and 0.179). Remarkably, contexts c and context d together (0.220 and 0.167) also produced lower results than context c by itself. This suggests a potential negative effect of context d on the overall performance. To test this, a prompt was run that excluded context d from the prompt but this led to even lower overall scores (0.239 and 0.178). This decline in score may be explained by the limited dataset, which could have resulted in skewed results.\nA potential reason why domain context increases the performance more than scope context is that the shot prompting already provides clear direction on how the GPT should behave; it has already set the context to the medical field. Prompting to use abbreviations, short sentences, and keywords (context c), may have had a considerable influence since GPT itself tends to make long sentences and provide as much information as possible. Prohibiting this action resulted in improved performance in the automated report. It is notable that it is unexpected that the GPT does not already do this after the shot prompting, but this could possibly be explained because only SOAP examples were used without including the transcripts in the examples.\nThis study also investigated the inclusion of a list of abbreviations within the prompt and found that it had a positive impact on the results, with ROUGE scores of 0.273 and 0.261. However, it was ultimately not selected as the optimal prompt since the use of abbreviations varies between hospitals and healthcare providers (Borcherding & Morreale, 2007), making it difficult to create a universally applicable prompt that incorporates all relevant abbreviations." 
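The ROUGE means and standard deviations reported above can be computed along the following lines. This is a sketch only, assuming the HuggingFace evaluate package (the paper refers to the HuggingFace ROUGE implementation) and that the repeated generations per consultation are already available as strings; recent versions of the package return ROUGE scores as plain floats.

import statistics
import evaluate  # HuggingFace evaluation library

rouge = evaluate.load("rouge")

def rouge_over_runs(generated_reports: list[str], human_reference: str) -> dict:
    # Score each stochastic generation (e.g., five runs per consultation) against the GP reference.
    r1, rl = [], []
    for report in generated_reports:
        scores = rouge.compute(predictions=[report], references=[human_reference])
        r1.append(scores["rouge1"])
        rl.append(scores["rougeL"])
    return {
        "rouge1_mean": statistics.mean(r1), "rouge1_sd": statistics.stdev(r1),
        "rougeL_mean": statistics.mean(rl), "rougeL_sd": statistics.stdev(rl),
    }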
}, { "figure_ref": [], "heading": "Human Evaluation", "publication_ref": [], "table_ref": [], "text": "The results from the quantitative approach showed that the two-shot prompting approach in combination with the scope and domain context (Listing 5) resulted in the best performance. However, since this still resulted in a relatively low ROUGE score, human evaluation was performed for this final prompt.\n1 Within the scope of medical dialogue summarization , create a medical report from the consultation , follow the SOAP guidelines , and use the information found in the provided transcript as the sole source . 2 Consider that you are a general practitioner who writes the medical report . 3 Consider that the report is used for communication between doctors who use abbreviations and short sentences . 4 Consider that in the medical field , the division between left and right , and the medication dosage are important . 5 Use the following examples :\n[ example1 ], [ example2 ].\nListing 5: The best performing prompt.\nThe expert panel showed that all six GPs agreed on the fact that the duration of the complaints is relevant to mention within the report. For all the other categories there seems to be disagreement about the relevance. For example, there appears to be disagreement about the importance of recording specific pa- " }, { "figure_ref": [], "heading": "25", "publication_ref": [], "table_ref": [], "text": "In Subjective The patient reports ... especially in the morning, and that the ear smells." }, { "figure_ref": [], "heading": "In Objective", "publication_ref": [], "table_ref": [], "text": "Left: some earwax." }, { "figure_ref": [], "heading": "In Analysis", "publication_ref": [], "table_ref": [], "text": "This can also radiate from the sinuses." }, { "figure_ref": [], "heading": "In Plan", "publication_ref": [], "table_ref": [], "text": "A dressing and plaster have been applied to the left ear to collect the discharge." }, { "figure_ref": [], "heading": "Additional", "publication_ref": [], "table_ref": [], "text": "Colonoscopy scheduled for three years. Patient should contact for referral to a gastroenterologist. Prescription for [name of medication] for constipation." }, { "figure_ref": [ "fig_4", "fig_5", "fig_5", "fig_4" ], "heading": "2", "publication_ref": [ "b18", "b46" ], "table_ref": [ "tab_6" ], "text": "In an additional NB (Nota Bene) The occurrence is counted per consultation, so if the same error happened repeatedly in the reruns for the same consultation, it was only counted once. tient complaints. When mentioning that in particular, the left ear caused problems, the GPs disagree on the importance. Some indicate that this is relevant (n = 3), while there are also GPs that indicate that this is not relevant (n = 1) or they indicate that they are neutral about this (n = 2). However, one of the GPs who indicated that it is relevant did mention that they would note it more briefly. Another example that shows this disagreement is within the discussed treatment: \"gauze and plaster applied to the left ear to collect discharge\". Two GPs indicated that this was relevant, two indicated that this was irrelevant and two indicated that they were neutral about this.\nTable 5 shows the identified error statements in the five automated reports during the human evaluation. The human evaluation highlights several noteworthy findings regarding the quality of the automated reports. 
It is evident that the automated reports contain a notable number of redundant statements, 25 in total, with the majority occurring in the Plan section (n = 9) and the Subjective section (n = 7). Moreover, stylistic errors are prevalent, particularly classification errors (n = 14) and occasional repetitions (n = 3). In addition to adding extra (relevant or redundant) information, the automated report sometimes omits essential information (n = 19) when compared to the human reference. Factual errors are present as well, amounting to a total of 8 incorrect statements and 6 hallucinations. For a visual example of the error statements see Figures 3 and4.\nA possible reason for the omissions in the automated reports could be related to the GPT's limited understanding of the medical context, leading it to overlook certain critical details during the report generation process. This is supported by research from Johnson et al., 2023, who found that a potential limitation of GPT is handling complex medical queries, but they did not reach statistical significance for this statement. A potential reason for the classification errors is that the GPT lacks genuine comprehension of the distinct SOAP categories, thus negatively influencing its ability to accurately allocate information to the appropriate category within the SOAP report.\nThe human evaluation revealed substantial variations in the performance of the automated reports across different consultations, with some reports displaying higher performance levels than others. For example, the report that was generated based on transcript 2006 (Figure 4) had a lot of redundant information added to the report while the report based on transcript 2808 closely resembles the human references (Figure 3). This discrepancy in performance may be related to the difficulty that GPT encounters in differentiating between various medical conditions discussed during a single consultation, which may result in the creation of SOAP reports that include data pertaining to several medical conditions.\nOne noteworthy finding is that, even though the prompts make clear how important left-right orientation is, the automated reports frequently miss it. This can be explained by the findings from the research of Ye and Durrett, 2022; in their research, they have demonstrated that adding explanations or contextual cues alone does not necessarily ensure an improved result in the final output. This underlines the problem of ensuring that complicated, contextually relevant information is consistently included in the generated reports. This disparity highlights the ongoing diffi-culties in optimizing automated report production for medical contexts and argues the natural language processing system's flexibility and understanding when it comes to adding important facts." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "Even though machine learning is becoming more popular as a medical tool, little is known about these models' workings or how to appropriately evaluate them using different metrics. In this research, we investigated the combination of shot-prompting with pattern prompting to enhance the performance of automated medical reporting. The automated medical reports were generated with the use of prompt engineering software. The generated reports were evaluated against human reference provided by a GP. For this evaluation, the widespread ROUGE metric was used in combination with human evaluations. 
The results showed that adding examples to the prompt is beneficial for the automated medical report. It also showed that adding both scope context as well as domain context improved the performance of the automated medical report. This resulted in the overall best structure for a prompt using a base structure in combination with two shots and scope and domain context." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Despite these promising results, this study has validity threats that could have influenced the findings. Firstly, generative AI systems are stochastic which introduces variability as they produce different answers each time they are run, which may impact the reliability and repeatability of the results. Secondly, the findings have limited generalizability to other medical conditions because of the constrained data availability, with a small dataset exclusively on Otitis, and the variability in medical reporting across diverse domains. An additional concern is the missed opportunity to explore every combination of shots and contexts. However, the feasibility of this approach was constrained within the scope of this study. This influenced the study's depth of analysis and its capacity to provide nuanced insights. Lastly, the human evaluation has some limitations, even though medical professionals were consulted to gather domain expertise the human evaluation was still performed by non-medical professionals. This potentially introduced a perspective misalignment that could have influenced the interpretation and assessment of the generated medical reports." }, { "figure_ref": [], "heading": "Future Work", "publication_ref": [], "table_ref": [], "text": "This marks an initial investigation into optimizing prompt sequences with a fixed LLM. Nonetheless, we acknowledge that diverse LLMs may yield different outcomes. Additionally, future studies should explore the applicability of our findings in the setting of different medical conditions and broaden the scope of the study beyond Otitis. The prompt could be further improved to avoid redundant statements by defining the maximum length of the output, using an increasing number of shots, or using a different method of shots such as providing the consultation transcript in addition to the resulting medical report.\nFurthermore, future work should focus on finding a more suitable metric to evaluate the output. In the current research, the ROUGE metric was used for the evaluation of the automated medical report as well as human evaluation. ROUGE is commonly used within summarization tasks however it has some downsides, the metric is very black and white. It does not take into account the meaning of the words in the summarization but only the occurrence of specific words. For future work a different evaluation needs to be created, this metric needs to take into account the meaning of the automated medical report, and it needs to investigate if the essence of the automated medical report matches the golden standard. This new metric needs to take into account rewording and paraphrasing so that they are not automatically considered wrong. For optimal evaluation, the complete reports should be evaluated by GPs." }, { "figure_ref": [], "heading": "ACKNOWLEDGEMENTS", "publication_ref": [], "table_ref": [], "text": "We want to thank all the GPs and other medical professionals who aided us in our human evaluation. Special thanks go to Rob Vermond for assisting with the expert panel of the GPs. 
Their professional insights ensured that we could execute the human evaluation. We also would like to thank Kate Labunets for providing feedback on the paper. Finally, many thanks go to Bakkenist for the support of this research project." } ]
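As one possible direction for the meaning-aware metric called for in the Future Work section, a sentence-embedding similarity between the generated report and the GP reference could complement ROUGE. The sketch below is an assumption-laden illustration: it presumes the sentence-transformers package and a multilingual embedding model that covers Dutch; the model name is an example choice, not one evaluated in this study.

from sentence_transformers import SentenceTransformer, util

# Example multilingual model that covers Dutch; chosen purely for illustration.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def semantic_similarity(generated_report: str, human_reference: str) -> float:
    # Cosine similarity of sentence embeddings; tolerant of rewording, unlike n-gram overlap.
    embeddings = model.encode([generated_report, human_reference], convert_to_tensor=True)
    return util.cos_sim(embeddings[0], embeddings[1]).item()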
Customized medical prompts enable Large Language Models (LLMs) to effectively address medical dialogue summarization. The process of medical reporting is often time-consuming for healthcare professionals. Implementing medical dialogue summarization techniques presents a viable solution to alleviate this time constraint by generating automated medical reports. The effectiveness of LLMs in this process is significantly influenced by the formulation of the prompt, which plays a crucial role in determining the quality and relevance of the generated reports. In this research, we used a combination of two distinct prompting strategies, known as shot prompting and pattern prompting, to enhance the performance of automated medical reporting. The evaluation of the automated medical reports is carried out using the ROUGE score and a human evaluation with the help of an expert panel. The two-shot prompting approach in combination with scope and domain context outperforms other methods and achieves the highest score when compared to the human reference provided by a general practitioner. However, the automated reports are approximately twice as long as the human references, because both redundant and relevant statements are added to the report.
Enhancing Summarization Performance through Transformer-Based Prompt Engineering in Automated Medical Reporting
[ { "figure_caption": "Practitioner, P = Patient OMA = Otitis Media Externa, ENT = Ear, Nose, Throat", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: Research method visualization.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The flow of prompt formulation (translated to English, the original prompts are in Dutch).", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "context and domain context. The scope context explains in what scope the GPT operates and what its role is. The domain context gives more details about communication and important elements in the medical field.The following context statements are included: a. Within the scope of medical dialogue summarization;b. Consider that you are a general practitioner who writes the medical report during the consultation; c. Consider that the report is used for communication between doctors who use abbreviations and short sentences or keywords;d. Consider that in the medical field, the division between left and right, and the medication dosage are important.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Human evaluation of the automated medical report of transcript 2028 (translated to English, the generated reports are in Dutch).", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Human evaluation of the automated medical report of transcript 2006 (translated to English, the generated reports are in Dutch).", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Example of part of a consultation transcript and the corresponding SOAP report (translated to English, the original transcript and SOAP report are in Dutch). Since 1.5 weeks, ear pain and a feeling of deafness right ear, received antibiotics from the GP. Feeling sicker since yesterday, experiencing many side effects from the antibiotics. Using Rhinocort daily for hyperreactivity.Left ear operated for cholesteatoma, no complaints. Yes, the first three or four tablets were really like, whoa. And after that, it was just the same. So, I still have ear pain. 
And now I notice that my resistance is decreasing because of", "figure_data": "TranscriptSOAP reportGP: Good morning.", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Word count comparison between the generated report and the human reference.", "figure_data": "HumanGenerated DifferenceReferenceReportSubjective294718Objective11209Analysis484Plan143319Total1115853", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Mean and Standard Deviation (SD) for the ROUGE1 and ROUGEL-scores for zero-shot, one-shot, and two-shot prompting.", "figure_data": "ROUGE1ROUGELMean±SDMean±SDZero-shot0.121±0.007 0.079±0.006One-shot0.150±0.009 0.104±0.006Two-shot0.174±0.005 0.123±0.004", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Mean and Standard Deviation (SD) for the ROUGE1 and ROUGEL-scores for context prompts.", "figure_data": "ROUGE1ROUGELMean±SDMean±SDContext: ScopeContext a0.172±0.041 0.120±0.016Context b0.173±0.043 0.124±0.022Context a & b0.179±0.049 0.126±0.023Context: DomainContext c0.242±0.035 0.179±0.016Context d0.173±0.048 0.121±0.025Context c & d0.220±0.064 0.167±0.037Context: TotalContext a & b & c & d 0.250±0.049 0.189±0.025", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Error statements with occurrences in the five generated medical reports (translated to English, the generated reports are in Dutch). Redundant Statements The inclusion of unnecessary information that does not contribute substantively to the report, although it is on the topic of the medical condition.", "figure_data": "TypeDefinition -ExamplesOccurrenceFactual ErrorsAn error in the information presented that contradicts reality.14Hallucinations\"Pain originating from the syringing by the doctor's assistant\"6Pain was already present before the syringing.Incorrect statements\"The patient uses Rhinocort and cetirizine daily for mucous membrane hy-8perreactivity\"Patient only uses Rhinocort for mucous membrane hyperactivity.Stylistic ErrorsAn error in the manner in which information is used or presented.17Repetitions\"Patient feels sick\"3\"Patient also reports a feeling of being unwell\".Classification error\"The area around the ear feels numb.\"14in the Analysis part of SOAP.OmissionsAn error characterized by the act of neglecting to include essential informa-19tion in the report.In SubjectiveIndication of which ear is involved/ referred to3Parts of symptoms mentioned2Parts of relevant medical history5In ObjectiveIndication of which ear is involved/ referred to2Parts of symptoms observed2In AnalysisIndication of which ear is involved/ referred to3In PlanAgreement with patient1Possible future treatment1", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" } ]
Daphne Van Zandvoort; Laura Wiersema; Tom Huibers; Sandra Van Dulmen; Sjaak Brinkkemper
[ { "authors": " Anil", "journal": "", "ref_id": "b0", "title": "Prompt engineering -1-shot prompting", "year": "2023" }, { "authors": "Y Balagurunathan; R Mitchell; I El Naqa", "journal": "Physica Medica", "ref_id": "b1", "title": "Requirements and reliability of ai in the medical context", "year": "2021" }, { "authors": "D Banerjee; P Singh; A Avadhanam; S Srivastava", "journal": "", "ref_id": "b2", "title": "Benchmarking llm powered chatbots: Methods and metrics", "year": "2023" }, { "authors": "M Barbella; G Tortora", "journal": "", "ref_id": "b3", "title": "Rouge metric evaluation for text summarization techniques", "year": "2022" }, { "authors": "S J Bigelow", "journal": "", "ref_id": "b4", "title": "10 prompt engineering tips and best practices: Techtarget", "year": "2023" }, { "authors": "S Borcherding; M J Morreale", "journal": "", "ref_id": "b5", "title": "The ota's guide to writing soap notes", "year": "2007" }, { "authors": "T Brown; B Mann; N Ryder; M Subbiah; J D Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell", "journal": "Advances in neural information processing systems", "ref_id": "b6", "title": "Language models are fewshot learners", "year": "2020" }, { "authors": "P Coorevits; M Sundgren; G O Klein; A Bahr; B Claerhout; C Daniel; M Dugas; D Dupont; A Schmidt; P Singleton", "journal": "Journal of internal medicine", "ref_id": "b7", "title": "Electronic health records: New opportunities for clinical research", "year": "2013" }, { "authors": "Q Dong; L Li; D Dai; C Zheng; Z Wu; B Chang; X Sun; J Xu; Z Sui", "journal": "", "ref_id": "b8", "title": "A survey for in-context learning", "year": "2022" }, { "authors": "D Dragon", "journal": "", "ref_id": "b9", "title": "The right way to do few-shot prompting", "year": "2023" }, { "authors": "O Elassy; R De Vendt; F Dalpiaz; S Brinkkemper", "journal": "Development and Support", "ref_id": "b10", "title": "A semi-automated method for domain-specific ontology creation from medical guidelines", "year": "2022" }, { "authors": "F Falcão", "journal": "", "ref_id": "b11", "title": "Metrics for evaluating summarization of texts performed by transformers: How to evaluate the quality of summaries", "year": "2023-04" }, { "authors": "M W Friedberg; P G Chen; K R Van Busum; F Aunon; C Pham; J Caloyeras; S Mattke; E Pitchforth; D D Quigley; R H Brook", "journal": "Rand health quarterly", "ref_id": "b12", "title": "Factors affecting physician professional satisfaction and their implications for patient care, health systems, and health policy", "year": "2014" }, { "authors": "T F Heston", "journal": "", "ref_id": "b13", "title": "Prompt engineering for students of medicine and their teachers", "year": "2023" }, { "authors": "S A Hicks; I Strümke; V Thambawita; M Hammou; M A Riegler; P Halvorsen; S Parasa", "journal": "Scientific reports", "ref_id": "b14", "title": "On evaluation metrics for medical applications of artificial intelligence", "year": "2022" }, { "authors": "J Houwen; P L Lucassen; H W Stappers; W J Assendelft; S Van Dulmen; T C Olde Hartman", "journal": "British Journal of General Practice", "ref_id": "b15", "title": "Improving gp communication in consultations on medically unexplained symptoms: A qualitative interview study with patients in primary care", "year": "2017" }, { "authors": "R Jain; A Jangra; S Saha; A Jatowt", "journal": "", "ref_id": "b16", "title": "A survey on medical document summarization", "year": "2022" }, { "authors": "Z Ji; N Lee; R Frieske; T Yu; D Su; Y Xu; E Ishii; Y J Bang; A Madotto; P 
Fung", "journal": "ACM Computing Surveys", "ref_id": "b17", "title": "Survey of hallucination in natural language generation", "year": "2023" }, { "authors": "D Johnson; R Goodman; J Patrinely; C Stone; E Zimmerman; R Donald; S Chang; S Berkowitz; A Finn; E Jahangir", "journal": "", "ref_id": "b18", "title": "Assessing the accuracy and reliability of ai-generated medical responses: An evaluation of the chat-gpt model", "year": "2023" }, { "authors": "K Krishna; S Khosla; J P Bigham; Z C Lipton", "journal": "", "ref_id": "b19", "title": "Generating soap notes from doctor-patient conversations using modular summarization techniques", "year": "2021" }, { "authors": "C.-Y Lin", "journal": "Text Summarization Branches Out", "ref_id": "b20", "title": "ROUGE: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "P Liu; W Yuan; J Fu; Z Jiang; H Hayashi; G Neubig", "journal": "ACM Computing Surveys", "ref_id": "b21", "title": "Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing", "year": "2023" }, { "authors": "Y Liu; D Iter; Y Xu; S Wang; R Xu; C Zhu", "journal": "", "ref_id": "b22", "title": "Gpteval: Nlg evaluation using gpt-4 with better human alignment", "year": "2023" }, { "authors": "L Maas; M Geurtsen; F Nouwt; S Schouten; R Van De Water; S Van Dulmen; F Dalpiaz; K Van Deemter; S Brinkkemper", "journal": "HICSS", "ref_id": "b23", "title": "The care2report system: Automated medical reporting as an integrated solution to reduce administrative burden in healthcare", "year": "2020" }, { "authors": "Y Mathur; S Rangreji; R Kapoor; M Palavalli; A Bertsch; M Gormley", "journal": "", "ref_id": "b24", "title": "Summqa at mediqa-chat 2023: In-context learning with gpt-4 for medical summarization", "year": "2023" }, { "authors": "M C Meijers; J Noordman; P Spreeuwenberg; T C Olde Hartman; S Van Dulmen", "journal": "Family practice", "ref_id": "b25", "title": "Shared decision-making in general practice: An observational study comparing 2007 with 2015", "year": "2019" }, { "authors": "G Michalopoulos; K Williams; G Singh; T Lin", "journal": "", "ref_id": "b26", "title": "Medicalsum: A guided clinical abstractive summarization model for generating medical reports from patient-doctor conversations", "year": "2022" }, { "authors": "N Mickey", "journal": "", "ref_id": "b27", "title": "Explore the benefits of azure openai service with microsoft learn: Azure blog: Microsoft azure", "year": "2023" }, { "authors": " Azure", "journal": "", "ref_id": "b28", "title": "", "year": "" }, { "authors": "F Moramarco; A P Korfiatis; M Perera; D Juric; J Flann; E Reiter; A Savkov; A Belz", "journal": "", "ref_id": "b29", "title": "Human evaluation and correlation with automatic metrics in consultation note generation", "year": "2022" }, { "authors": "M F Mridha; A A Lima; K Nur; S C Das; M Hasan; M M Kabir", "journal": "IEEE Access", "ref_id": "b30", "title": "A survey of automatic text summarization: Progress, process and challenges", "year": "2021" }, { "authors": "V Nair; E Schumacher; A Kannan", "journal": "", "ref_id": "b31", "title": "Generating medically-accurate summaries of patient-provider dialogue: A multi-stage approach using large language models", "year": "2023" }, { "authors": "J M Overhage; D Mccallie", "journal": "Annals of internal medicine", "ref_id": "b32", "title": "Physician time spent using the electronic health record during outpatient encounters: A descriptive study", "year": "2020" }, { "authors": "A Radford; J Wu; R 
Child; D Luan; D Amodei; I Sutskever", "journal": "OpenAI blog", "ref_id": "b33", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "L Reynolds; K Mcdonell", "journal": "", "ref_id": "b34", "title": "Prompt programming for large language models: Beyond the few-shot paradigm", "year": "2021" }, { "authors": "R Robinson", "journal": "", "ref_id": "b35", "title": "How to write an effective gpt-3 or gpt-4 prompt", "year": "2023" }, { "authors": " Zapier", "journal": "", "ref_id": "b36", "title": "", "year": "" }, { "authors": "A Savkov; F Moramarco; A P Korfiatis; M Perera; A Belz; E Reiter", "journal": "", "ref_id": "b37", "title": "Consultation checklists: Standardising the human evaluation of medical note generation", "year": "2022" }, { "authors": "J D Schenker; P D Rumrill", "journal": "Journal of vocational rehabilitation", "ref_id": "b38", "title": "Causalcomparative research designs", "year": "2004" }, { "authors": "A Tam", "journal": "", "ref_id": "b39", "title": "What are zero-shot prompting and fewshot prompting", "year": "2023" }, { "authors": "R Tangsali; A J Vyawahare; A V Mandke; O R Litake; D D Kadam", "journal": "", "ref_id": "b40", "title": "Abstractive approaches to multidocument summarization of medical literature reviews", "year": "2022" }, { "authors": "M M Van Buchem; H Boosman; M P Bauer; I M Kant; S A Cammel; E W Steyerberg", "journal": "NPJ digital medicine", "ref_id": "b41", "title": "The digital scribe in clinical practice: A scoping review and research agenda", "year": "2021" }, { "authors": "J Wang; E Shi; S Yu; Z Wu; C Ma; H Dai; Q Yang; Y Kang; J Wu; H Hu", "journal": "", "ref_id": "b42", "title": "Prompt engineering for healthcare: Methodologies and applications", "year": "2023" }, { "authors": "J White; Q Fu; S Hays; M Sandborn; C Olea; H Gilbert; A Elnashar; J Spencer-Smith; D C Schmidt", "journal": "", "ref_id": "b43", "title": "A prompt pattern catalog to enhance prompt engineering with chatgpt", "year": "2023" }, { "authors": "A P Widyassari; A Affandy; E Noersasongko; A Z Fanani; A Syukur; R S Basuki", "journal": "", "ref_id": "b44", "title": "Literature review of automatic text summarization: Research trend, dataset and method", "year": "2019" }, { "authors": "D Yadav; J Desai; A K Yadav", "journal": "", "ref_id": "b45", "title": "Automatic text summarization methods: A comprehensive review", "year": "2022" }, { "authors": "X Ye; G Durrett", "journal": "Advances in neural information processing systems", "ref_id": "b46", "title": "The unreliability of explanations in few-shot prompting for textual reasoning", "year": "2022" }, { "authors": "Z Zhao; E Wallace; S Feng; D Klein; S Singh", "journal": "", "ref_id": "b47", "title": "Calibrate before use: Improving few-shot performance of language models", "year": "2021" } ]
[ { "formula_coordinates": [ 8, 326.45, 364.65, 187.97, 16.3 ], "formula_id": "formula_0", "formula_text": "[ example1 ], [ example2 ]." } ]
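Figure 2 and the context statements a-d quoted above describe how each prompt was assembled from scope and domain context, optional in-context examples (the "[ example1 ], [ example2 ]" pattern recorded in the formula entry above), and the new transcript. The sketch below is only a hypothetical illustration of that assembly step: the exact Dutch wording, ordering, and delimiters used in the study are not reproduced, and the helper name build_prompt and all strings are placeholders.

```python
# Hypothetical sketch of prompt assembly; the context statements paraphrase
# statements a-d above and are not the exact (Dutch) prompts used in the study.
CONTEXT_STATEMENTS = [
    "Within the scope of medical dialogue summarization;",
    "Consider that you are a general practitioner who writes the medical report during the consultation;",
    "Consider that the report is used for communication between doctors who use abbreviations and short sentences or keywords;",
    "Consider that in the medical field, the division between left and right, and the medication dosage are important.",
]

def build_prompt(transcript: str, examples: list[tuple[str, str]], instruction: str) -> str:
    """Assemble context statements, k in-context examples, and the new transcript."""
    parts = list(CONTEXT_STATEMENTS) + [instruction]
    for ex_transcript, ex_report in examples:          # empty list -> zero-shot
        parts.append(f"Transcript:\n{ex_transcript}\n\nSOAP report:\n{ex_report}")
    parts.append(f"Transcript:\n{transcript}\n\nSOAP report:")
    return "\n\n".join(parts)

# Two-shot usage with placeholder examples.
prompt = build_prompt(
    transcript="GP: Good morning. ...",
    examples=[("example transcript 1", "example SOAP report 1"),
              ("example transcript 2", "example SOAP report 2")],
    instruction="Summarize the consultation transcript into a SOAP report "
                "(Subjective, Objective, Analysis, Plan).",
)
print(prompt)
```

Passing an empty example list yields the zero-shot variant; one or two (transcript, report) pairs give the one- and two-shot prompts compared in Table 3.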
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b8", "b3", "b6", "b27", "b28", "b1", "b9", "b24", "b13", "b18" ], "table_ref": [], "text": "A personalized recommendation system suggests individual item sets to users based on their preferences, past interactions, and demographics. For instance, on a content platform, it can recommend movies or music tailored to the user's habits, significantly enhancing their satisfaction with the service. Matrix factorization (MF) [9] is a successful model-based approach for personalized recommender systems. The main idea is to decompose the matrix representing the observed interactions between users and items into two types of matrices: user-latent and item-latent matrices. The interaction between a user and item is then predicted by computing the inner product of the obtained user and item latent vectors. Recognized for its simplicity and power, MF is used in a variety of applications [4] and has been applied to a wide range of deep-learning-based meth-ods to improve accuracy [7,28]. On the other hand, understanding the meaning of each dimension of the learned embeddings is usually challenging, hindering its practicality in a recommender system, even though they preserve valuable information for prediction. Addressing the interpretability or explainability of the model is crucial to improve indicators beyond recommendation accuracy, including factors such as user satisfaction [29].\nThe shortest way to provide an interpretation of the recommendation model is to generate some reasons by auxiliary use of side information about users and items, such as attribute information and review text, but this content information is usually difficult to obtain, making this a somewhat impractical setting. Therefore, in limited settings where only user-item interaction matrices are available, interpretation is provided indirectly by presenting users and items that are similar to the recommendation results on the matrix. As an approach, matrix factorization methods with simultaneous and explicit clustering have been proposed to summarize the users and items in the system, taking into account unobserved interactions. However, they work on specific prediction tasks (e.g., rating predic-arXiv:2311.13277v2 [cs.IR] 21 Feb 2024 tion [2,10,25] and ranking prediction [14]) and on matrix factorization algorithms designed specifically for them (e.g., nonnegative MF). Therefore, these methods are not directly applicable to newer MF applications, and opportunities for interpretation are lost.\nBased on the above insights, we propose a hierarchical matrix factorization (HMF) method that can simultaneously perform prediction and clustering in a single model. HMF is designed to stably predict unobserved interactions while extracting the hierarchical relationships between users and items in order to provide an abstract interpretation such as \"this group of users strongly prefers this group of items.\" To this end, we further decompose the traditional latent matrix into (a) probabilistic connection matrices representing the hierarchical relationships between objects (i.e., users and items in this study) and clusters, and (b) a latent matrix of root clusters. Inspired by the motivation behind fuzzy clustering [19], each object and cluster is represented by a weighted average of the abstract embeddings of its parent clusters. 
This simple formulation, called hierarchical embedding, makes the loss function designed for HMF differentiable as well as conventional MFs and easily extendable to an advanced MF method based on gradient descent. To evaluate the effectiveness of the proposed method, we conducted two experiments based on rating prediction and ranking prediction. A comprehensive comparison with vanilla MF and existing hierarchical MF methods verified whether HMF can improve the recommendation accuracy and also learn interactions stably. In addition, we conducted case studies on a real movie rating dataset and observed the interpretations produced by HMF.\nIn summary, our main contributions are as follows:\n1. We propose an end-to-end matrix factorization method HMF that simultaneously detects the user and item hierarchies. 2. We introduce the hierarchical embeddings, which are general and differentiable, into MF, allowing HMF to be optimized with a single gradient descent method. 3. We provide a summary of the cluster-level interactions from the obtained hierarchical embeddings, resulting in the interpretability of HMF." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "MF and Its Extensions", "publication_ref": [ "b8", "b19", "b2", "b7", "b17", "b21", "b20", "b21", "b6", "b27", "b6" ], "table_ref": [], "text": "MF is often explained in the context of an explicit feedback task, in which users assign a rating to certain items, such as five stars. Here, we assume M users and N items are available for the recommendation task. Subsequently, we can construct the rating matrix X ∈ R M×N from the historical interactions, where each element X i j denotes the rating of item j provided by user i. The aim is to model each feedback interaction between user i and item j using the inner product of the d-dimensional user and item latent vectors U i , V j ∈ R d ; that is, X i j ≈ U T i V j for all users and items in the task. In general, each \"factor\" in a user's latent vector represents a preference of the user, such as liking a certain film genre. Similarly, each factor in an item's latent vector represents its characteristics. Thus, computing the inner product allows us to express that a strong match between a user's preferences and an item's characteristics will result in a strong interaction (i.e., a high score) between the user and item.\nNot all interactions between users and items can be observed in a real-world setting. The simplest way to deal with this problem is to treat the unobservables as zeros; however, because the resulting rating matrix is usually very sparse, this leads to an overfitted prediction. Therefore, the latent vectors are learned from the observed interactions using the objective function:\nmin U,V (i, j)∈Ω X i j -U T i V j 2 + λ Θ ||Θ|| 2 2 (1)\nwhere Ω denotes the set of observed interactions, λ Θ is the regularization parameter, and Θ represents the set of regularized parameters. Stochastic gradient descent (SGD) [9] and alternating least squares (ALS) [20] are primarily used as the optimizers. Then, in the context of the recommender system, the resulting latent matrices are generally used to predict and recommend items that are likely to be highly valued by the user. There are also many collaborative filtering settings for implicit feedback, such as \"clicks,\" and \"purchases.\" It is usually inappropriate to use Eq. 
( 1) as a binary classification of whether or not there was an interaction, because it overfits for unobserved interactions. Therefore, pointwise loss [3,8,18], pairwise loss [22], and softmax loss [21] have been proposed instead. In particular, bayesian personalized ranking (BPR) [22] is a well-known method owing to its simplicity. The underlying idea is the probability that an observed interaction (i, j) ranks higher than an unobserved interaction (i, k). Therefore, the objective function is formulated as follows:\nmin U,V - (i, j)∈Ω,(i,k) Ω ln σ(U T i V j -U T i V k ) + λ Θ ||Θ|| 2 2 (2\n)\nwhere σ is a sigmoid function σ(x) = (1 + e -x ) -1 . The objective function is generally optimized using SGD and negative sampling (i.e., the sampling of unobserved interactions).\nAlthough MF alone is a powerful tool, several studies have attempted to further improve recommendation accuracy by introducing a neural network architecture [7,28]. The common motivation of them is to overcome the linear model of MF and capture more complex user-item relationships by introducing nonlinearity and allowing representation learning. Neural matrix factorization (NeuMF) [7] combines a generalization of MF with a neural architecture and a multilayer perceptron that captures the nonlinear relationship between users and items. As these methods can be optimized with SGD-based schemes, the applicability of SGD can easily extend the MF architecture." }, { "figure_ref": [], "heading": "MF with Clustering", "publication_ref": [ "b11", "b15", "b25", "b23", "b24", "b9", "b1", "b13" ], "table_ref": [], "text": "A pioneering approach is to use the hierarchical structure of objects obtained a priori to resolve sparsity in the factorization [12,16], but in practice such hierarchical information is difficult to obtain. In addition, Hidden Group Matrix Factorization (HGMF) [26] detects groups of objects in the factorized matrix, but the clustering and decomposition processes are independent, and the classification information obtained during decomposition may be overlooked. Therefore, such collaborative filtering settings are beyond the scope of our study.\nInstead, we focus on frameworks that simultaneously perform matrix factorization and clustering, which is related to several existing studies. Capturing implicit hierarchical structures for recommender systems (known as IHSR) [24,25] was the first approach to further decompose the latent matrices into some non-negative matrices by applying a non-negative MF scheme. Hidden Hierarchical MF [10] also tackles hierarchical clustering and rating prediction simultaneously and consists of a bottom-up phase for learning the hidden hierarchical structure and a top-down phase for prediction using a quasi-Newton method. Learning tree-structured embeddings (known as eTREE) [2] introduce a new regularization term, which represents the difference between the embedding of each child item and its parent item in a hierarchical structure, and also was optimized in the non-negative MF setup and used for rating prediction. In addition, the prototype-based MF method should also be mentioned, although it does not explicitly address clustering. ProtoMF [14] defines a vector that measures the similarity between the embeddings of users and items and the embeddings of prototypes (similar to clusters) to model implicit feedback. Overall, these methods are limited to specific tasks (i.e., explicit or implicit feedback) and optimization schemes. 
It is questionable whether they can be directly applied to MF-based deeplearning methods, which have been increasingly developed in the context of representation learning in recent decades." }, { "figure_ref": [], "heading": "Interpretable Recommendation", "publication_ref": [ "b28", "b16", "b26", "b12", "b0", "b4", "b13" ], "table_ref": [], "text": "It is essential to provide users and system vendors with the reasons for recommendations to improve non-accuracy metrics, including the transparency, persuasiveness, and reliability of the recommendations [29]. These techniques (not limited to recommender systems) have been discussed in terms of interpretation and explanation [17]. Xian et al. [27] proposed an attributeaware recommendation that provides attributes corresponding to recommendations, and McAuley et al. [13] attempted to explain latent dimensions in ratings by exploiting hidden topics in the review data. Returning to the discussion of pure collaborative filtering tasks, there are limited situations where information about users and items, including review text, is available. In other words, we must provide a useful interpretation using only the available user-item interactions and model structure. A traditional approach to interpretability in MF is to present a neighborhood-style reason, such as \"your neighbor users rate this item highly,\" which contains the idea of a memory-based approach [1,5]. However, because this type of \"reason\" depends on a per-user or per-item basis, it can be difficult to summarize how the model learns the entire dataset as the number of users and items increases. ProtoMF [14] models user and item prototypes, enabling interpretation of relationships between user prototypes and users, and between item prototypes and items. However, it does not facilitate higher-level interpretations such as understanding the relationships between user and item prototypes. Instead, by assuming hierarchical clusters of users and items on on the same latent space, our goal is to achieve multiple levels of abstraction in interpretation such as \"What group of items does a group of users prefer?\" " }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Representation of Users and Items", "publication_ref": [ "b23", "b24" ], "table_ref": [], "text": "This study defines a hierarchical structure with leaf nodes as users/items and other nodes as clusters that abstract them. A hierarchical structure with a depth of one is a simple clustering setup, whereas a deeper structure captures more complex structures, i.e. clusters of clusters. With respect to items, it is intuitive to assume a deep hierarchical structure, given that on an e-commerce site, items can be categorized into main categories (e.g., household goods), subcategories (e.g., detergents), and so on. Here, we assume that each node (i.e., users, items, or non-root clusters) is characterized by its parent clusters; in other words, represented by a more abstract combination of preferences or characteristics. Consequently, our focus is not on learning the individual user-and item-specific embeddings, but on learning the root cluster embeddings along with their connections (weights) to parent-child nodes, as shown in Fig. 1 (a). In this study, we refer to the user and item embeddings generated by this scheme as hierarchical embeddings. 
This approach differs from the vanilla MF method, which focuses on learning embeddings that are directly linked to individual objects, as shown in Fig. 1 (b).\nFor simplicity, we first consider a user hierarchy of depth one (i.e., a non-hierarchical clustering setting). Let m 1 be the number of user clusters; then, we decompose the user latent matrix U (0) ∈ R M×d into the connection matrix Ũ(1) ∈ R M×m 1 from users to clusters and the m 1 root cluster latent matrix U (1) ∈ R m 1 ×d ; that is, U (0) = Ũ(1) U (1) . We use the constraint s Ũ( 1) is = 1 for each user i, such that the connection matrix represents the probability that users are associated with clusters. Consequently, each user's embedding (i.e., each row of U (0) ) is represented by a weighted average of the cluster embeddings. This is clearly different from IHSR [24,25], which imposes that all elements in its matrices are non-negative real numbers.\nIn the hierarchical setting, more coarse clusters must be generated from the clusters. Here, we consider a user hierarchy of depth p; that is, we iteratively cluster users or the last generated user clusters p times. Let the level-specific number of user clusters be {m 1 , m 2 , . . . , m p } and m 0 = M. This means that there are m 0 users at level zero and m l user clusters at level l. Then, U (0) can be reformulated using recursive decomposition:\nU (0) = Ũ(1) U (1) = Ũ(1) Ũ(2) U (2) = Ũ(1) Ũ(2) • • • Ũ(p) U (p) (3)\nwhere Ũ(l) ∈ R m l-1 ×m l represents the connection matrix from user clusters (or user objects) at level (l -1) to user clusters at level l, such that s Ũ(l) is = 1, ∀i, and U (l) ∈ R m l ×d represents the embeddings of user clusters at level l.\nThe item latent matrix can be formulated similarly. Considering an item hierarchy with depth q, let {n 1 , n 2 , . . . , n q } be the level-specific number of item clusters and n 0 = N. The item latent matrix V (0) ∈ R N×d is expressed recursively as follows:\nV (0) = Ṽ(1) V (1) = Ṽ(1) Ṽ(2) • • • Ṽ(q) V (q) (4)\nwhere Ṽ(l) ∈ R n l-1 ×n l represents the connection matrix from items/item-clusters at level l -1 to item clusters at level l and V (l) ∈ R n l ×d represents the embeddings of item clusters at level l. Similarly, the connection matrices Ṽ(l) satisfy t Ṽ(l) jt = 1, ∀ j. Note that these hierarchical embeddings are clearly differentiable with respect to their connection matrices and the latent factors of the top-level clusters. Thus, the embedding part of the MF, which is differentiable with respect to the latent factors, can be easily replaced by a hierarchical embedding, and SGD can be used for optimization without modification." }, { "figure_ref": [], "heading": "Loss Function", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Rating Prediction", "publication_ref": [], "table_ref": [], "text": "As in the vanilla MF, the ratings can be modeled by user and item latent vectors using hierarchical embeddings. Briefly, we only need to replace the latent matrices in Eq. ( 1) with the hierarchical embeddings. However, the constraint of normalizing the connection matrices poses a challenge that makes it difficult to seamlessly apply a typical gradient-based optimizer. To overcome this constraint, we instead apply a row-wise softmax function to the connection matrices. 
Finally, we introduce HMF for rating prediction and the corresponding objective function is min\nŨ(1) ,••• , Ũ(p) ,U (p) , Ṽ(1) ,••• , Ṽ(q) ,V (q) (i, j)∈Ω X i j -U (0)T i V (0) j 2 + λ Θ ||Θ|| 2 2 (5\n)\nwhere \nU (0) = Ũ(1) Ũ(2) • • • Ũ(p) U (p) , V (0) = Ṽ(1) Ṽ(2) • • • Ṽ(q) V (q) (6) Ũ(l) = softmax( Ũ(l) ), ∀l ∈ {1, . . . , p}(7)\nṼ(l) = softmax( Ṽ(l) ), ∀l ∈ {1, . . . , q}.(8)" }, { "figure_ref": [], "heading": "Ranking Prediction", "publication_ref": [], "table_ref": [], "text": "HMF can be easily applied to ranking prediction algorithms that are optimized based on the gradient descent algorithm. Here, we consider BPR-HMF, which is an extension of BPR to the HMF scheme. The objective function is derived by replacing the MF terms in the BPR objective function (2), as follows:\nmin Ũ(1) ,••• , Ũ(p) ,U (p) , Ṽ(1) ,••• , Ṽ(q) ,V (q) - (i, j)∈Ω, (i,k) Ω ln σ(U (0)T i V (0) j -U (0)T i V (0) k ) + λ Θ ||Θ|| 2 2 (9\n)\nwhere U (0) and V (0) follow Eq. ( 6)-(8)." }, { "figure_ref": [], "heading": "Interpretation", "publication_ref": [], "table_ref": [], "text": "The background of HMF interpretability can be illustrated by an interaction U (0)T i V (0) j between a user i and an item j. From Eq. ( 6) and ( 7), the interaction can be transformed as\nU (0)T i V (0) j = ( Ũ(1)T i Ũ(1) ) T Ṽ(1)T j Ṽ(1)(10)\n= s t Ũ(1) is Ṽ(1) jt Ũ(1)T s Ṽ(1) t .(11)\nSignificantly, the interactions between users and items are modeled by the inner product of the embeddings for the user and item clusters. In addition, the selection of which cluster to use for prediction is determined by a coefficient ( Ũ(1) is Ṽ(1) jt ) that indicates the strength of the relationship between the cluster and the entity (i.e., user or item). In this way, since the relationship between clusters and entities is consistent with modeling interactions, it is natural that clusters provide HMF interpretability." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b5", "b22", "b14" ], "table_ref": [ "tab_0" ], "text": "For the evaluation, we performed rating and ranking prediction tasks utilizing four datasets: ML-100K, ML-1M, Ciao, and DIGINETICA. ML-100K and ML-1M [6] are datasets from the movie domain that contain a total of 100,000 and 1,000,209 five-star ratings given to movies by users, respectively. As a product review dataset, we also used the released part 1 of a dataset called Ciao [23], which contains 36,065 ratings given to products by users. DIGINETICA2 which contains user sessions in the e-commerce website, is used for the ranking prediction task. We used only item view data (train-item-views.csv) from January 1 to June 1 for this experiment. Some sessions were missing user IDs or had multiple user IDs. Therefore, the last user ID in each session was used as the user ID for that session, and sessions with no IDs were deleted. Finally, the view between user and item was treated as implicit feedback.\nEach dataset was divided into three subsets: training, validation, and testing. Temporal global split [15] was used as the splitting strategy for evaluation in a realistic prediction task. For ML-100K, ML-1M, and Ciao, all the interactions were sorted by timestamp and split into training (80%) and testing (20%) subsets. Furthermore, the last 20% of the training subset was used for validation. Note that, unlike previous studies, users and items with a few ratings were not deleted. 
In DIGI-NETICA, interactions were divided by timestamp: January to March into the training subset, April into the validation subset, and May and June 1 into the testing subset. Users with less than five views in the training subset were removed from all subsets. None of the methods, including the baselines, considered cold-start users or items. Therefore, users and items that were not included in the training subset were removed from the validation and test evaluations. Table 1 lists the statistics of the dataset used after these splits and filters." }, { "figure_ref": [], "heading": "Baseline Methods", "publication_ref": [ "b8", "b23", "b24", "b1", "b21", "b6", "b13", "b10" ], "table_ref": [ "tab_1" ], "text": "The proposed methods were compared with several vanilla and state-of-the-art hierarchical MF methods to demonstrate their effectiveness. In rating prediction (i.e., explicit feedback) tasks, we used three baselines: MF [9], IHSR [24,25], and eTREE [2] for the proposed HMF. In ranking prediction (i.e., implicit feedback) tasks, we compared BPR-HMF with three baselines: BPR-MF [22], NeuMF [7], ProtoMF [14]. All methods had certain hyperparameters. The number of embedding dimensions for all methods was fixed at 20, and the other parameters were tuned in the validation set using a grid search. Table 2 lists the details of the grid search range setting for each method. For the sake of fairness, all the methods were compared with a depth one, except for eTREE, for which the hyperparameter settings are publicly available. For the gradient descent methods, AdamW [11] was used as the optimizer and weight decay was used instead of the model parameter regularization term." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "For the rating prediction tasks, we applied RMSE to calculate differences between true rating X i j and predicted rating Xi j for observed interaction set in each subset, which is the standard for evaluating regression tasks. To evaluate the ranking prediction task, we prepared 100 item candidates, of which one was a positive-interacting item and 99 were negative-interacting items. That is, for a given observed interaction (user i views item j) in the evaluation subset, we randomly sampled 99 items that had not been viewed by user i (i.e., negative sampling). The 100 items were then ranked by a trained model and scored using HitRatio@10 and MRR@10. To reduce the experimental duration, the training was terminated if the learning results did not improve for five consecutive epochs (or iterations) on the validation set. We trained all the methods with five different seeds, and for each method, the hyperparameter set with the highest average results based on the RMSE and HitRatio of the validation subset was selected. The average of the five evaluations of the testing subset was then reported." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Accuracy Comparison", "publication_ref": [], "table_ref": [], "text": "Table 3 presents the accuracy results of the proposed and the baseline methods for the testing subsets. In addition, the averages of the training and inference run times in the test are given for reference. On the small dataset ML-100K, the hierarchical methods IHSR, eTREE, and HMF showed superior accuracy, with the RMSE of HMF being 0.014 points lower than that of the second best, IHSR. 
For the relatively large dataset ML-1M, HMF had the best accuracy among the hierarchical methods, whereas vanilla MF had the best accuracy overall. This slight degradation may be due to the insufficient tuning of the hyperparameter set for ML-1M, which has a large number of users and items. Specifying larger user or item clusters may have allowed HMF to outperform or match vanilla MF. For Ciao, a sparse dataset compared to the others, HMF showed a dramatic improvement over MF, which did not converge well. This suggests that a strong assumption of representative points can facilitate solution searching and provide robustness for sparse datasets. For ranking prediction and DIGINETICA, the proposed BPR-HMF method was the best in terms of HitRatio, but not MRR. This is because HMF assumes groups of items, which may make it difficult to push the rank of a particular item higher within a group, even though it may be possible to push the rank of the group to the top. The proposed methods (namely, HMF and BPR-HMF) on GPU took 0.04 to 1.70 times the training time and 1.38 to 1.56 times the inference time compared to the classical methods (namely, MF and BPR-MF). This indicates that incorporating hierarchical embeddings does not significantly increase computational time. Although the training Table 3. Evaluation results for the proposed and baseline methods. The sign † indicates a significant difference over the proposed method (i.e., HMF or BPR-HMF) using Tukey's HSD test. All the runtimes were measured on the same machine (CPU: Intel Xeon Gold 6132, GPU: GeForce RTX 3080 Ti), with IHSR and eTREE implemented using NumPy and the other methods using JAX.\n( times of the proposed methods on CPU were up to 138 times longer than those of the baseline methods, using a GPU can reduce them to a more practical execution time." }, { "figure_ref": [ "fig_1" ], "heading": "Loss Convergence", "publication_ref": [], "table_ref": [], "text": "To confirm the convergence of the HMF losses, which assumes clustering, we visualized the evolution of the losses for different hyperparameter settings. We selected the best hyperparameter setting in the validation subset for each weight decay (which corresponded to the strength of the parameter regularization) and tracked the changes in its RMSE. Fig. 2 shows the change in the RMSE of the validation subset per epoch for MF and HMF in the rating prediction tasks. In ML-100K, MF tended to slightly overfit, even with strong regularization; however, HMF tended to converge near to one regardless of regularization. This may be because of the clusters assumed by HMF, which may have had a regularization effect. In other words, because each user or item is represented by a weighted average of the clusters, it is less likely to be assigned an inappropriate position in the latent space. In ML-1M, MF required many epochs to converge, whereas HMF required approximately five epochs. Therefore, even when using the same SGD-based optimization method, that is, AdamW, the architecture of the model can cause convergence. For Ciao, which is considered difficult to learn owing to its high sparsity, MF did not converge well in any setting, whereas HMF converged to an RMSE of less than one in all setting. This suggests that hierarchical embedding in HMF not only speeds up convergence but also increases the probability of convergence." 
}, { "figure_ref": [], "heading": "Case Study", "publication_ref": [ "b0" ], "table_ref": [ "tab_3", "tab_3", "tab_4" ], "text": "In HMF, all users, items, and their clusters are projected onto the same latent space, allowing identifying whether user representatives (clusters) think highly about item categories (clusters) has significant benefits for enhancing service quality. Herein, we present a case study of ML-1M for a movie service. Table 4 shows the inner products of the zeroth and first user clusters and all the item clusters, as well as the inner products of the 374th and 216th item clusters and all the user clusters. The inner product of the i-th user cluster and the j-th item cluster was computed as U (1)T i V (1) j . The user/item cluster size was calculated based on the weights of the connection matrix with a user/item of 1; that is, the size of the j-th user cluster ID is i Ũ (1) i j . In addition, item cluster titles (user cluster genders) characterized the item (user) cluster and were the titles (genders) of items (users) in the neighborhood of the item (user) cluster in the latent space. Table 4 (a) shows that the zeroth user cluster strongly prefers horror movies such as \"Bride of the Monster\" and \"Braindead,\" indicating that it captures similar movies through explicit feedback. From 4 (c) and (d), it is possible that men prefer the 374th item cluster while the opposite trend was observed for the 216 th item cluster. In addition to the relationship between user and item clusters, it is also interesting to observe the relationship between clusters and their entities, as shown in table 5. The connection matrix allows us to observe entities that strongly belong to a cluster, and it seems that users or items with close attributes were considered as the same cluster. Other relationships, such as between users and item clusters, can also be observed, and by observing the various relationships between users, items, and clusters, HMF has interpretability that provides insight by summarizing the learning results." }, { "figure_ref": [], "heading": "Influence of Hierarchy Depth", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "So far, we have studied HMF with a hierarchical depth of one. However, it is not clear how the depth affects the recommendation accuracy of HMF. Therefore, this section compares the recommendation accuracy of the depth-one HMF with that of the deeper HMF. Based on the number of user (item) clusters m 1 (n 1 ) selected in the depth-one HMF, we constructed a hierarchy in which the number of user (item) clusters at level l is defined as m l = m 1 /2 l-1 (n l = n 1 /2 l-1 ), and we report the results of retuning hyperparameters without the number of clusters. Table 6 presents the accuracy results of the proposed methods with depth p = q = 1, 2, 3, 4 for the testing subsets. Except for ML-1M, where the proposed method's recommendation accuracy was inferior to that of the baseline mentioned in Section 5.1, no significant change in the recommendation accuracy of the proposed method is observed among different depths p, q. Therefore, the reason why the proposed method outperforms the traditional methods in Table 3 is not to increase the depth p, q of the hierarchy, but to capture clusters. The advantage of hierarchical embedding that we expect is interpretability, which can provide multiple granularities of interpretation results, rather than improved accuracy. On the other hand, ML-1M tends to be less accurate with depth. 
We have not yet identified the cause of this trend, but a detailed analysis of it may reveal why the proposed method falls short of the baseline on ML-1M." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we proposed HMF, which captures the hierarchical relationships between users and items for interpretable recommender systems. Each user, item, or cluster embedding is assumed to be the weighted average of the embeddings of its parent clusters in the hierarchical structure. This simple formulation allowed us to tackle both MF and clustering with a single gradient method, and also makes it applicable to recently developed gradient-based MF methods. The experimental results on real datasets showed that our methods equaled or outperformed existing hierarchical and vanilla MF methods, demonstrating competitiveness and robustness, particularly on sparse interactions. By characterizing user and item clusters, we presented the relationships between clusters and gave an example of how to interpret what HMF learns about user-item interactions. Nevertheless, the study has limitations that should be addressed in future work. The most important one is that hierarchical embeddings may not be good at fine-grained ranking; we therefore need to control the strength of the clustering and be careful about the expressiveness of the embeddings." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work was supported by JSPS KAKENHI, Grant Numbers JP21H03553 and JP22H03698." } ]
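As a minimal illustration of the hierarchical embeddings in Eqs. (3)-(8), the depth-one NumPy sketch below builds U^(0) and V^(0) from row-softmaxed connection matrices and root-cluster embeddings and then predicts ratings by inner products. It is a toy sketch with arbitrary sizes, not the authors' JAX implementation, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, d = 6, 8, 4     # users, items, embedding size (arbitrary toy values)
m1, n1 = 3, 2         # user and item clusters at level one (arbitrary)

def row_softmax(A: np.ndarray) -> np.ndarray:
    """Row-wise softmax so that each row sums to one (cf. Eqs. (7)-(8))."""
    Z = np.exp(A - A.max(axis=1, keepdims=True))
    return Z / Z.sum(axis=1, keepdims=True)

# Trainable parameters: unconstrained connection logits and root-cluster embeddings.
U_conn_logits = rng.normal(size=(M, m1))
V_conn_logits = rng.normal(size=(N, n1))
U_root = rng.normal(size=(m1, d))   # user-cluster embeddings U^(1)
V_root = rng.normal(size=(n1, d))   # item-cluster embeddings V^(1)

# Depth-one hierarchical embeddings (cf. Eq. (6)): each user/item embedding is a
# probability-weighted average of the root-cluster embeddings.
U_conn = row_softmax(U_conn_logits)      # M x m1, rows sum to one
V_conn = row_softmax(V_conn_logits)      # N x n1, rows sum to one
U0 = U_conn @ U_root                     # M x d
V0 = V_conn @ V_root                     # N x d

# Predicted rating of item j by user i is the inner product U0[i] . V0[j].
X_hat = U0 @ V0.T                        # M x N

# Squared-error loss over a few toy observed (user, item, rating) triples,
# omitting the regularization term of Eq. (5).
observed = [(0, 1, 4.0), (2, 5, 3.0), (4, 0, 5.0)]
loss = sum((x - X_hat[i, j]) ** 2 for i, j, x in observed)
print(X_hat.shape, float(loss))
```

Deeper hierarchies simply chain additional softmax-normalized connection matrices before the root-cluster matrix, and because every step is differentiable, the squared-error loss of Eq. (5) or the BPR-style loss of Eq. (9) can be minimized with a standard SGD-type optimizer such as AdamW.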
Matrix factorization (MF) is a simple collaborative filtering technique that achieves superior recommendation accuracy by decomposing the user-item interaction matrix into user and item latent matrices. Because the model typically learns each interaction independently, it may overlook the underlying shared dependencies between users and items, resulting in less stable and interpretable recommendations. Based on these insights, we propose "Hierarchical Matrix Factorization" (HMF), which incorporates clustering concepts to capture the hierarchy, where leaf nodes and other nodes correspond to users/items and clusters, respectively. Central to our approach, called hierarchical embeddings, is the additional decomposition of the latent matrices (embeddings) into probabilistic connection matrices, which link the hierarchy, and a root cluster latent matrix. The embeddings are differentiable, allowing simultaneous learning of interactions and clustering using a single gradient descent method. Furthermore, the obtained cluster-specific interactions naturally summarize user-item interactions and provide interpretability. Experimental results on ratings and ranking predictions show that HMF outperforms existing MF methods, in particular achieving a 1.37 point improvement in RMSE for sparse interactions. Additionally, it was confirmed that the clustering integration of HMF has the potential for faster learning convergence and mitigation of overfitting compared to MF, and also provides interpretability through a cluster-centered case study.
Hierarchical Matrix Factorization for Interpretable Collaborative Filtering
[ { "figure_caption": "Fig. 1 .1Fig. 1. The differences between HMF and MF model architectures illustrated by an example of rating prediction for User 2 and Item 0. Gray highlights indicate the model parameters trained on a dataset. MF has the latent vectors for each user and item, whereas HMF has latent variables in the root clusters and the connection parameters between levels.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig.2. MF and HMF losses (shown in RMSE) per epoch on the validation subsets. The best hyperparameter setting was selected for each weight decay, showing the change in its loss. Note that if the validation loss increased for five consecutive epochs, the optimization was terminated.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Dataset statistics after filtering.", "figure_data": "User / Item #Interaction # (training / validation / test)DensityML-100K ML-1M Ciao625 / 1,561 4,463 / 3,594 1,786 / 9,00464,000 / 2,775 / 1,932 640,133 / 33,344 / 68,816 23,081 / 709 / 2110.0704 0.0463 0.0015DIGINETICA24,933 / 54,859180,894 / 2,324 / 1,3980.0004", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Hyperparameter settings for the models in this study.", "figure_data": "ModelHyperparameterRangeAllEmbed. size d20MF, HMF,Weight decay10 -2 , 10 -3 , 10 -4 , 10 -5 , 0BPR-MF, NeuMF,Learning rate10 -2 , 10 -3 , 10 -4ProtoMF, BPR-HMFBatch size1024MF, HMF# of epoch512BPR-MF, NeuMF,# of epoch128ProtoMF, BPR-HMFIHSR, HMF# of user clusters200, 400, 600, 800, 1000# of item clusters100, 200, 300, 400, 500ProtoMF, BPR-HMF# of user clusters⌊24933/512⌋, ⌊24933/128⌋,⌊24933/32⌋# of item clusters⌊54859/512⌋, ⌊54859/128⌋,⌊54859/32⌋IHSRReg. param. λ0, 10 -4 , 10 -3 , 10 -2 , 10 -1 , 1, 10Max. # of iter.64NeuMFLayer sizes{40, 20, 10}eTREE# of item clusters{10}, {25, 5}, {50, 10, 3}Reg. param. λ, µ10 -3 , 0.5, 1, 5, 10, 15, 20Reg. param. η1000ProtoMFTuning param. λ", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "/ 88.5 1.188 / 0.038 BPR-HMF 0.323 ± 0.004 0.126 ± 0.003 1674.5 / 70.2 0.162 / 0.022", "figure_data": "a) Rating predictionDatasetMethodAccuracy ± std. Time on CPU / GPU [sec.]RMSETrainInferenceMF1.113 ± 0.013 †6.3 / 6.3 0.006 / 0.009ML-100KIHSR eTREE 1.082 ± 0.006 † 182.6 / 1.080 ± 0.002 † 0.6 /-0.009 / -0.002 /--HMF1.066 ± 0.00282.7 / 10.7 0.024 / 0.014MF0.913 ± 0.002 †237.7 / 172.1 0.009 / 0.010ML-1MIHSR eTREE 0.922 ± 0.003 3054.5 / 0.951 ± 0.000 † 130.7 /-0.227 / -0.014 /--HMF0.920 ± 0.003209.1 / 7.0 0.557 / 0.016MF2.306 ± 0.017 †21.9 / 15.6 0.007 / 0.010CiaoIHSR eTREE 1.254 ± 0.045 † 105.1 / 1.122 ± 0.001 † 84.9 /-0.002 / -0.020 /--HMF0.939 ± 0.001331.2 / 9.9 0.061 / 0.014(b) Ranking prediction@10 (Dataset: DIGINETICA)MethodAccuracy ± std.Time on CPU / GPU [sec.]HitRatioMRRTrainInferenceBPR-MF 0.314 ± 0.019 0.153 ± 0.013 † 160.4 / 62.7 0.025 / 0.016NeuMF0.307 ± 0.009 0.150 ± 0.005 † 1094.2 / 290.2 0.142 / 0.030ProtoMF0.277 ± 0.027 † 0.122 ± 0.011891.6", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "A case study investigating the relationships between user and item clusters on ML-1M. 
For each target user (item) cluster, the inner product with all item (user) clusters is computed and sorted in descending order.", "figure_data": "(a) Target: 0th User Cluster", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "A case study investigating the relationships between users/items and user/item clusters on ML-1M. For each target user (item) cluster, the probability of connecting to all users (items) is sorted in descending order.", "figure_data": "(a) 1st User Cluster -UsersUser ID Connect. Probab. GenderOccupation29150.048FK-12 student41620.046Macademic/educator29600.024Mprogrammer7730.024Mprogrammer33380.019Mwriter. . .. . .. . .. . .(b) 374th Item Cluster -ItemsItem ID Connect. Probab.Item Title2850.196Pulp Fiction2510.196Star Wars: Episode IV -A New Hope10660.172Star Wars: Episode V -The Empire ...10800.146Star Wars: Episode VI -Return of ...33380.059For a Few Dollars More (1965). . .. . .. . .", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Evaluation results for the proposed methods with different depths of hierarchical embeddings. The sign † indicates a significant difference over the model with depth 1 using Tukey's HSD test.", "figure_data": "(a) Rating PredictionMethod Depth p, qML-100KML-1MCiaoRMSE ± std.RMSE ± std.RMSE ± std.11.066 ± 0.0020.920 ± 0.0030.939 ± 0.001HMF2 31.070 ± 0.013 0.930 ± 0.003 † 0.928 ± 0.005 1.061 ± 0.005 0.928 ± 0.004 † 0.927 ± 0.01241.067 ± 0.017 0.931 ± 0.002 † 0.930 ± 0.011(b) Ranking Prediction@10 (Dataset: DIGINETICA)MethodDepth p, qHitRatio ± std. MRR ± std.10.323 ± 0.0040.126 ± 0.003BPR-HMF2 30.325 ± 0.003 0.328 ± 0.0100.125 ± 0.002 0.127 ± 0.00240.326 ± 0.0080.126 ± 0.002", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
Kai Sugahara; Kazushi Okamoto
[ { "authors": "B Abdollahi; O Nasraoui", "journal": "", "ref_id": "b0", "title": "Using explainability for constrained matrix factorization", "year": "2017" }, { "authors": "F M Almutairi; Y Wang; D Wang; E Zhao; N D Sidiropoulos", "journal": "", "ref_id": "b1", "title": "etree: Learning tree-structured embeddings", "year": "2021" }, { "authors": "I Bayer; X He; B Kanagal; S Rendle", "journal": "", "ref_id": "b2", "title": "A generic coordinate descent framework for learning from implicit feedback", "year": "2017" }, { "authors": "Y Chen; S Mensah; F Ma; H Wang; Z Jiang", "journal": "Pattern Recognit. Lett", "ref_id": "b3", "title": "Collaborative filtering grounded on knowledge graphs", "year": "2021" }, { "authors": "W Cheng; Y Shen; L Huang; Y Zhu", "journal": "", "ref_id": "b4", "title": "Incorporating interpretability into latent factor models via fast influence analysis", "year": "2019" }, { "authors": "F M Harper; J A Konstan", "journal": "ACM Trans. on Interact. Intell. Syst", "ref_id": "b5", "title": "The movielens datasets: History and context", "year": "2015" }, { "authors": "X He; L Liao; H Zhang; L Nie; X Hu; T.-S Chua", "journal": "", "ref_id": "b6", "title": "Neural collaborative filtering", "year": "2017" }, { "authors": "Y Hu; Y Koren; C Volinsky", "journal": "", "ref_id": "b7", "title": "Collaborative filtering for implicit feedback datasets", "year": "2008" }, { "authors": "Y Koren; R Bell; C Volinsky", "journal": "Comput", "ref_id": "b8", "title": "Matrix factorization techniques for recommender systems", "year": "2009" }, { "authors": "H Li; Y Liu; Y Qian; N Mamoulis; W Tu; D W Cheung", "journal": "Data Min. and Knowl. Discov", "ref_id": "b9", "title": "Hhmf: hidden hierarchical matrix factorization for recommender systems", "year": "2019" }, { "authors": "I Loshchilov; F Hutter", "journal": "", "ref_id": "b10", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "A Mashhoori; S Hashemi", "journal": "", "ref_id": "b11", "title": "Incorporating hierarchical information into the matrix factorization model for collaborative filtering", "year": "2012" }, { "authors": "J Mcauley; J Leskovec", "journal": "", "ref_id": "b12", "title": "Hidden factors and hidden topics: Understanding rating dimensions with review text", "year": "2013" }, { "authors": "A B Melchiorre; N Rekabsaz; C Ganhör; M Schedl", "journal": "", "ref_id": "b13", "title": "Protomf: Prototype-based matrix factorization for effective and explainable recommendations", "year": "2022" }, { "authors": "Z Meng; R Mccreadie; C Macdonald; I Ounis", "journal": "", "ref_id": "b14", "title": "Exploring data splitting strategies for the evaluation of recommendation models", "year": "2020" }, { "authors": "A K Menon; K.-P Chitrapura; S Garg; D Agarwal; N Kota", "journal": "", "ref_id": "b15", "title": "Response prediction using collaborative filtering with hierarchies and side-information", "year": "2011" }, { "authors": "G Montavon; W Samek; K.-R Müller", "journal": "Digit. 
Signal Process", "ref_id": "b16", "title": "Methods for interpreting and understanding deep neural networks", "year": "2018" }, { "authors": "X Ning; G Karypis", "journal": "", "ref_id": "b17", "title": "Slim: Sparse linear methods for top-n recommender systems", "year": "2011" }, { "authors": "C.-H Oh; K Honda; H Ichihashi", "journal": "", "ref_id": "b18", "title": "Fuzzy clustering for categorical multivariate data", "year": "2001" }, { "authors": "R Pan; Y Zhou; B Cao; N N Liu; R Lukose; M Scholz; Q Yang", "journal": "", "ref_id": "b19", "title": "One-class collaborative filtering", "year": "2008" }, { "authors": "S Rendle", "journal": "", "ref_id": "b20", "title": "Item recommendation from implicit feedback", "year": "2022" }, { "authors": "S Rendle; C Freudenthaler; Z Gantner; L Schmidt-Thieme", "journal": "", "ref_id": "b21", "title": "Bpr: Bayesian personalized ranking from implicit feedback", "year": "2009" }, { "authors": "J Tang; H Gao; H Liu", "journal": "", "ref_id": "b22", "title": "Mtrust: Discerning multi-faceted trust in a connected world", "year": "2012" }, { "authors": "S Wang; J Tang; Y Wang; H Liu", "journal": "", "ref_id": "b23", "title": "Exploring implicit hierarchical structures for recommender systems", "year": "2015" }, { "authors": "S Wang; J Tang; Y Wang; H Liu", "journal": "IEEE Trans. on Knowl. and Data Eng", "ref_id": "b24", "title": "Exploring hierarchical structures for recommender systems", "year": "2018" }, { "authors": "X Wang; W Pan; C Xu", "journal": "", "ref_id": "b25", "title": "Hgmf: Hierarchical group matrix factorization for collaborative recommendation", "year": "2014" }, { "authors": "Y Xian; T Zhao; J Li; J Chan; A Kan; J Ma; X L Dong; C Faloutsos; G Karypis; S Muthukrishnan; Y Zhang", "journal": "", "ref_id": "b26", "title": "Ex3: Explainable attribute-aware item-set recommendations", "year": "2021" }, { "authors": "H.-J Xue; X.-Y Dai; J Zhang; S Huang; J Chen", "journal": "", "ref_id": "b27", "title": "Deep matrix factorization models for recommender systems", "year": "2017" }, { "authors": "Y Zhang; X Chen", "journal": "Found. and Trends® in Inf. Retr", "ref_id": "b28", "title": "Explainable recommendation: A survey and new perspectives", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 367.53, 161.52, 195.04, 24.74 ], "formula_id": "formula_0", "formula_text": "\min_{U,V} \sum_{(i,j)\in\Omega} \big( X_{ij} - U_i^{\top} V_j \big)^2 + \lambda_{\Theta} \|\Theta\|_2^2 \quad (1)" }, { "formula_coordinates": [ 2, 341.05, 423.2, 217.65, 22.01 ], "formula_id": "formula_1", "formula_text": "\min_{U,V} -\sum_{(i,j)\in\Omega,\,(i,k)\notin\Omega} \ln \sigma\big( U_i^{\top} V_j - U_i^{\top} V_k \big) + \lambda_{\Theta} \|\Theta\|_2^2 \quad (2)" }, { "formula_coordinates": [ 4, 56.65, 165.06, 236.93, 11.41 ], "formula_id": "formula_3", "formula_text": "U^{(0)} = \tilde{U}^{(1)} U^{(1)} = \tilde{U}^{(1)} \tilde{U}^{(2)} U^{(2)} = \tilde{U}^{(1)} \tilde{U}^{(2)} \cdots \tilde{U}^{(p)} U^{(p)} \quad (3)" }, { "formula_coordinates": [ 4, 93.99, 302.84, 199.59, 11.41 ], "formula_id": "formula_4", "formula_text": "V^{(0)} = \tilde{V}^{(1)} V^{(1)} = \tilde{V}^{(1)} \tilde{V}^{(2)} \cdots \tilde{V}^{(q)} V^{(q)} \quad (4)" }, { "formula_coordinates": [ 4, 76.78, 621.8, 212.92, 31.76 ], "formula_id": "formula_5", "formula_text": "\min_{\tilde{U}^{(1)},\ldots,\tilde{U}^{(p)},U^{(p)},\tilde{V}^{(1)},\ldots,\tilde{V}^{(q)},V^{(q)}} \sum_{(i,j)\in\Omega} \big( X_{ij} - U_i^{(0)\top} V_j^{(0)} \big)^2 + \lambda_{\Theta} \|\Theta\|_2^2 \quad (5)" }, { "formula_coordinates": [ 4, 54.17, 686.26, 239.41, 31.1 ], "formula_id": "formula_7", "formula_text": "U^{(0)} = \tilde{U}^{(1)} \tilde{U}^{(2)} \cdots \tilde{U}^{(p)} U^{(p)}, \quad V^{(0)} = \tilde{V}^{(1)} \tilde{V}^{(2)} \cdots \tilde{V}^{(q)} V^{(q)} \quad (6) \qquad \tilde{U}^{(l)} = \operatorname{softmax}(\tilde{U}^{(l)}), \ \forall l \in \{1, \ldots, p\} \quad (7)" }, { "formula_coordinates": [ 4, 57.77, 722.38, 235.81, 13.04 ], "formula_id": "formula_8", "formula_text": "\tilde{V}^{(l)} = \operatorname{softmax}(\tilde{V}^{(l)}), \ \forall l \in \{1, \ldots, q\}. \quad (8)" }, { "formula_coordinates": [ 4, 323.21, 221.49, 235.49, 41.95 ], "formula_id": "formula_9", "formula_text": "\min_{\tilde{U}^{(1)},\ldots,\tilde{U}^{(p)},U^{(p)},\tilde{V}^{(1)},\ldots,\tilde{V}^{(q)},V^{(q)}} -\sum_{(i,j)\in\Omega,\,(i,k)\notin\Omega} \ln \sigma\big( U_i^{(0)\top} V_j^{(0)} - U_i^{(0)\top} V_k^{(0)} \big) + \lambda_{\Theta} \|\Theta\|_2^2 \quad (9)" }, { "formula_coordinates": [ 4, 362.63, 360.11, 199.94, 15.26 ], "formula_id": "formula_11", "formula_text": "U_i^{(0)\top} V_j^{(0)} = \big( \tilde{U}_i^{(1)\top} U^{(1)} \big) \big( \tilde{V}_j^{(1)\top} V^{(1)} \big)^{\top} \quad (10)" }, { "formula_coordinates": [ 4, 401.89, 379, 160.68, 23.04 ], "formula_id": "formula_12", "formula_text": "= \sum_{s} \sum_{t} \tilde{U}^{(1)}_{is} \tilde{V}^{(1)}_{jt} \, U^{(1)\top}_{s} V^{(1)}_{t}. \quad (11)" } ]
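The cluster-level summaries used in the case study (Tables 4 and 5) follow directly from Eqs. (10)-(11) above: cluster-to-cluster affinities are inner products of the level-one cluster embeddings, and a cluster's size is the total connection probability it receives. The sketch below continues the depth-one NumPy example given earlier; the argument names U_conn, U_root, V_conn, and V_root are assumptions carried over from that sketch rather than quantities defined in the paper.

```python
import numpy as np

def cluster_summary(U_conn: np.ndarray, U_root: np.ndarray,
                    V_conn: np.ndarray, V_root: np.ndarray, top_k: int = 5):
    """Cluster-level summaries in the spirit of Tables 4 and 5.

    U_conn (M x m1) and V_conn (N x n1) are the row-stochastic connection
    matrices; U_root (m1 x d) and V_root (n1 x d) are the cluster embeddings.
    """
    # Affinity between user cluster s and item cluster t: U^(1)_s . V^(1)_t.
    affinity = U_root @ V_root.T                    # m1 x n1
    # Cluster "size": total connection probability mass assigned to the cluster.
    user_cluster_size = U_conn.sum(axis=0)          # length m1
    item_cluster_size = V_conn.sum(axis=0)          # length n1
    # Users most strongly attached to each user cluster (one column per cluster).
    top_users_per_cluster = np.argsort(-U_conn, axis=0)[:top_k]
    return affinity, user_cluster_size, item_cluster_size, top_users_per_cluster
```

Sorting a row or column of affinity reproduces the kind of ranking shown in Table 4, and the columns of top_users_per_cluster correspond to the strongest members listed in Table 5.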
10.2202/1944-2866.1076
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b38", "b11", "b35", "b36", "b12", "b35", "b36", "b35", "b36", "b19", "b21" ], "table_ref": [], "text": "Algorithmic transparency is often considered an essential goal in the responsible design and deployment of AI (Winfield et al. 2021;Felzmann et al. 2020). Relevant information should be disclosed to allow interested parties to monitor, check, criticise, or intervene in decisions by the algorithm (Diakopoulos & Koliska, 2016, 3). But algorithmic transparency may have an overlooked dark side, too. Recently, Wang (2022Wang ( , 2023) ) and Franke (2022) discussed a new, critical perspective on the link between transparency and manipulation. In short, disclosing information about an algorithm may normalise behaviour that benefits the operators of the algorithm, which can constitute manipulation in the context of power disparities insofar as it exploits people's vulnerability. Thus, \"we need to worry about algorithmic transparency as manipulation\" (Wang 2023, p. 1). If this critical perspective is apt, the debate about algorithmic transparency has a severe omission that needs to be addressed.\nDrawing attention to the manipulative potential of algorithmic transparency is an important and fruitful project. But, so far, it is based on an inadequate understanding of manipulation. Wang (2022Wang ( , 2023) ) relies on the view that manipulation exploits vulnerabilities (which I will call the vulnerability view for short) to demonstrate the manipulative potential of algorithmic transparency. The vulnerability view is fraught with issues. For one, the vulnerability view fails to support the conclusion that algorithmic transparency amounts to manipulation because it is unclear how, if at all, algorithmic transparency exploits vulnerabilities.\nMoreover, the vulnerability view has independent problems on its own. Exploiting vulnerabilities is neither a sufficient nor a necessary criterion for manipulation. Thus the vulnerability view is ill-equipped to support sound conclusions about categorising influences as manipulation, which hampers the assessment of algorithmic transparency's manipulative potential.\nTherefore, another model of manipulation is needed to explore further the critical perspective on algorithmic transparency. This paper aims to contribute to this new research angle and strengthen the critical perspective on algorithmic transparency that Wang (2022Wang ( , 2023) ) championed. To do so, I show that the risk concerning algorithmic transparency's manipulative potential can be understood with the indifference view of manipulation (Klenk 2020(Klenk , 2021b)). The indifference view explains better why algorithmic transparency has manipulative potential. In short, when transparency is used without concern for, or indifferently to, \"revealing reasons to users\" (Klenk 2021b, p. 101), it quickly degenerates into manipulation, as explained by the indifference view. The indifference view enjoys independent support and exactly highlights what may go wrong with algorithmic transparency: it may be used without concern to reveal reasons to users, and, for that reason, it is manipulative.\nThe paper should interest scholars of algorithmic transparency and, more generally, anyone interested in manipulation. The aim is modest insofar as I suggest a new direction for the important discussion about algorithmic transparency's manipulative potential. 
Several questions about how, exactly, to fill in that new perspective will remain open, as I point out below. The paper should nevertheless provide a fruitful starting point for future discussion.\nApart from lessons about algorithmic transparency's link to manipulation, the paper draws a general lesson about the study of manipulation. The claim that algorithmic transparency is manipulation requires a clear view of what manipulation is to justify and defend that claim ('it is manipulation because….').\nWhen the underlying account of manipulation is misleading, we might be led in false directions, e.g., by searching in vain for features wrongly associated with manipulation. By attempting to make explicit what manipulation is and how a given phenomenon -in this case, algorithmic transparency -satisfies the relevant criteria, we can make progress in understanding the many manifestations of manipulation beyond shaky allegations and conjectures.\nSection 2 re-constructs and clarifies Wang's argument. Section 3 critically evaluates the argument, and section 4 introduces the superior indifference view that salvages Wang's argument. I conclude in section 5 and suggest questions for further research." }, { "figure_ref": [], "heading": "Wang on transparency and manipulation", "publication_ref": [ "b35" ], "table_ref": [], "text": "In this section, I offer an interpretation of Wang's (2022) argument about the link between algorithmic transparency and manipulation (sections 2.1 and 2.2) and discuss two clarifications (2.3), which serves as a basis for the critical discussion later. 1 I focus on the argument in Wang (2022) because it is the most detailed and extensive regarding the conceptual claim that algorithmic transparency can constitute manipulation. Wang (2023) and Franke (2022) seem to agree about the conceptual claim and offer different perspectives on the ethical question of whether and why such manipulation is morally problematic. I will discuss their contributions insofar as they bear on Wang's (2022) claims about the ethics of manipulation." }, { "figure_ref": [], "heading": "Situating Wang's argument", "publication_ref": [ "b38", "b1", "b35", "b24", "b34", "b35", "b36", "b12", "b35", "b36", "b12" ], "table_ref": [], "text": "The predominant view in scholarship and regulatory debates about algorithms stresses that algorithmic transparency is crucial to algorithmic systems' responsible design and use (e.g., Winfield et al. 2021).\nBut there have also been critical voices (e.g., Bannister and Connolly 2011).\nFor instance, it is recognised that transparency is not merely a neutral transmission of information but a social process linked to power dynamics (cf. Ananny and Crawford 2018).\nIn particular, as Wang (2022) notes, others have already made tentative suggestions about the link between transparency and manipulation (Kossow et al. 2021;Wachter et al. 2018). These early discussions, however, remained on a rather general level. Effectively, they suggested that corporate interests behind algorithmic transparency may corrupt its otherwise laudable goals (as a kind of ethics washing). What was lacking was a more detailed perspective on how, exactly, manipulation is related to or caused by algorithmic transparency itself.\nWang's main contribution, and the subsequent discussion in Wang (2023) and Franke (2022), take these tentative suggestions further. 
It is a novel contribution insofar as it may show in some detail how algorithmic transparency itself can constitute or lead to manipulation (Wang 2022, p. 2)." }, { "figure_ref": [], "heading": "Re-constructing Wang's argument", "publication_ref": [ "b33", "b32", "b33", "b36", "b12", "b35" ], "table_ref": [], "text": "Wang outlines a specific process -the objectification of norms that results from algorithmic transparency -and argues that this process qualifies as manipulation.\nIn his argument, Wang relies on the view of manipulation developed by Susser et al. (2018), which I call the vulnerability view for short (see also Susser et al. 2019).\nLet us now look at the individual steps in Wang's argument.\nThe FICO algorithm serves Wang as an illustrative example of algorithmic transparency's manipulative potential. Over 90% of lenders in the US use the FICO algorithm to determine the creditworthiness of individuals. Arguably, the FICO algorithm is transparent in the informational sense because there is publicly available information regarding (1) the categories of data collected, (2) the sources and techniques used to acquire that data, and (3) the specific data points that a tool uses for scoring (Hurley and Adebayo 2016, 204, 213). Because of this transparency, argues Wang, there is a risk of manipulation.\nFirst, Wang defends the empirical premise that algorithmic transparency leads to the 'objectification' of norms (Wang 2022, p. 13). 2 For example, the FICO algorithm rewards punctual payment of bills, and this information suggests that paying on time is a norm, effectively \"disciplining [consumers] according to some expected norms of being responsible credit consumers \" (Wang 2022, p. 12). Individuals gradually come to think of these norms \"as natural and necessary\" and accept them without critical analysis: the norms become \"objectified\" (Wang 2022, p. 12). They come to think of the system in a particular way (e.g., as objective and value-neutral in the case of the FICO algorithm), and their minds are \"reframed\" so that their \"thinking of other possibilities\" is \"constrained\" (Wang 2022, p. 15).\nFor example, they fail to see the FICO algorithm as an \"arbitrary,\" \"discriminatory,\" or \"unfair\" system because they are more likely \"to only focus on the scientific and objective narrative of its algorithm, ignoring other alternative narratives\" (Wang 2022, pp. 16-17). 3 Second, Wang suggests that the objectification of norms constitutes manipulation by drawing on the vulnerability view. Wang adopts a version of the vulnerability view from Susser et al. (2018), who argue that manipulation \"exploit[s] the manipulee's cognitive (or affective) weaknesses and vulnerabilities in order to steer his or her decision-making process towards the manipulator's ends\" (cf. Wang 2022, 2, 18). 4 Wang does not explicitly say how norm objectification that issue to the side in this paper. Moreover, there are several open empirical questions about Wang's norm-objectification premise. I set them aside in this paper to focus on the manipulation aspect of his argument. 3 Wang suggests that algorithmic transparency \"opens the black box\" so that people know what the rules are and can actively try to conform to them\", cf. Wang (2022, p. 13). Consumers can indirectly derive, and are directly told, about an \"ideal model\" of someone that the algorithm would rate highly. In various ways, people may be influenced to conform to the model. 
Given the rewards and punishments associated with creditworthiness, \"consumers as rational individuals will try to better their position\" behaving in ways \"to their advantage,\" cf. Wang (2022, p. 13). For example, upon learning that \"payment history\" will be considered in FICO's algorithm, individuals would tend to make prompt repayment to improve their credit scores. exploits vulnerabilities, nor does he define what 'vulnerabilities' are in general. 5 He seems to suggest, however, that norm-objectification is a way to 'exploit the manipulatee's weaknesses and vulnerabilities' to benefit the manipulator in contexts of \"asymmetrical power relations,\" such as commercial and political settings (Wang 2022, 3, 17). 6 For example, the FICO algorithm is \"a commercial tool for lenders to make profits\" (Wang 2022, p. 19); transparency about the algorithm can lead to \"disciplined\" individuals that follow the newly objectified norms about behaving in line with good credit-rating scores so that they can be charged with higher interest rates at lower risk of default, which benefits the lender (Wang 2022, p. 19). This, argues Wang, constitutes manipulation on the vulnerability view. 7 often highlights that -to the contrary -manipulation can take place non-covertly (e.g. Wang 2022, 69). Thanks to an anonymous referee for prompting me to clarify this point.\n5 Unlike Wang, Susser et al. (2018, p. 40) distinguish between general (shared by \"all human beings in virtue of their embodied condition\") and \"situated, socially constructed, or contingent vulnerabilities.\" They further distinguish the latter into structural vulnerabilities, which derive from membership in groups with differential levels of advantage (e.g. being poor, or of a certain gender), and individual vulnerabilities, which are irrespective of group membership and derive, e.g., from one's personal history or habits. Susser et al. (2018, p. 41) write that contingent vulnerabilities are not \"monolithic\" and that various overlaps and combinations of vulnerabilities can pertain to any one person. This makes it understandable why they characterise online manipulation, a type of influence that can be highly personalised and targeted, in light of vulnerabilities which, on their view, are also highly personalised and non-monolithic. 6 See also Wang (2023, p. 2).\n7 Wang (2023), responding to criticism by Franke (2022) of the norm objectification premise, notes that manipulation may also occur by other means. For example, he notes that companies may also manipulate people's behaviour \"directly\" by changing people's choice architecture, rather than through the process of norm-objectification, see Wang (2023, p. 2). It seems that this interpretation is clearly true: there are many other ways in which people can be manipulated, apart from some process of norm-objectification, e.g. by altering people's options. But that interpretation is not relevant for the claim about transparency as manipulation. The relevant, but doubtful, interpretation is that transparency itself has some role to play in these other ways of manipulation. That interpretation is doubtful because it is completely unclear what these 'other ways' might be in which transparency can manipulate without exploiting norm-objectification. Thanks to an anonymous referee for stressing this point. 
It seems that the relevant interpretation supports the reconstruction of Wang's argument offered above: norm-objectification is a specific process or way in which vulnerabilities can be exploited. In that sense, the exploration of the link between manipulation and transparency on the indifference view are a charitable contribution to Wang's suggestion that there may be 'other' ways in which algorithmic transparency can be manipulative.\nSo, according to Wang (2022), algorithmic transparency leads to norm objectification and, in the context of power disparities, operators of algorithms can exploit that vulnerability to steer people toward behaviours that benefit themselves. This, he argues, constitutes manipulation." }, { "figure_ref": [], "heading": "Two clarifications", "publication_ref": [ "b10", "b35", "b33" ], "table_ref": [], "text": "Two clarifications are in order. First, there is an ambiguity in Wang's argument between norm objectification being constitutively or causally linked to the exploitation of weaknesses and vulnerabilities. On the one hand, Wang writes that algorithmic transparency can \"lead to\" manipulation (Wang 2022, p. 17), which suggests the causal interpretation that manipulation can be a result or effect of algorithmic transparency. On the other hand, he also writes that algorithmic transparency itself is \"potentially manipulative\" (Wang 2022, p. 18), thus suggesting a constitutive interpretation.\nThis ambiguity matters for interpreting the connection between the claim that algorithmic transparency causes norm objectification and the claim that this is manipulation. If norm objectification constitutes the exploitation of vulnerabilities, then by instigating a process of norm objectification, one is immediately in the business of manipulation. In contrast, if the causal interpretation is correct, then norm objectification only causes, perhaps contingently, the exploitation of vulnerabilities, and by instigating norm \"the only necessary condition of manipulation is that the influence is hidden\" (Susser et al. 2018, p. 27). Call this the covertness criterion. Wang does not mention the covertness criterion in his discussion of the vulnerability view. More so, he might even reject the covertness criterion when he approvingly refers to Estop (2014), writing that \"power can operate through transparency to manipulate people-not only through hidden lies but through the transparency of 'truth'\" (Wang 2022, p. 5).\nIn any case, Wang (2022) focuses on processes that exploit someone's weaknesses and vulnerabilities in his discussion of manipulation. But, according to Susser et al. (2018), these are not necessarily criteria by which we can tell whether or not a given influence is manipulation. They are \"the means through which a hidden influence is imposed\" (Susser et al. 2018, p. 27, emphasis added).\nThe means by which something is achieved need not constitute criteria for that thing, and it is important not to confuse criteria and means. In analogy, police activity sometimes involves physical violence e.g., to detain suspects. But physical violence is a means associated with police activity but not a reliable criterion by which we can tell whether or not we are dealing with police activity. Likewise, exploiting vulnerabilities as way in which manipulation often happens need not be a reliable criterion by which we can tell whether we are really dealing with manipulation or some other, perhaps benign, form of influence." 
}, { "figure_ref": [], "heading": "The clarification of the vulnerability view matters because it has implications", "publication_ref": [ "b33", "b35" ], "table_ref": [], "text": "for assessing Wang's argument. On the one hand, algorithmic transparency must credibly satisfy the criterion of hidden influence to have any chance at qualifying as manipulation, if the vulnerability view is maintained. On the other hand, Wang needs to make it plausible that exploiting vulnerabilities effectively forms a sufficient criterion for manipulation. This seems to be a challenge because, so far, nobody has explicitly defended that criterion: Neither Susser et al. (2018;2019) nor Wang (2022) explicitly argue that exploiting vulnerabilities is sufficient for manipulation. We will see below how this is a problematic omission for Wang's argument." }, { "figure_ref": [], "heading": "Evaluating Wang's argument about transparency and manipulation", "publication_ref": [], "table_ref": [], "text": "So far, I have re-constructed Wang's argument and clarified two points regarding the vulnerability view. I will now turn to a critical assessment. I argue in this section that Wang's transparency-manipulation argument fails if we adopt the vulnerability view of manipulation." }, { "figure_ref": [], "heading": "The vulnerability view does not support Wang's conclusion", "publication_ref": [ "b5", "b9", "b6", "b37" ], "table_ref": [], "text": "The vulnerability view does not support the conclusion that algorithmic transparency constitutes or causally leads to manipulation.\nRecall that, according to the constitutive interpretation of Wang's argument, norm objectification due to algorithmic transparency constitutes the exploitation of vulnerabilities in the context of power disparities and, therefore, amounts to manipulation. This interpretation is not convincing for two reasons.\nFirst, it is implausible to treat norm-objectification itself as a vulnerability (and thus 'using' that process cannot count as exploiting a vulnerability). 9 Humans are social animals that follow both descriptive and social norms. Our human capacity for and propensity to follow norms is likely an evolutionary adaptation (Bicchieri 2006;Elster 2015). As part of our general propensity to follow norms, we regularly treat them as objective, which may, in many cases, also be adaptive (e.g. Bowles and Gintis 2013). That something is natural does not mean that it is ethical or legitimate. But treating norm-objectification itself as a vulnerability would leave open why only this and not other universal human traits like prosociality constitute vulnerabilities.\nSecond, it is also implausible to treat norm-objectification as constituting a vulnerability only in the context of power disparities. Norm-objectification has certain epistemic costs that arise independently of the context. In the broadest sense, it may lead to an 'unexamined life' which, as Socrates emphatically put, is not worth living (cf. Franke 2022, p. 5). 10 For example, one may believe falsehoods, suffer inadequate understanding, and misinterpret reasons for behaviour. Failing to 'make up your mind' (as Wang puts it) about a situation and an uncritical attitude means that you miss out on these intrinsic and instrumental goods, and 9 In terms of Susser et al. (2018, p. 40)`s account, norm-objectification may at best be an \"ontological\" vulnerability, rather than a contingent vulnerability. 
Their account of manipulation, however, focuses on the latter as the relevant type of vulnerability in the context of manipulation.\n10 Franke (2022) contrasts Socrates' dictum with Whitehead's (1911) emphatic emphasis of the value of automating thought and behaviour in the sense of \"extending the number of operations we can perform without thinking about them\" (1911, pp. 45-46), cited in Franke 2022. Franke is right to challenge an uncritical adoption of the thought that conscious reflection and deliberation is, per se, valuable. It is beyond the scope of this article to enter into a debate about the respective merits of the positions emphasised emphatically by Socrates and Whitehead. Wang's point about the ability to make up one's mind, and the reference to the importance of revealing reasons adopted by the indifference view (see section 4), can be appreciated at least in the minimal sense that there are some contexts when this is valuable (without claiming that this is valuable all the time). Thanks to an anonymous referee for prompting me to clarify this point. these losses arise even if there is no context of power disparity. Therefore, this interpretation does not sit well with Wang's repeated and emphatic emphasis on the relevance of power disparities as an enabling condition for the threat of manipulation by algorithmic transparency.\nSo, Wang's argument fails on a constitutive interpretation because it is not convincing that norm-objectification exploits vulnerabilities in the context of power disparities. Norm-objectification is neither a general vulnerability nor dependent on power disparities in whichever wrong-making features it has. This means that Wang's argument depends on the causal interpretation.\nAccording to the causal interpretation, algorithmic transparency leads to normobjectification, which then causally contributes to the exploitation of vulnerabilities in the context of power disparities. To assess this interpretation, recall the case of transparency about the FICO algorithm. 11 Paying bills on time is a relevant criterion for the FICO algorithm, and let us assume the information shared about the algorithm represents this accurately; the (objectified) norms that people follow actually help them to satisfy that criterion. For instance, they end up paying their bills on time, they believe that everyone should do so, and correctly believe that the algorithm judges their behaviour according to the criterion.\nTransparency in this case is accurate, not misleading, and desirable to many.\nThe norm-objectification that may follow from transparency in such cases does not amount to the exploitation of vulnerabilities by itself. Of course, the 11 When transparency means that false or misleading information is communicated about the algorithm, transparency conceivably causes exploitation. Perhaps there will be manipulation as a result. But such a case is obviously irrelevant for Wang's argument to the effect that informationally adequate, genuine transparency can lead to manipulation. This situation must be set aside.\nunderlying system may be unfair and it may be unfair precisely because it reflects and perpetuates power disparities. But that observation is immaterial for Wang's argument, which aims to show that transparency about a system rather than the system itself can be manipulation. 12One might think that transparency makes matters worse, aggravating injustices in situations of power disparities. 
That is, one might think that if a system is rigged against you in some way, then being transparent about the system does not obviously resolve the problem, and indeed it may make it worse if, by a process of norm-objectification, the unfair system remains in place. The claim may be that transparency may lead people to be 'locked in' an unfair system by exploiting their tendency to objectify norms. In the example of the FICO system, people may (erroneously) think of the system as objective and neutral and thus fail to challenge it (as, perhaps, they should).\nBut it is not the case that transparency makes matters worse. One problem is that this interpretation overlooks the distinction between being harmed ('locked in' the system through objectified norms longer than they would be otherwise) and not being benefitted (being 'released' from the problematic system by deobjectifying norms at a time at which they could be released). On the causal interpretation we are now considering, norm-objectification causally leads to harm. Insofar as harm is understood narrowly as 'less on some relevant dimension than the status quo,' then transparency leads to harm only insofar as it worsens the status quo, i.e., transparency produces harm that would not be there were it not for the norm-objectification. But insofar as a problematic, exploitative system is already in place, the status quo is not altered by norm-objectification: matters do not get worse, and transparency does not amount to additional harm. The system could be toppled, of course, but that amounts to a potential benefit that is not realised. 13 Plausibly, exploiting vulnerabilities means that someone is harmed, rather than merely not benefitted, and it is quite unclear whether and why anyone is harmed, rather than not benefitted, in that situation by the transparency.\nAnother problem is that it is entirely unclear, from an empirical perspective, whether norm-objectification would actually perpetuate the system and what the relevant comparison class is. Unfair systems of power disparity presumably do not rely on transparency to remain in place. And it is hardly conceivable that not communicating about an existing system of power disparity at all would be preferable to informational transparency about it. This shows that normobjectification through transparency itself does not cause the exploitation of vulnerabilities.\nThus, what's problematic is, again, the underlying system and not transparency about it. In situations of power disparity, people may be vulnerable, and the powerful may set up systems that exploit them. But transparency about those systems cannot be considered to lead to additional exploitation of 13 The alternative, wide notion of harm would count as harmful anything that does not contribute to an improvement of the status quo. Though I cannot argue for it here, however, that seems to me to be an implausible notion of harm. In any case, the present argument stands independently of that dispute insofar as there is no empirical evidence that non-transparency would lead to benefits, i.e. improvements over the status quo (thus, even if not procuring these benefits counts as harm on a wide notion of harm, it is simply empirically unclear whether the benefits would materialise). Thanks to an anonymous referee for prompting me to clarify this point. vulnerabilities itself. Therefore, Wang's argument fails, if indeed it depends on the vulnerability view of manipulation." 
}, { "figure_ref": [], "heading": "The vulnerability view is itself problematic", "publication_ref": [ "b33", "b29", "b3", "b3", "b21", "b35" ], "table_ref": [], "text": "The vulnerability view is also problematic independently of considerations about Wang's argument. This means that even if we could salvage the claim that normobjectification exploits vulnerabilities, we cannot readily conclude that this is manipulation.\nTo begin with, the vulnerability view does not provide a sufficiency criterion for manipulation. Such a criterion would be helpful, however, to tell whether or not a given influence is manipulation. Arguably, it would be required if an account of manipulation aspires to approximate something like an explanation of the 'nature' of manipulation. Due to the lack of a sufficiency criterion, the vulnerability view cannot be used to infer, without further argument, that algorithmic transparency amounts to manipulation. The question is thus whether the vulnerability view could give us a sufficient criterion that is plausible.\nProponents of the vulnerability view may suggest that the exploitation of vulnerabilities could do as a sufficiency criterion or, at least, as a reliable sign of manipulation. But this will likely not do.\nFirst, as I noted in section 2.3, we should keep processes or mechanisms of a phenomenon apart from criteria to identify the phenomenon. Second, and more importantly, the vulnerability view itself suggests that the exploitation of vulnerability cannot be a sufficient or reliable criterion that can help us determine whether something is manipulation. This is because the vulnerability view defends the covertness criterion (or hidden influence, see above) as a necessary criterion for manipulation (Susser et al. 2018). But vulnerabilities can also be exploited in an overt, obvious way. For example, a manager can make perfectly clear to their employee that they will face dire professional consequences if they do not abide by the manager's inappropriate wishes. Though this is a case of exploiting vulnerabilities, it is not a case of manipulation according to the vulnerability view itself. So, treating 'exploiting vulnerabilities' as a reliable sign of manipulation, let alone a sufficient criterion, is false by the lights of the vulnerability view itself because the manipulative influence is sometimes in clear sight.\nOf course, the covertness criterion has been subject to persuasive challenges in the philosophical literature on manipulation (e.g. Noggle 1996;Barnhill 2014).\nNext to several counterexamples that challenge the covertness criterion (see especially Barnhill 2014), there are also fundamental moral and conceptual reasons against that criterion. For instance, Klenk (2021b) suggested that the criterion may imply that responsibility for manipulation is shifted toward the victim in problematic ways. After all, an influence counts as manipulation, according to the vulnerability view, only insofar as it remains hidden. If that means that the victim can simply 'undo' the manipulation by being sufficiently aware, countermeasures to manipulation may, inappropriately, focus on educating victims, rather than disciplining manipulators. 
In light of these challenges, there may be good reason to give up the vulnerability view's core commitment to the necessity of the covertness criterion.\nBut with its core criterion in trouble, and no sufficiency criterion in sight, it is not clear what the vulnerability view amounts to as a view of manipulation.\nMost importantly, the vulnerability view does not help demarcate manipulation from non-manipulative influence: we are neither given a sufficient or reliable criterion nor a plausible necessary criterion.\nTherefore, there is good reason to question the vulnerability view as an adequate view of manipulation. This means that Wang's argument fails, insofar as it depends on the vulnerability view.\nIn summary, there is double trouble for Wang's argument. It purports to show that algorithmic transparency leads to norm-objectification (1) which exploits contingent vulnerabilities, and (2) thus counts as manipulation, according to the vulnerability view. Both points are in serious doubt, as shown in this section.\nGiven the vulnerability view, the argument about transparency's potential manipulation fails.\nSo far, I re-constructed and criticised Wang's argument about the potential manipulativeness of algorithmic transparency. I concluded that his argument fails if indeed we have to stick to the vulnerability view of manipulation. If that were all there is to it, the critical perspective on algorithmic transparency developed by Wang (2022) would be in trouble.\nHowever, I believe that Wang's intuition about the manipulative potential of algorithmic transparency is on the right track. The problems that I discussed in section 3 originated from the vulnerability view of manipulation, which required us to look for a sense in which algorithmic transparency constitutes or causes the exploitation of vulnerability, which was not to be found." }, { "figure_ref": [], "heading": "The indifference view and manipulation by algorithmic transparency", "publication_ref": [], "table_ref": [], "text": "In this section, I introduce the indifference view of manipulation as a superior alternative to the vulnerability view and show how it salvages Wang's intuition about the manipulative potential of algorithmic transparency.\nWang is correct that algorithmic transparency can amount to manipulation.\nIn short, when algorithmic transparency does not aim to reveal reasons to people but merely aims at achieving a certain effect, such as instigating a particular behaviour or creating a certain impression, when it is, in a slogan, indifferent to reasons (cf. Klenk 2021a), then it degenerates into manipulation. This view, the indifference view, not only has the advantage of salvaging Wang's conclusion (that algorithmic transparency has manipulative potential), but independent and general considerations about the nature and ethics of manipulation also support it." }, { "figure_ref": [], "heading": "The indifference view of manipulation", "publication_ref": [ "b28", "b4", "b20", "b22", "b28", "b22", "b19", "b21", "b3", "b29", "b31", "b29", "b31", "b3", "b31", "b30", "b23", "b30", "b23", "b29", "b16" ], "table_ref": [], "text": "The indifference view of manipulation defines manipulation as an influence that aims to be effective but is not explained by the aim to reveal reasons to the interlocutor (Klenk 2021b). 
14 For example, when a politician uses an image of 'foreign-looking' people in their political ad, and they chose that image because it will ignite people's xenophobia and racial hatred and not because (implausibly) the image will reveal to people why they have (or lack) reasons to vote for the politician, then the politician is manipulating people (cf. Mills 1995). Similarly, when a recommender system is set to display content that effectively engages people's attention, and it displays that content for that purpose rather than to reveal reasons to users e.g. about whom to vote for, what to buy, or what to believe, then the recommender system is used manipulatively (Klenk 2022, 2020).\n14 Ideas pertinent to the indifference view have also been defended by Gorin (2014b), Mills (1995), and Baron (2014). Klenk (2021) uses the term 'carelessness,' whereas Klenk (2022) introduces the more appropriate term 'indifference' to avoid the misleading impression that manipulation is, overall, lazy or not planned out. Indeed, manipulation is often carefully crafted influence in its aim to be effective, but careless or indifferent only to the aim of revealing reasons to others.\nThe indifference view thus identifies manipulation based on two conditions. First, it only looks at influence that is aimed at a particular goal. In that sense, and in line with most if not all literature on manipulation, the view excludes influence that is purely accidental from counting as manipulation (see Noggle 2018). Second, the indifference view then asks why a particular means of influence was chosen to achieve the relevant goal. Manipulative influence is characterised negatively, in terms of the manipulator's choice of a means of influence that is not being explained by the aim to reveal reasons to the interlocutor. The manipulator is, in that sense, \"careless\" (Klenk 2021b) or indifferent to revealing reasons to their victims. Since the account focuses on the grounds for choosing means of influence rather than the goal that is pursued, it is possible that the goal is to benefit the target. This means that the indifference view makes room for paternalistic manipulation, or manipulation that -overall -benefits the target. 15 The indifference view of manipulation is part of a broad family of views on the nature of manipulation that emphasise the norm-violating character of manipulation (Barnhill 2014; Gorin 2014a, 2014b; Noggle 1996, 2020). 16 This perspective suggests that manipulation is a kind of influence that falls short of some ideal. Proposals concerning the nature of the ideal differ. For example, Noggle (1996, 2020) argues that manipulation is an influence intended to make the victim violate some norm of belief, desire, or emotion. Barnhill (2014) provides a broader account, suggesting that manipulation may sometimes be an influence that makes someone behave in non-ideal ways, namely in a way that violates their self-interest. The indifference view takes yet a broader, and indeed quite different, perspective on the ideal in question. Unlike the views of Noggle and Barnhill, for example, the indifference view suggests that the ideal in question concerns the motivation of the manipulator, not the behaviour of the patient. Manipulation occurs when the genesis of the manipulator's influence falls short of an ideal, namely that it is not explained by the aim to reveal reasons to the interlocutor. 
A relevant consequence of this perspective, one that will occupy us further below, is that manipulation need not be the result of nefarious, evil intentions to do wrong.\nInstead, it can be simply -but perhaps not less problematically -the result of carelessness and indifference.\nA relevant question about the indifference view is a potential ambiguity between a strict and a wider reading of the indifference view. On a wider reading, facts about the target play a role in determining whether or not an influence is Noggle (2020) 16 See Noggle (2018) and Klenk and Jongepier (2022) for critical discussion and overviews.\nmanipulation. On the narrow reading, only the motives of the manipulator count. This is suggested, e.g. by Klenk (2022, p. 112) when he writes that manipulation comes down to \"a lack of care [by the manipulator] to reveal reasons to the manipulatee,\" which says \"a lot about the manipulator and next to nothing about the manipulatee.\" Similarly, Noggle (2018) paraphrases the view as emphasising only the motives of the manipulator, leaving out any reference to what happens 'in' (e.g. whether emotional processing is used) or to the victim (e.g. whether the victim is exploited). In this article, I hew close to existing expositions of the indifference account and thus adopt a narrow reading. An implication is that an attempt to reveal reasons through algorithmic transparency that fails to do so is not manipulative, whereas an attempt to use algorithmic transparency toward some other end that happens -perhaps by chance -to reveal reasons is manipulation. The narrow reading should not obscure, however, that facts about the targets may count toward our moral evaluation of manipulation even if they play no role in defining or conceptualising manipulation. As such, the vulnerability of people may play a role in our assessment of whether and why manipulation is morally problematic. It is bad enough that manipulators are not properly motivated, as it were. But if their irresponsible influence (contingently) leads to further, negative consequences in light of the vulnerability of people, then that is all the more reason to worry about manipulation. Moreover, it matters how manipulators perceive their targets, since their perception of their targets will influence what it means for them to be motivated to reveal reasons to them. 17 17 To further illustrate the point, consider a world of omniscient, hyper-rational beings that are not vulnerable at all. Whether or not someone strives to reveal reasons to them or not does not matter at all because they are perfect trackers of reasons. Manipulation on a narrow reading of the indifference view would appear much less of a problem insofar as it will have no discernible With this sketch of the indifference view on the table, it is helpful to briefly note two relevant contrasts with the vulnerability view. First of all, while the vulnerability view focuses on what actually happens to the patient (are they vulnerable? Are they exploited? What actually goes on in the patient?), the indifference view focuses on the agent (what explains their method of influence?). This agent-focused perspective of the indifference view will help us explain better how and why algorithmic transparency has manipulative potential. 
Second, even though the indifference view focuses on the agent-perspective, it does not require strongly nefarious intentions such as the intention to 'exploit' the victim or harm them otherwise, but associates manipulation with a characteristic indifference toward the ideal of reasoned discourse. These features make the view well poised to explain manipulation in settings such as a marketplace, where actors are simply out for their own good, and often ruthlessly so, but where it would be misleading to describe them as intentionally out to harm others. 18 consequences on the targets. This does suggest that facts about the potential targets of manipulation -such as their vulnerability -are relevant at least in two ways. First, for our assessment of the importance of manipulation in general and, second, for the moral assessment of a specific instance of manipulation. One can consistently adopt the narrow reading of the indifference view for purposes of defining or conceptualising manipulation and acknowledge the significance of consequences for evaluating manipulation. It is a further question whether the strict reading aligns with intuitions about manipulation. Since it mirrors how, for example, we talk about deception (a deceiver can accidentally make people believe the truth), I take it that the narrow reading enjoys sufficient support; see also Klenk and Jongepier (2022). I thank an anonymous referee for pressing me to clarify this point and for providing a version of the helpful example discussed in this footnote. 18 Since this is but a sketch of the indifference view (and necessarily so, in view of the aim of the article), relevant questions remain concerning, for example, the precise nature of the ideal to reveal reasons to the interlocutor, and an adequate justification of that ideal (see Noggle (1996) and Hanna (2015) for pertinent discussion about the objectivity of the ideal in question). For the purposes of this article, however, the view is adequately described to explore the implications for the manipulative potential of algorithmic transparency." }, { "figure_ref": [], "heading": "Indifferent algorithmic transparency is manipulation", "publication_ref": [ "b7", "b26", "b22", "b2", "b25", "b35", "b36", "b36", "b18", "b8", "b13" ], "table_ref": [], "text": "The indifference view explains nicely what might be manipulative about algorithmic transparency. In short, algorithmic transparency may not be designed to enhance the decision making capabilities of the users of the algorithm by revealing reasons to them. If that is the case, then algorithmic transparency will be manipulative.\nThere is some reason to think that at least some instances of algorithmic transparency are manipulative for being indifferent. The operators of an algorithm may be transparent for all sorts of reasons, and the reasons that motivate them in choosing a particular method of transparency may not always be to reveal reasons to the users. Instead, they may publish information simply to serve the aim to comply with some regulatory demand, to appear in a certain light and to leave a certain impression on users, to make users behave in a certain way, or simply because a certain functionality that enables transparency is available in a pertinent software library that the developers are using. 19 Notably, the pertinent point is not the motivation to be transparent in the first place but why particular means or methods of being transparent have been chosen. 
These two things can come apart.\nFor example, an organisation may decide to be transparent about their algorithm because they are convinced that it is, ethically, the right thing to do.\nStill, the organisation faces a question about how to achieve algorithmic transparency, that is, what means or methods to employ. They might have the option, for example, of using text to communicate or to record brief instructional videos. There is some evidence that videos enhance learning in educational contexts (Brame 2016). The organisation may choose videos because of their (presumed) helpfulness in revealing reasons to users; in that case, their attempt at transparency is clearly not manipulative. But if the organisation opts for videos because they reckon that it will win them favours with users and scholars interested in algorithmic transparency, their influence is manipulative. It is not explained by the aim to reveal reasons to users, but tries to achieve some other end effectively.\n19 Thanks to an anonymous referee for suggesting the last point.\nIn short, algorithmic transparency has manipulative potential because the providers of said transparency may be transparent in ways that are simply indifferent to informational quality and revealing pertinent reasons to the users.\nInstead, they may be much more interested in inducing certain behaviours, such as continued or increased use of and reliance on the system that the algorithm operates in. 20 20 An important set of questions concerns the motives that determine whether or not the attempt at algorithmic transparency was manipulative. First, whose motives count? The 'providers' of algorithmic transparency, like the FICO, are often corporations or other institutions, and there is a large debate about whether or not to think of them as group agents, or mere collectives of individuals (List and Pettit 2011). So far, accounts of manipulation rely on a notion of intention that is at least contentious to ascribe to such groups or artificial entities. Since the ultimate criterion for manipulation on the indifference view is an explanation of an influence, it is at least possible to give such an explanation independently of intention but instead in terms of function or purpose, which may more easily be ascribed to groups and artificial agents, cf. Klenk (2022). Related to that question is the question of how to determine which amongst the many motives that reside within an individual agent (or are 'distributed' across collectives of individuals) count toward the assessment of manipulation. For example, a manager may, next to the aim to reveal reasons to their employee, be interested in fulfilling their duty, finishing work that day, and so on. More pertinently, Barclay and Abramson (2021) demonstrate that there are many roles and motives that may legitimately be associated with a given algorithmic system. A tentative suggestion on behalf of the indifference view is that the motive to reveal reasons need not be the only or primary motive (which seems overly demanding) but at least a causal source for chosen means of influence, i.e. the chosen influence would be chosen across a range of counterfactual contexts (Lagnado et al. 2013). This would account for the intuition that manipulative influences are such that the manipulator all too easily forgoes the aim to reveal reasons (which may be present) in favour of the aim to be effective. 
Tentative as this suggestion is, it would have some bearing on the practical question of how to regulate manipulative algorithmic transparency. For instance, regulation should aim to encourage robust motives to reveal reasons. Their presence could be assessed by assessing which of the available means of influence -some more, some less reason-revealing -were, in fact, chosen by the influencer. Ultimately, however, this does not fully answer the question of whose motives count, and the tentative suggestion would need to be developed further. I thank an anonymous referee for pressing this point.\nApplied to Wang's example of the FICO algorithm, the indifference view suggests the following picture. If transparency about the FICO algorithm is manipulative, the manipulativeness does not lie in the process in which users process the provided information (does it exploit vulnerabilities?) or the effects the transparency has on the user (does it exploit users?). Instead, the manipulativeness lies in the purpose of the transparency. In the non-manipulative case, the aim is to contribute to the user's deliberation. Inquiry is an important case of deliberation. Credit scoring systems raise a heap of important questions for deliberation. For instance, 'Insofar as I want a credit, how should I behave to get a good credit score?' The FICO transparency is, conceivably, a good-faith contribution to this inquiry. Another question is 'How does this system actually make decisions?' Again, transparency in the FICO case may actually be intended as a contribution to that question. However, there is surely manipulative potential in the FICO case, too. The purpose of transparency about the FICO case may not be to genuinely contribute to any question that users may have. Instead, the aim may simply be to effectively generate a certain belief such as 'the FICO algorithm is good' or a certain type of behaviour. In that case, the system is clearly manipulative, as Wang warned, and the indifference view tells us why.\nInterestingly, Wang offers several remarks that are well aligned with the indifference view already and I want to suggest that some of his observations can fruitfully be understood in light of the indifference view of manipulation.\nFirst, Wang's (2022, 2023) emphatic emphasis on power disparities in commercial or political contexts serves as a useful reminder that algorithmic transparency in the informational sense may seem to be beneficial to stakeholders (e.g. users, and regulators) while the true aims of the deployer of the algorithm may be quite different from serving stakeholders' interests. Deployers of algorithms may thus be disingenuous about their true motives for algorithmic transparency in three relevant ways. They may intend to mislead stakeholders through algorithmic transparency, they may dissimulate their reasons for pushing for algorithmic transparency, or -and this is what the indifference view emphasises -they may use methods for algorithmic transparency for the wrong kinds of reasons. As Wang puts it in the case of the FICO algorithm, their pursuit of algorithmic transparency \"does not mean that the FICO Score really cares about credit users' true interests\" (Wang 2022, p. 19, emphasis added). From the perspective of the indifference view, the provider of the FICO algorithm may manipulate because they go about their transparency with some motive other than to actually reveal reasons to the user. 
This element of indifference or carelessness in the possible motives behind algorithmic transparency is aptly observed and nicely links up with the indifference account of manipulation.\nSecond, Wang also comments on the moral problem associated with the manipulation that results from algorithmic transparency in ways that are not readily compatible with the vulnerability view, but valuable if seen in the light of the indifference view. Wang (2023) aptly observes that a problem with manipulation is epistemic, and collective insofar as it hampers our collective ability to deliberate and that it requires a collective, political solution. In particular, he suggests that the real model is political and that we have a duty to support collective deliberation. As he puts it, \"we as society have the duty to build algorithmic systems that can ensure the healthy development of humans' deliberative capacity\" (Wang 2023, p. 6). These remarks reflect one of the core insights of the indifference view, namely that manipulation somehow hinders or at least does not reliably promote deliberation.\nIf seen in this light, some of the features that Wang (2022) explains as manipulation based on the vulnerability view -such as norm-objectification -turn out to be consequences of manipulative influence on the indifference view. For example, suppose the operators of the FICO algorithm chose their informational influences not based on their propensity to reveal reasons to their users, but based on whether they will make the users like or endorse the FICO algorithm. In that case, it is tempting to think that that may make users fail to consider other possible alternatives to set up credit systems. Again, from the perspective of the indifference view, Wang has aptly described a possible result of manipulation. 21\n21 More broadly, and beyond the credit system that Wang discusses, the practice of consciousness raising, cf. Keane (2016), can be interpreted as a way to come to question fixed social structures and -insofar as these structures are to an extent malleable and constructed -it would be a mistake to consider them fixed. The indifference view may -even on a narrow reading, and as a purely contingent, empirical matter -explain how the very process of consciousness raising does not get off the ground as a result of manipulative transparency, insofar as influence that is indifferent to reason-revealing may (contingently) end up being not reason-revealing influence. It is important to emphasise, again, that this is an empirical question. I am not aware that it has, in specific detail, been explored yet. There is, however, relevant anecdotal evidence from education or training which, in many areas, starts out being geared toward effective influence (simply getting the student to perform a task) and then more and more toward understanding (getting the student to understand why and how the task is performed).\nAlgorithmic transparency thus may or may not lead to norm-objectification, and we can leave open whether any step of that process involves exploited vulnerabilities. It is still possible, and perhaps likely, that the deployer of an algorithm may be careless or indifferent in deploying the means of transparency.\nOn the one hand, it will surely be explained by the aim to do something in the vein of transparency. But, on the other hand, the particular method of achieving transparency -a video, a text, and so on -may not be designed such as to be genuinely informative and reveal reasons to users. 
In that situation, the algorithmic influence may qualify as manipulation in the sense of being careless or indifferent influence.\nTherefore, the indifference view secures the conclusion about the manipulative potential of algorithmic transparency. There is a sense of manipulation that not only resonates with Wang's general remarks and apt observations about the social-and power-related implications of algorithmic transparency but also save his argument. The influence that results from algorithmic transparency may be indifferent or careless and, therefore, constitute manipulation.\nThe indifference view also provides a fruitful lens to explore the manipulative potential of algorithmic transparency further. For instance, the indifference view suggests that we should think carefully about what kind of means of achieving transparency are best at revealing reasons to users. 22 Mere transparency may not suffice. For instance, as Lorenz-Spreen et al. ( 2021) point out in a different albeit 22 Though only facts about the manipulator matter for the definition of manipulation (see section 4.4), some of those facts will be facts about what manipulators believe or assume about their targets insofar as what it means to reveal reasons to someone is at least partly determined by that person's psychology. As discussed above, it is still facts about the manipulator (their beliefs, etc.) that matters for determining whether something is manipulation. But insofar as we strive for non-manipulation in our interactions, or aim for design for non-manipulative transparency, we need to form a conception of what it means to reveal reasons to users. Hence, non-manipulators need to form a perception of people's vulnerabilities in order to determine what it means to reveal reasons to them. I thank an anonymous referee for prompting me to clarify this point. related context, merely making transparent to users that they are now seeing a personalised ad does not significantly alter their decisions. Arguably, more than simple informational transparency is required to reveal reasons to users. So, not being indifferent to users' deliberation may thus mean that one must engage significantly with users' perspectives to get the message across. These explorations could draw on concrete empirical explorations of (requirements for) algorithmic transparency that already exist. For example, a study by Dexe et al. (2020) exemplifies how the value-sensitive design approach can be used to explore transparency while drawing on the contributions from stakeholders. Building on the indifference view, future work from such a design perspective along these lines could explicitly address what it would take to reveal reasons to relevant users, which could serve, for instance, as a guideline for the 'providers' of algorithmic transparency. 23There are also notable open questions about the manipulative potential of algorithmic transparency from the indifference view's perspective that, as pointed out in the introduction, cannot be answered here. 
Next to future work on refining and explicating the indifference view, which touches on rather philosophical questions about the nature and ethics of manipulation and the underlying ideal of deliberation, in particular concerning the question of how to assess the aims of influencers for manipulation, there are difficult and important questions about operationalising guidelines for non-manipulative transparency, reliable methods to detect manipulative transparency 'in the field,' and investigations of the effects of manipulative transparency. Some tentative suggestions in these directions are to explore value sensitive design approaches (Friedman and Hendry 2019) under the heading of design for non-manipulative transparency, empirical investigations into the motives of providers of transparency, as well as modelling approaches to study the effects of manipulative transparency. If the indifference view inspires further exploration of the manipulative potential of algorithmic transparency, along those lines or others, then this article has its goal." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Algorithmic transparency is often regarded as an unequivocally good goal in scholarly and regulatory debates about the societal implications of algorithms.\nThese debates are enriched by a critical perspective that suggests that algorithmic transparency may harbour manipulative potential. So far, however, that perspective rested on a shaky view of manipulation, the vulnerability view. Therefore, I suggested here an improved notion of manipulation, the indifference view, that salvaged Wang's key insight about hitherto underacknowledged manipulative aspects of algorithmic transparency. The indifference view suggests that manipulation is a purposeful influence that is not explained by the aim to reveal reasons to the interlocutor. The algorithmic transparency providers may often choose means to achieve transparency for motives other than revealing reasons to users -they may, for example, be interested in leaving a certain impression with users, or complying with regulatory demands. Insofar as these motives replace or crowd out the motive to genuinely contribute to people's understanding of the algorithm, there will indeed be manipulative transparency.\nThus, Wang -and other proponents of the critical perspective on algorithmic transparency -are on the right track, and the indifference view explains why.\nFuture investigations should explore open questions about the indifference view itself, and assess algorithmic transparency in general or in concrete cases for signs of manipulation, which will plausibly require both philosophical as well as empirical approaches. A notable question that has not been discussed concerns the ethical dimension: future discussions should also focus on the wrong-making features of manipulative transparency, as understood by the indifference view. 24 24 I thank the team at the Delft Digital Ethics Centre and two very constructive, meticulous, and helpful anonymous referees for valuable feedback on an earlier version of this paper." } ]
A series of recent papers raises worries about the manipulative potential of algorithmic transparency (to wit, making visible the factors that influence an algorithm's output). But while the concern is apt and relevant, it is based on a fraught understanding of manipulation. Therefore, this paper draws attention to the 'indifference view' of manipulation, which explains better than the 'vulnerability view' why algorithmic transparency has manipulative potential. The paper also raises pertinent research questions for future studies of manipulation in the context of algorithmic transparency.
Algorithmic transparency and manipulation
[ { "figure_caption": "objectification one has not yet automatically committed manipulation. In what follows, I will discuss both interpretations: the constitutive and the causal interpretation of Wang's claims. Second, Wang does not adequately represent the vulnerability view of manipulation he adopts from Susser et al. (2018). 8 Susser et al. emphasise that", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" } ]
Michael Klenk
[ { "authors": "M Ananny; K Crawford", "journal": "New Media and Society", "ref_id": "b0", "title": "Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability", "year": "2018" }, { "authors": "F Bannister; R Connolly", "journal": "Policy & Internet", "ref_id": "b1", "title": "The Trouble with Transparency: A Critical Review of Openness in e-Government", "year": "2011" }, { "authors": "I Barclay; W Abramson", "journal": "", "ref_id": "b2", "title": "Identifying Roles, Requirements and Responsibilities in Trustworthy AI Systems", "year": "2021" }, { "authors": "A Barnhill", "journal": "Oxford University Press", "ref_id": "b3", "title": "What is manipulation", "year": "2014" }, { "authors": "M Baron", "journal": "Oxford University Press", "ref_id": "b4", "title": "The mens rea and moral status of manipulation", "year": "2014" }, { "authors": "C Bicchieri", "journal": "Cambridge University Press", "ref_id": "b5", "title": "The grammar of society: The nature and dynamics of social norms", "year": "2006" }, { "authors": "S Bowles; H Gintis", "journal": "Princeton University Press", "ref_id": "b6", "title": "Cooperative species: Human reciprocity and its evolution", "year": "2013" }, { "authors": "C J Brame", "journal": "CBE life sciences education", "ref_id": "b7", "title": "Effective Educational Videos: Principles and Guidelines for Maximizing Student Learning from Video Content", "year": "2016" }, { "authors": "J Dexe; U Franke; A A Nöu; A Rad", "journal": "Springer", "ref_id": "b8", "title": "Towards Increased Transparency with Value Sensitive Design", "year": "2020" }, { "authors": "J Elster", "journal": "Cambridge University Press", "ref_id": "b9", "title": "Social norms", "year": "2015" }, { "authors": "J D S Estop", "journal": "Cultural Studies ↔ Critical Methodologies", "ref_id": "b10", "title": "WikiLeaks: From Abbé Barruel to Jeremy Bentham and Beyond", "year": "2014" }, { "authors": "H Felzmann; E Fosch-Villaronga; C Lutz; A Tamò-Larrieux", "journal": "Science and engineering ethics", "ref_id": "b11", "title": "Towards Transparency by Design for Artificial Intelligence", "year": "2020" }, { "authors": "U Franke", "journal": "Philosophy & Technology", "ref_id": "b12", "title": "How Much Should You Care About Algorithmic Transparency as Manipulation", "year": "2022" }, { "authors": "B Friedman; D Hendry", "journal": "The MIT Press", "ref_id": "b13", "title": "Value sensitive design: Shaping technology with moral imagination", "year": "2019" }, { "authors": "M Gorin", "journal": "American Philosophical Quarterly", "ref_id": "b14", "title": "Do Manipulators Always Threaten Rationality?", "year": "2014" }, { "authors": "M Gorin", "journal": "Oxford University Press", "ref_id": "b15", "title": "Towards a theory of interpersonal manipulation", "year": "2014" }, { "authors": "J Hanna", "journal": "Social Theory and Practice", "ref_id": "b16", "title": "Libertarian Paternalism, Manipulation, and the Shaping of Preferences", "year": "2015" }, { "authors": "M Hurley; J Adebayo", "journal": "Yale J.L & Technology", "ref_id": "b17", "title": "Credit scoring in the age of big data", "year": "2016" }, { "authors": "W Keane", "journal": "Princeton University Press", "ref_id": "b18", "title": "Ethical life: Its natural and social histories", "year": "2016" }, { "authors": "M Klenk", "journal": "Springer", "ref_id": "b19", "title": "Digital Well-Being and Manipulation Online", "year": "2020" }, { "authors": "M Klenk", "journal": "SSRN Electronic 
Journal", "ref_id": "b20", "title": "Interpersonal Manipulation", "year": "2021" }, { "authors": "M Klenk", "journal": "Review of Social Economy", "ref_id": "b21", "title": "Manipulation (Online): Sometimes Hidden, Always Careless", "year": "2021" }, { "authors": "M Klenk", "journal": "Routledge", "ref_id": "b22", "title": "Manipulation, injustice, and technology", "year": "2022" }, { "authors": "M Klenk; F Jongepier", "journal": "Routledge", "ref_id": "b23", "title": "Manipulation Online: Charting the field", "year": "2022" }, { "authors": "N Kossow; S Windwehr; M Jenkins", "journal": "", "ref_id": "b24", "title": "Algorithmic transparency and accountability", "year": "2021" }, { "authors": "D A Lagnado; T Gerstenberg; R Zultan", "journal": "Cognitive science", "ref_id": "b25", "title": "Causal responsibility and counterfactuals", "year": "2013" }, { "authors": "C List; P Pettit", "journal": "Oxford University Press", "ref_id": "b26", "title": "Group Agency: The possibility, design, and status of corporate agents", "year": "2011" }, { "authors": "P Lorenz-Spreen; M Geers; T Pachur; R Hertwig; S Lewandowsky; S M Herzog", "journal": "Scientific Reports", "ref_id": "b27", "title": "Boosting people's ability to detect microtargeted advertising", "year": "2021" }, { "authors": "C Mills", "journal": "Social Theory and Practice", "ref_id": "b28", "title": "Politics and Manipulation", "year": "1995" }, { "authors": "R Noggle", "journal": "American Philosophical Quarterly", "ref_id": "b29", "title": "Manipulative Actions: A Conceptual and Moral Analysis", "year": "1996" }, { "authors": "R Noggle", "journal": "", "ref_id": "b30", "title": "The Ethics of Manipulation", "year": "2018" }, { "authors": "R Noggle", "journal": "American Philosophical Quarterly", "ref_id": "b31", "title": "Pressure, Trickery, and a unified account of manipulation", "year": "2020" }, { "authors": "D Susser; B Roessler; H Nissenbaum", "journal": "Internet Policy Review", "ref_id": "b32", "title": "Technology, autonomy, and manipulation", "year": "2019" }, { "authors": "D Susser; B Roessler; H F Nissenbaum", "journal": "Georgetown Law Technological Review", "ref_id": "b33", "title": "Online Manipulation: Hidden Influences in a Digital World", "year": "2018" }, { "authors": "S Wachter; B Mittelstadt; C Russell", "journal": "Harvard Journal of Law & Technology", "ref_id": "b34", "title": "Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR", "year": "2018" }, { "authors": "H Wang", "journal": "Philosophy & Technology", "ref_id": "b35", "title": "Transparency as Manipulation? Uncovering the Disciplinary Power of Algorithmic Transparency", "year": "2022" }, { "authors": "H Wang", "journal": "Philosophy & Technology", "ref_id": "b36", "title": "Why Should We Care About the Manipulative Power of Algorithmic Transparency", "year": "2023" }, { "authors": "A N Whitehead", "journal": "originally Williams & Norgate", "ref_id": "b37", "title": "An introduction to mathematics. E-book by Project Gutenberg", "year": "1911" }, { "authors": "A F Winfield; S Booth; L Dennis; T Egawa; H Hastie; N Jacobs", "journal": "Frontiers in Robotics and AI", "ref_id": "b38", "title": "IEEE P7001: A Proposed Standard on Transparency", "year": "2021" } ]
[]
2023-11-22
[ { "figure_ref": [ "fig_5", "fig_5" ], "heading": "Introduction", "publication_ref": [ "b43", "b28", "b33" ], "table_ref": [], "text": "Media retargeting is a way to edit images, videos, 3D objects, or even entire 3D scenes by a global deformation such that relevant content, details, and features are properly preserved. The overall goal is to change the aspect ratio of the scene's bounding box so that it fits an allocated space, e.g. fitting an image to a display with a prescribed format. Retargeting is based on the idea of identifying regions with little detail and accumulating the necessary distortion induced by the deformation in these regions.\nFigure 1. Different objectives applied to different visual domains with our approach: we demonstrate retargeting of images, NeRFs, and meshes. Image from [44].\nSmall editing operations are possible, e.g. by concentrating the distortion there, or by specifying a region containing a particular object so that this object is removed. While most existing retargeting methods typically use some form of seam carving, i.e. the deletion of discrete pixels [2, 8, 9, 43], we formulate the problem via a continuous deformation field that maps from the retargeted image or scene back to the undistorted input such that, e.g., colours or normals can be queried. This allows us to handle discrete media (images) and continuous media (3D objects and scenes) equally. In this paper, we introduce a more general approach for retargeting various forms of visual media, including the first approach that allows retargeting of neural radiance fields [29] (NeRFs). We also demonstrate our approach for images and polygon meshes, as can be seen in Fig. 1. Our contributions are as follows: (a) We introduce the use of a neural deformation field compressing or stretching low information content regions to achieve a smaller or larger output or to follow other editing objectives. (b) We demonstrate the domain agnostic nature of our formulation by applying it to images, 3D meshes, and 3D scenes in the form of NeRFs. (c) We show that the high flexibility gained from optimising a global deformation field produces better outputs than iteratively computing seams. To achieve this, we regularise the neural deformation fields to follow general sanity guidelines, while attempting to minimise distortion in places with presumed high information content. We evaluate our results on both qualitative and quantitative levels. To showcase our approach on interesting scenes captured as NeRFs, and inspired by the dataset of Richter et al. [41], we also provide a small synthetic NeRF retargeting dataset captured in the video game GTA V [34]. Our method can be applied in a matter of seconds, requires only minor modifications for vastly different applications and domains, and is straightforward to implement. We will provide the code of our method demonstrated on images on GitHub. 1" }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b5", "b8", "b42", "b3", "b9", "b29", "b31", "b45", "b10", "b11", "b22", "b4", "b23", "b13", "b44", "b53", "b16", "b28", "b52", "b51", "b19", "b56", "b47", "b46", "b54", "b39", "b26", "b35", "b49" ], "table_ref": [], "text": "As our approach is inspired by seam carving, it produces results from a single example without any learned prior, and was designed to allow seam carving-like deformations for learned 3D representations. 
We hence discuss these three branches of related work.\nSeam Carving Image/video retargeting is an important tool in computer vision and computer graphics. Early works [25] first annotate or detect the region of interest (ROI), which is uniformly scaled to the target size afterwards. While contents out of the ROI are deformed by fisheye warping, Liu and Gleicher [26] later extend it to the video domain by introducing a motion salience map. Avidan and Shamir [2] propose seam carving, a content-aware image resizing method. It first defines an energy map by summing the gradient norms along the horizontal and vertical directions. The cost of a seam is defined by adding up the energy values of pixels on it. The seam with minimal energy can be found by dynamic programming and will be removed in the resizing process. Rubinstein et al. [42] extend the seam carving idea to the video domain and transform the dynamic programming problem to a graph cut problem. Follow-up works [8, 9,43] incorporate homogeneous cropping and scaling with seam carving and excavate more highlevel information like symmetry [51], depth [4], and object similarity [10] to assist seam carving. With the advances in deep learning, a few methods [30,32,33] made attempts to detect and localise the seam carving forgery. Song et al. [46] propose Carving-Net to replace the previous handcrafted energy map with a neural network while the seam finding algorithm is still dynamic programming. Instead of using networks to improve seam selection, our approach replaces the discrete seams and dynamic programming itself with a neural network, globally optimising a continuous deformation field.\nSingle Example Generation The goal of single example generation is to discover similar local patterns from a single input and synthesise novel samples. Texture synthesis on 2D images and 3D shapes has been widely studied in computer vision and computer graphics by non-neural methods [11,12,23,49]. There are also methods focusing on synthesising geometric textures for 3D shapes [5,18,24], which transfer local geometry details from an example shape to other surfaces. Recently, neural-based methods have been proposed to generate 2D textures [14,45,54] and transfer them to 3D shapes [16,17], resulting in improved results. With recent advances in implicit representation [7, 28,35] and neural radiance fields [29], Wu et al. propose to learn implicit 3D shapes from a shape with [53] or without texture [52]. To enhance the realism of synthesised shapes, Huang et al. [20] reconstructs the 3D shape with a NeRF by optimising texture features on a reconstructed mesh. These features can be applied to arbitrary surfaces to render the target surface with synthesised textures. These methods often only work for a certain kind of data format, while our method proposes a domain agnostic approach to synthesise visual data from a single example.\nDeformations for Learned 3D Representations Neural radiance fields (NeRF), as a newly proposed 3D shape representation, has been widely used in scene reconstruction and novel view synthesis. However, the original NeRF only works for static scenes thus unable to deal with deformation in dynamic scenes or generate motions for a static scene. To overcome this, the deformation field is introduced to NeRF. NeRF-Editing [57] is the first to explore deformation in NeRF. It first reconstructs the explicit mesh of a static scene with NeuS [48] and deforms the explicit mesh with the ARAP algorithm [47]. 
Then sample points in volume rendering are deformed along with the mesh via barycentric coordinate interpolation, resulting in the change of rendered images. Follow-up NeRF deformation methods [13,39,55] utilise a similar pipeline but use different geometry prox-ies. To reconstruct dynamic scenes, Albert et al. [40] first explore the possibility by adding an extra time-conditioned deformation field to the static NeRF. The time-conditioned deformation field transforms sample points in the observation space into the canonical space, where their colours and opacities are queried and then rendered into images. This idea stimulated a series of dynamic NeRF reconstruction methods [6, 27,36,37,50,56]. While our approach also uses a deformation field, we inject general plausibility constraints into it to keep the results plausible, enabling the idea behind seam carving on different types of visual data." }, { "figure_ref": [ "fig_1" ], "heading": "Deformation Fields for Retargeting Visual Data", "publication_ref": [], "table_ref": [], "text": "Editing strategies for visual data require avoiding modifications on meaningful contents, which create visible artefacts. An approach that focuses on retargeting the input to match a target size can be guided by the following set of rules: (a) Content Aware Objective: Prevent the accidental creation of new or the removal of existing regions with high information content (controlling where deformation happens). (b) Sanity Objective: Avoid introducing deformations that yield implausible results (controlling how deformation happens). Note that for the following sections, we only consider retargeting visual data to a smaller size (shrinking). This formulation can be extended naturally to provide editing (see Sec. 3.5) or to retarget to a larger size (expansion, see Sec. 8).\nDiscrete Solution Seam carving [2] performs editing by manipulating (removing or doubling) one seam of pixels at a time, i.e. a path with the width of one single pixel passing through the image. Through dynamic programming, seam carving chooses this path based on the lowest energy of the pixels, with high energy indicating information-rich areas. In a simple case, this energy is the colour gradient. This yields a discrete solution to the retargeting problem that is both content aware and sane. While this naturally extends to voxelised 3D scenes, it would become costly at high resolutions while suffering from voxelisation artefacts.\nContinuous Solution Our approach follows the same guidelines, but instead of discrete, pixel-wise paths through the image, we learn a deformation function for continuous inputs. By working directly in the continuous domains (e.g. not requiring voxelisation of a 3D scene), our formulation is less prone to artefacts, while allowing for more degrees of freedom by accommodating non-continuous compression and expansion (see Fig. 4). We consistently utilise the same core formulation, regardless of the underlying domain and application, e.g. moving an object in an image or retargeting a 3D scene. For clarity, we will refer to our continuous seams as folds." }, { "figure_ref": [], "heading": "General Formulation", "publication_ref": [], "table_ref": [], "text": "For input visual data I, we train a neural deformation field D, a simple MLP. The deformation field provides an offset to each input coordinate, i.e. it learns an I-specific deformation field to look up the content of I at a different place (see Fig. 2). 
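Concretely, D can be a small coordinate MLP that returns one scalar offset per query point, which is then used to warp the lookup position. The snippet below is a minimal PyTorch-style sketch of this idea; the residual MLP with positional encoding loosely mirrors the architecture described in the supplementary (Sec. 6), but the encoding frequencies, layer count, and function names are illustrative assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Standard sin/cos encoding of low-dimensional coordinates in [0, 1]."""
    def __init__(self, num_freqs: int = 6):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(num_freqs) * torch.pi)

    def forward(self, x):                          # x: (N, d)
        angles = x[..., None] * self.freqs         # (N, d, F)
        enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
        return torch.cat([x, enc.reshape(x.shape[0], -1)], dim=-1)

class DeformationField(nn.Module):
    """MLP D: query point -> scalar offset along the deformation direction v."""
    def __init__(self, in_dim: int = 2, width: int = 64, num_freqs: int = 6):
        super().__init__()
        self.enc = PositionalEncoding(num_freqs)
        enc_dim = in_dim * (2 * num_freqs + 1)
        self.inp = nn.Linear(enc_dim, width)
        self.hidden = nn.Sequential(
            nn.Linear(width, width), nn.LeakyReLU(),
            nn.Linear(width, width), nn.LeakyReLU(),
        )
        self.out = nn.Linear(width, 1)
        self.act = nn.LeakyReLU()

    def forward(self, p):                          # p: (N, d) points of the output domain
        h = self.act(self.inp(self.enc(p)))
        h = h + self.hidden(h)                     # residual connection
        return self.out(h).squeeze(-1)             # (N,) scalar offsets D(p)

def warp(p, v, field):
    """Deformed lookup position p + v * D(p), cf. Eq. (1) below."""
    return p + v * field(p)[..., None]
```
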
For any point p ∈ P of the domain of I, the deformation D(p) yields a scalar offset in a deformation direction v for p. We define P to contain all possible input positions for I, e.g. P = [0, 1]^2 for a square image. Adding this scaled v to p, we obtain a deformed point p':\n$$p' = p + v\,D(p) \quad (1)$$\nThis deformation field is always tuned to the visual data I(p), e.g. a previously trained NeRF or an image. We only optimise the deformation field that gives an offset for the mapping from the edited output data to the input data. To fulfil the objectives stated in Sec. 3.1, we introduce two sets of losses:\nContent Aware Objectives We apply regularisation to guide the deformation process in order to adjust the learnt deformation to the content of the data. To discourage noticeable changes in regions with high information content, we penalise the product between deformation magnitude and information content, expressed as an energy function E. A change in the deformation field between two points also changes the distance between them in the output, suggesting a potentially noticeable deformation at this point. We penalise these distortions in areas with high information content. We define E(p) to be a measure of content information at p, expressed as, e.g., the gradient in the visual data:\n$$E(p) = \|\nabla I(p)\|_2 \quad (2)$$\nWe therefore define our general energy term that measures content deformation as follows:\n$$L_C = \int_{p \in P} E(p + v\,D(p)) \cdot \|\nabla D(p)\|_1 \, dp \quad (3)$$\nThus, in Eq. (3), we effectively minimise the product of the information content at the deformed point, E(p + v\,D(p)), and the change in the deformation itself, \|\nabla D(p)\|_1, penalising folds in the deformation at regions with high information content. Using the L_1-norm for the distortion promotes piecewise constant, possibly sparse, deformations. We can then split up the change in deformation to obtain:\n$$L_C = \int_{p \in P} E(p + v\,D(p)) \cdot \left( \left|\tfrac{\partial}{\partial v} D(p)\right| + \left|\tfrac{\partial}{\partial v^{\perp}} D(p)\right| \right) dp \quad (4)$$\nThis equation is then split into two components, L_e penalising stretching or compression and L_s punishing shearing (deformation change in a direction that is not our deformation direction). For a now discrete sample set P from our domain, these terms are expressed as\n$$L_e = \sum_{p \in P} E(p + v\,D(p)) \cdot \left|\tfrac{\partial}{\partial v} D(p)\right|, \qquad L_s = \sum_{p \in P} E(p + v\,D(p)) \cdot \left|\tfrac{\partial}{\partial v^{\perp}} D(p)\right|. \quad (5)$$\nL_e directs deformation to low information content areas, while L_s discourages shearing for high information content regions. Note that while L_s would also penalise non-straight deformation seams, we tune our losses such that this only affects the output substantially if an area is sheared (e.g. for the balloons in Fig. 2, the deformation going around the balloons introduces shearing only right at the seam in an area with low information content).
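To make the discrete terms in Eq. (5) concrete: both can be evaluated with finite differences of D, which is also how the exact loss terms in the supplementary (Sec. 7) are phrased. The sketch below assumes `field` is the deformation MLP D and `energy` an evaluable energy function E on batches of points; the step size and the mean reduction are illustrative choices, not values taken from the paper.

```python
import torch

def content_losses(p, v, v_perp, field, energy, eps=1e-2):
    """Discrete L_e / L_s in the spirit of Eq. (5): energy at the deformed point
    times the finite-difference change of D along v (stretch) and v_perp (shear)."""
    d_p = field(p)                                   # D(p), shape (N,)
    p_deformed = p + v * d_p[..., None]              # lookup positions p + v * D(p)
    e = energy(p_deformed)                           # E at the deformed points, (N,)

    d_v = field(p + eps * v)                         # D(p + eps * v)
    d_vp = field(p + eps * v_perp)                   # D(p + eps * v_perp)

    l_e = (e * (d_v - d_p).abs() / eps).mean()       # penalise stretch / compression
    l_s = (e * (d_vp - d_p).abs() / eps).mean()      # penalise shearing
    return l_e, l_s
```
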
\nIn case of image deformation, neighbouring pixels with low energy in the input may translate to pixels widely apart in the output, potentially bypassing a substantial amount of (important) content. To overcome this, we want to penalise the amount of energy Ê between two points p, q in the target domain, rather than the energy at individual positions:\n$$\hat{E}(p, q) = \int_{p' \in [p, q]} E(p') \, dp' \quad (6)$$\nNow, we re-define L_e to be the product of the energy between points and their deformation change,\n$$L_e = \sum_{p \in P} \hat{E}\big(p + v\,D(p),\; p_\varepsilon + v\,D(p_\varepsilon)\big) \cdot \frac{|D(p) - D(p_\varepsilon)|}{\varepsilon} \quad (7)$$\nwhere p_\varepsilon := p + \varepsilon is a slightly offset p for small \varepsilon. This formulation guides the deformation to avoid introducing deformation to high information content regions or folding over them, while L_s discourages non-local deformations that would shear data with high information content.\nSanity objectives To further prevent undesired outputs, we introduce two additional losses. Assume we shrink the input by a factor α ∈ [0, 1]:\n1. The boundaries of the affected region should not change, e.g. an image should not be simply cropped. Based on α, the deformation field should map the start/end of the deformation axis in the output to the start/end of the input. For P_0, P_1 containing all points at the start/end of the deformation axis of the output, we express this requirement as follows:\n$$L_b = \int_{p \in P_0} |D(p)| \, dp + \int_{p \in P_1} |D(p) - (1 - \alpha)| \, dp \quad (8)$$\n2. We additionally demand monotonicity in the deformation field, expressed as\n$$L_m = \int_{p \in P} \max\left(0, -\tfrac{\partial}{\partial v} D(p)\right) dp \quad (9)$$\nindicating that any change in the deformation direction should be positive, i.e. the deformation D(p) should be monotonic and only stagnate or increase along v. This prevents the repetition of the same image portion ("jumping back and forth" through our deformation), which might result in bad optima.\nWe then obtain the total loss L with weights λ as\n$$L = \lambda_e L_e + \lambda_s L_s + \lambda_b L_b + \lambda_m L_m. \quad (10)$$\nTo promote convergence and avoid undesirable local optima, we always initialise our deformation function with a uniform stretch to the target size. In the following, we demonstrate the application of this general formulation to different domains, adapting the energy formulation. All implementation and architecture details are given in Sec. 6 and the exact loss terms in Sec. 7.\nFigure 6. Applying our retargeting to 3D meshes." }, { "figure_ref": [], "heading": "Optimising Deformation Fields for Images", "publication_ref": [], "table_ref": [], "text": "To ensure consistency with our continuous approaches, we first train a continuous image representation MLP I to serve as our input data, mapping pixel coordinates to colours. We then train a neural network to learn an energy field E, and a cumulative energy field Σ that reproduces Ê with an MLP. The difference in cumulative energy Σ between two points p, q on the same axis v is then the absolute difference in information content between them:\n$$\hat{E}(p, q) \approx |\Sigma(q) - \Sigma(p)| \quad (11)$$\nFor all objectives, we simply use discrete approximations of the loss terms introduced in Sec. 3.1.\nThe whole pipeline is visualised in Fig. 2, and extra results can be found in Tab. 4." }, { "figure_ref": [], "heading": "Optimising Deformation Fields for Neural Radiance Fields", "publication_ref": [ "b2" ], "table_ref": [], "text": "A naive extension of the energy term from 2D to 3D as the product of density and change in colour, as given by the NeRF, has two major issues: change of colour and density may not be well-defined for single points (e.g. consider mip-NeRF [3] only using volumes), while a large number of samples are in empty space and hence unimportant. To address this, we propose in the following a strategy to optimise for points on the surface instead, as visualised in Fig. 3:
1. Extracting a Sparse Set of Representatives Using the RGB and depth images obtained by rendering the NeRF from the training dataset positions, we extract the surface point cloud that, by definition, contains the points with the most significant role in the deformation. We then compute the energy value for each point corresponding to a pixel as a sum of the colour change in the RGB image and the depth change in the depth image. As this leads to highly noisy energy values, we apply smoothing by taking the minimum energy for each point in a neighbourhood radius of 5 points." }, { "figure_ref": [], "heading": "Defining an Energy Function", "publication_ref": [], "table_ref": [], "text": "Since colour change alone is not a sufficient energy formulation, our defined energy combines colour change in the rendered image and change in the depth image. Hence, each point contributing to the rendered image is equipped with its corresponding energy value. To obtain a continuous energy function that we can evaluate for every point in the domain, we train an auxiliary energy network E_net. The network maps all 3D points to their energy values, and clamps the energy to zero for all points further than twice the average neighbour distance. The resulting energy field has zero energy in empty space and inside objects, and non-zero energy values for points near the surface. Following the deformation axis, we additionally learn a cumulative energy field, expressed as a network E_cnet. The network is trained by shooting rays in the direction of deformation, and accumulating energy values from E_net from uniformly sampled points. Having the cumulative energy field, we can implicitly compute the energy for a segment between two points along v." }, { "figure_ref": [ "fig_2" ], "heading": "Inverting the Deformation", "publication_ref": [], "table_ref": [], "text": "We define an inverse deformation function to determine those points that lie on the surface after deformation. In contrast to images, we only perform optimisation on a sparse set of points, namely surface points. As we optimise a deformation from the target space to the source space, we need the position of surface points in both spaces. By training an MLP U to reverse the deformation, i.e. minimising $(U(D(x)) - x)^2$, we can effortlessly extract surface points, and hence their energy values, in the target space. This relation is also visualised in Fig. 3. We always update U once after training one iteration of D.\nWe therefore adapt our general approach from Sec. 3.1 for NeRFs as follows:\n• We train a continuous energy function E from sparse samples on the NeRF surface, then train a cumulative energy function Σ from that.\n• We use an inverse deformation network U to find points in the target space that map to surface points in the source space.\n• We apply our regularisation terms on a mix of surface and random points to optimise our loss function.\nAs we use a discrete set of points, we apply the losses from Sec. 3.1 up to Eq. (10) in discrete form, i.e. as a mean over a random subset of sample points. Example images from deformed scenes can be found in Fig. 5, while a larger number of results can be found in Sec. 10." }, { "figure_ref": [], "heading": "Optimising Deformation Fields for Meshes", "publication_ref": [ "b43" ], "table_ref": [], "text": "To show the versatility of our approach, we also apply it to meshes, only redefining the energy term and using surface samples and vertex positions to optimise. For objects like chairs, we used curvature, and for the example in Fig. 6, we simply set the energy for the floor to 0 and to 1 for everything else.\nFigure 7. We demonstrate different editing operations by simply adding and adapting the loss formulation (always input on top, output on the bottom, green indicating a given input mask). Left: removing an object; right: moving an object. Images from [44]." },
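The alternating scheme above (one optimisation step for D, followed by one regression step that keeps the inverse network U consistent with it) can be sketched as follows. This is an illustrative reconstruction: the inverse objective is stated in slightly different forms in Sec. 3.3 and Sec. 7, so the exact composition used here, warping U(p) back onto p, is our assumption, as are the optimiser handling and sampling details.

```python
import torch

def warp(p, v, d_field):
    """Full deformation: target-space point p -> source-space lookup p + v * D(p)."""
    return p + v * d_field(p)[..., None]

def training_step(d_field, u_field, opt_d, opt_u, surface_src, v, retarget_loss):
    """One update of the deformation field D, then one step for the inverse net U."""
    # locate target-space points that currently warp onto the source surface points
    with torch.no_grad():
        pts_target = u_field(surface_src)

    # 1) optimise D on the retargeting objective (Eq. (10)) at those points
    opt_d.zero_grad()
    loss = retarget_loss(pts_target)
    loss.backward()
    opt_d.step()

    # 2) one regression step so that warp(U(p_src)) stays close to p_src
    opt_u.zero_grad()
    inv_loss = ((warp(u_field(surface_src), v, d_field) - surface_src) ** 2).mean()
    inv_loss.backward()
    opt_u.step()
    return loss.item(), inv_loss.item()
```
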
{ "figure_ref": [], "heading": "Other Applications", "publication_ref": [], "table_ref": [], "text": "While retargeting is our main contribution, just as for seam carving [2], our framework opens up multiple different editing opportunities that even extend seam carving capabilities. While all details can be found in Sec. 9, the following exemplary operations were implemented through minor changes in the loss formulation, are applicable to any domain, and could form the basis for further editing methods or user-driven applications. Examples can be found in Fig. 7.\nRemoval To remove a part of a scene, we remove the monotonicity loss, use the same size for input and output, and add another loss term that penalises every deformation target that leads to the region that should be removed. Hence, points originally mapping to the target region will be distorted to neighbouring regions.\nMoving an Object Although classic seam carving cannot trivially move an object, since seams are assumed to cross the whole image, our formulation has no such restriction. We remove the monotonicity term, then enforce the deformation to point at the target feature from the desired new place of the object and punish it occurring anywhere else to avoid just repeating the object." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b43" ], "table_ref": [], "text": "As retargeting quality is difficult to measure and comparable methods are lacking for e.g. meshes, we compare, as the backbone of our evaluation, our results to seam carving [2] on images using the RetargetMe [44] benchmark for image retargeting. In 3D, lacking an alternative, we compare with the video-seam-carved animation frames of the same scene. To allow a fairer comparison with video seam carving, we only pan down the camera instead of rotating around the object (e.g. spinning the camera around an object would be disadvantageous for video seam carving). We always produce a short, 3-second clip at 30 fps." }, { "figure_ref": [ "fig_3" ], "heading": "Comparisons", "publication_ref": [ "b43" ], "table_ref": [], "text": "Quantitative Comparison In accordance with recent publications in the generative model and image editing domain [15, 21, 38], we compare our results and seam carving results with the original data using FID [19] in Tab. 1 for images (a minimal sketch of this computation is given below). While we observe the results to align with our perception, we also validate this on a qualitative level.\nQualitative Comparison As image quality is a subjective criterion and may not align with our quantitative results, we validate our quantitative results by having humans indicate which retargeted version they prefer. For this, we conducted a simple user study in which participants were asked to indicate which retargeted version of a randomly chosen image, for a randomly chosen deformation axis, they preferred, always choosing between our and the seam carved [2] version. We used the 80 images of the RetargetMe [44] dataset for 2D, producing 360 variations per approach. 
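The FID numbers referenced above can be reproduced in spirit with an off-the-shelf implementation. The snippet below assumes the torchmetrics implementation of FID and uint8 image tensors; the paper does not state which implementation or preprocessing was used, so treat this purely as an illustration of the comparison.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

def fid_between(original_images: torch.Tensor, retargeted_images: torch.Tensor) -> float:
    """FID between a set of original images and their retargeted counterparts.
    Both tensors are expected as uint8 with shape (N, 3, H, W)."""
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(original_images, real=True)       # reference distribution
    fid.update(retargeted_images, real=False)    # retargeted results
    return float(fid.compute())
```
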
For retargeting quality of 3D scenes given as NeRFs, we compare our results with what we consider to be the nearest-of-kin approach, video seam carving [42]. We use a 50% and a 150% version for each axis of our provided GTA V NeRF retargeting dataset. The results can be found in Tab. 2 (2D images) and Tab. 3 (3D scenes).\nVisual Ablation We provide a simple visual ablation that shows how our method requires all of its loss terms in Fig. 9. We observed similar behaviour for 3D NeRFs and meshes." }, { "figure_ref": [], "heading": "Methodological Comparison", "publication_ref": [], "table_ref": [], "text": "Compared to classic seam carving, our approach provides a superset of all possible solutions from discrete seam carving, as e.g. any removed seam can simply be expressed as a jump in the output of our deformation network. Hence, we extend the space of possible solutions. Our approach also tends to be less "jaggy" compared to individually removed seams, as neural networks fall into optima that are continuous rather than changing with high frequency. While removal of individual seams one after the other also prevents shearing by construction, we do not punish shearing that occurs in unimportant areas, gaining flexibility. Especially compared to video seam carving, our approach excelled in 3D, as we are able to produce geometry aware results." }, { "figure_ref": [], "heading": "Discussion and Limitations", "publication_ref": [ "b3", "b9", "b45", "b44" ], "table_ref": [], "text": "Successors to seam carving focused on adding clever regularisations [4,10,51] or better energy terms [46]. We, in contrast, only relied on the colour gradient for our comparison to the baseline, showing our advantage without any improved energy formulation, and focused on redefining and generalising the approach itself rather than exploring e.g. energy terms, which our approach can incorporate by design. However, applying our approach to images where the assumption that important parts have a high colour gradient breaks down will produce flawed outputs, just as seam carving does. This could be addressed by using a loss function similar to the discriminator from SinGAN [45] or using a more complex measure for information content, as also partially explored with a face detector in [2]. Further, we do not learn a general deformation network but have to newly optimise a neural field for each individual input; however, this also means we do not require any training data other than our input image. Lastly, regarding computation time, our approach is slightly slower than seam carving for images, which we attribute to a more global solution, but excelled in 3D, as e.g. video seam carving on a 600 by 400 pixel image can take multiple hours, while our approach produces a retargeted 3D scene within 15 minutes." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We present an approach that generalises the principle underlying seam carving to create image variants by manipulating its low-content areas. 
By training neural deformation fields that avoid applying distortion to important parts of an input, our approach is applicable to different editing objectives, centred around visual data retargeting.\nOur formulation is largely domain invariant and only needs minor alterations to work on different types of visual data, as we demonstrated through its application to neural radiance fields and even meshes, which is novel for approaches from the seam carving family, as these do not operate in continuous domains. Compared to the existing idea of seam carving, our approach offers increased flexibility by using continuous deformation fields instead of discrete decisions for seams, while bringing compatibility e.g. for the use of more complex energy formulations in the future." }, { "figure_ref": [], "heading": "Retargeting Visual Data with Deformation Fields", "publication_ref": [], "table_ref": [], "text": "Supplementary Material\n6. Architecture We use simple components for our architecture, with a basic MLP with a residual connection being the core of all our networks. We use the Adam optimiser [22] in all instances, without weight or learning rate decay, and the default learning rate of 0.001 if not specified otherwise." }, { "figure_ref": [ "fig_4" ], "heading": "Architecture for Images", "publication_ref": [], "table_ref": [], "text": "For the neural field holding the images as described in Sec. 3.2, we apply the network depicted in Fig. 10, a simple MLP with 192 channels, a residual connection, and positional encoding that outputs 3 scalars for the RGB value. For the deformation and the cumulative gradient network, we use the same network with 64 channels and a single output scalar value. We use a learning rate of 0.0001 for expanding the image. We train the network learning the image itself and the network learning the cumulative gradient for 250 epochs with 100 iterations each, while we initialise the deformation network for 50 epochs with a uniform transformation that we then train on our loss defined in Sec. 7. Our slim architecture allows optimisation of all of the image data in one single batch." }, { "figure_ref": [], "heading": "Architecture for Neural Radiance Fields", "publication_ref": [ "b28", "b2" ], "table_ref": [], "text": "We use a pre-trained, customised iNGP [31] as our learned 3D scene (both vanilla [29] and mip-NeRF [3] worked just as well in our early tries), then use the same setup as for images. We train initial deformation, energy, and cumulative energy for 5000 iterations each, always using 10000 samples. We optimise the deformation field for 10000 random points from the surface and 10000 random points from the boundary for 50 epochs with 100 iterations each, and always choose the network parameters from the epoch with the best average loss." }, { "figure_ref": [], "heading": "Exact Loss Formulations", "publication_ref": [], "table_ref": [], "text": "For L_e, we define:\n$$L_e = |E_{cnet}(p) - E_{cnet}(p_\epsilon)| \cdot \frac{|D(p) - D(p_\epsilon)|}{\epsilon} \quad (12)$$\nwhere E_cnet is the MLP for cumulative energy and p_\epsilon are all pixels p moved by one pixel in direction v. 
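In code, this L_e term is a product of two finite differences, one through the cumulative energy MLP and one through the deformation MLP, evaluated at every output pixel and aggregated here with a mean (as done for the NeRF variant in Eq. (17)). The sketch below assumes `sigma` is the trained cumulative energy network (E_cnet / Σ) and `d_field` the deformation network, both taking normalised 2D coordinates; these names and the aggregation are illustrative assumptions.

```python
import torch

def image_l_e(pixels, v, d_field, sigma, eps):
    """L_e of Eq. (12): |Sigma(p) - Sigma(p_eps)| * |D(p) - D(p_eps)| / eps,
    with p_eps = p shifted by one pixel (eps in normalised coordinates) along v."""
    shifted = pixels + eps * v
    # Note: this follows Eq. (12) literally; Eq. (7) would instead query Sigma
    # at the deformed lookup positions p + v * D(p).
    e_diff = (sigma(pixels) - sigma(shifted)).abs()
    d_diff = (d_field(pixels) - d_field(shifted)).abs()
    return (e_diff * d_diff / eps).mean()
```
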
For shearing, for energy net E and p_\epsilon^\perp being all points p moved by one pixel in the direction orthogonal to v, we define:\n$$L_s = E(p) \cdot \frac{|D(p) - D(p_\epsilon^\perp)|}{\epsilon} \quad (13)$$\nFor the boundary, we build our loss around the pixels on the one end of the image, p_l, and on the other, p_r, regularising their deformation to be 0 at the one end and (1 - α) at the other:\n$$L_b = \mathrm{mean}(|D(p_l)|) + \mathrm{mean}(|D(p_r) - (1 - \alpha)|) \quad (14)$$\nFor monotonicity in images, we use:\n$$L_m = \frac{\mathrm{mean}(\max(0, D(p) - D(p_\epsilon)))}{\epsilon} \quad (15)$$\nAs weights for our loss terms, we use λ_e = 10000, λ_s = 250, λ_b = λ_m = 10000. For the cumulative energy net, we accumulate energy over all pixels, then train the MLP to minimise the mean squared error between input (2D position) and output (cumulative energy for that position) for 10000 iterations, always for all pixels." }, { "figure_ref": [], "heading": "Loss Formulation for Neural Radiance Fields", "publication_ref": [], "table_ref": [], "text": "We first define a mixture of surface points and uniform random points p' in source space on which we run our optimisation, then find those points p in target space that are deformed to become p':\n$$p = U(p') \quad (16)$$\nFor L_e, we define:\n$$L_e = \frac{\mathrm{mean}(|E_{cnet}(p) - E_{cnet}(p_\epsilon)| \cdot |D(p) - D(p_\epsilon)|)}{\epsilon} \quad (17)$$\nwhere E_cnet is the MLP for cumulative energy and p_\epsilon are our points p moved by an offset in direction v. For shearing, for energy net E and p_{\epsilon_1}^\perp, p_{\epsilon_2}^\perp being all points p moved by an offset in the directions orthogonal to v, we define:\n$$L_s = E(p) \cdot \frac{|D(p) - D(p_{\epsilon_1}^\perp)|}{\epsilon} + E(p) \cdot \frac{|D(p) - D(p_{\epsilon_2}^\perp)|}{\epsilon} \quad (18)$$\nFor the boundary, we build our loss around the points on the one end, p_l, and the other, p_r, regularising their deformation to be 0 and (1 - α), respectively:\n$$L_b = \mathrm{mean}(|D(p_l)|) + \mathrm{mean}(|D(p_r) - (1 - \alpha)|) \quad (19)$$\nFor monotonicity, we use:\n$$L_m = \frac{\mathrm{mean}(\max(0, D(p) - D(p_\epsilon)))}{\epsilon} \quad (20)$$\nAs weights for our loss terms, we use λ_e = 10, λ_s = 0.1, λ_b = 100, λ_m = 1. After every iteration, we also update U with all points p, i.e. we minimise $(D(U(p)) - p)^2$ for one step. For the cumulative energy net, we accumulate energy by shooting random rays along the deformation direction with 100 uniformly distributed samples. We gather and accumulate energy from E_net, then train the cumulative energy MLP to minimise the mean squared error between input (3D position) and output (cumulative energy for that position on that ray) for 10000 iterations, always for 10000 points." }, { "figure_ref": [], "heading": "Expansion", "publication_ref": [], "table_ref": [], "text": "Expansion of visual data follows the same idea as for seam carving [2]: Instead of compressing (in seam carving: removing) low energy parts, we stretch (in seam carving: double) them. Examples can be seen in Sec. 10. To expand data, e.g. to increase its width to 150 percent, we can use the exact same losses as for retargeting to a smaller size, but with one additional loss term and a minor modification:\nWe require a limit on the change in the deformation to avoid only duplicating a single seam, meaning we punish any change in the deformation that is too large and hence would repeat the same part of the image over and over again. Seam carving prevents this for image expansion by only doubling a seam once. As a new loss term, this gives us:\n$$L_{cap} = \mathrm{mean}\left(\max\left(0, \frac{|D(p) - D(p_\epsilon)|}{\epsilon} - 1\right)\right) \quad (21)$$\nThis prevents more than doubling any part of the input. In addition, we modify L_e to not punish any information content skipped between two points (as we no longer skip over anything), but instead simply punish the energy directly, i.e. 
replacing Ê from Eq. (6) with E applied to only one point." }, { "figure_ref": [], "heading": "Editing", "publication_ref": [], "table_ref": [], "text": "For the editing procedures described in Sec. 3.5, we can apply the same concept in every domain. For this, we always retain the same image size.\nRemoval To remove an object, we punish every point that maps to the content that we want to remove, i.e. take all points of the input, deform them, and look up whether the deformed point lies within the marked area. For continuity, we do this by training another network net_mask to learn a mask, i.e. 0 for regions outside, 1 for regions inside the area to remove. We then simply add mean(net_mask(p)) for all points p to the loss function, punishing any deformation that maps inside the target area. To allow both sides of the removed content to fill in the space, we disable the monotonicity loss L_m. The remaining loss terms then keep the rest of the image in place." }, { "figure_ref": [], "heading": "Moving an object", "publication_ref": [], "table_ref": [], "text": "To move an object, we first remove the monotonicity loss L_m to allow more complex deformations. As this disables protection against repetition, we then actively punish our target object occurring anywhere else, i.e. we use a network net_mask containing a binary mask of the object to move, then add net_mask(p) for all points p in the output to the loss function, except for the target coordinates of the object. We then add a term to the loss that enforces the exact offset value that would place the target at the right location for all parts of it. Lastly, to avoid any other unwanted edits and make the optimisation more stable, we add a last loss term scaled by 0.01 that punishes the absolute mean of the deformation, i.e. punishes applying any deformation at all. While this avoids unnecessary deformations, it is negligible compared to the additional loss caused by only the part relevant for the deformation.\nTo move an object along multiple axes, we decompose the movement to the target position into two axes, then apply our approach for each direction.\nTab. 5. More examples of our deformation approach on 3D NeRF scenes." }, { "figure_ref": [], "heading": "Additional Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": " ", "publication_ref": [ "b43" ], "table_ref": [], "text": "Tab. 4. More examples of our deformation approach on images, all from the RetargetMe dataset [44]." } ]
Seam carving is an image editing method that enables content-aware resizing, including operations like removing objects. However, its seam-finding strategy, based on dynamic programming or graph cuts, limits its applicability to broader visual data formats and restricts the degrees of freedom available for editing. Our observation is that describing the editing and retargeting of images more generally by a displacement field yields a generalisation of content-aware deformations. We propose to learn a deformation with a neural network that keeps the output plausible while trying to deform it only in places with low information content. This technique applies to different kinds of visual data, including images, 3D scenes given as neural radiance fields, or even polygon meshes. Experiments conducted on different visual data show that our method achieves better content-aware retargeting compared to previous methods.
Retargeting Visual Data with Deformation Fields
[ { "figure_caption": "OptimiseFigure 2 .Figure 3 .23Figure2. Our proposed pipeline for retargeting visual data: For a given input (left), we train two simple networks that learn the energy and cumulative energy along the deformation axis of the input (centre-left). We then initialise a network that stretches samples to the desired position (centre-right), then optimise this deforms to distribute the distortion to low information content regions (right). Image from[44].", "figure_data": "", "figure_id": "fig_0", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Example of applying retargeting to 75 percent width.Seam carving fails to use seams that are non continuous or with a too steep angle, not properly using the empty space in the shelf to close the gaps before shrinking the bookshelves content.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Applying our retargeting to NeRFs. Top row: Input, stretched, retargeted with our method. Bottom row: Input, folds on the input suggested by our approach, retargeted with our method.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. Visual ablation for retargeting to 80 percent width on an image[1], from top left, clockwise: Wobbly contours (no Ls), only uniform stretch (no Le), swallowed features (no Lm), no boundary (no L b ).", "figure_data": "", "figure_id": "fig_3", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 .10Figure10. The simple architecture we apply for learning the neural image field, the deformation field(s), energy network and cumulative energy network. We use positional encoding (yellow), then apply linear layers (cyan) with a residual connection, LeakyReLu (black arrow), and an output depending on context (red, with sigmoid for images, otherwise LeakyReLu).", "figure_data": "", "figure_id": "fig_4", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "7. 1 .1Loss Formulation for ImagesDue to out light weight network we use for our deformation field, we can apply our loss terms directly to all points p in the output domain (i.e. 
the shrunken image), training one single batch for an image.", "figure_data": "", "figure_id": "fig_5", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Seam carving 85.75 66.41 51.67 35.20 23.80 52.57 20.86 30.01 37.38 43.45 46.37 35.61 Ours 79.39 58.06 43.69 30.53 21.71 46.68 11.17 19.42 29.19 37.87 49.74 29.47 Seam carving 116.70 84.33 60.59 40.86 24.92 65.48 23.04 33.58 40.82 48.49 52.59 39.70 Ours 89.44 67.77 49.62 35.59 25.42 53.57 13.55 21.02 27.55 38.20 46.40 29.00 Comparison of FID scores on the RetargetMe [44] dataset, for retargeting x (top) and y (bottom) axis to a smaller (left) and bigger (right) size.", "figure_data": "x FID(↓)50%60%70%80%90% mean110% 120% 130% 140% 150% meany FID(↓)50%60%70%80%90% mean110% 120% 130% 140% 150% meanRetargeting widthOursSCDraw50 %48.48 % 20.87 % 30.67 %150 %33.8 %36.27 % 29.92 %Retargeting heightOursSCDraw50 %49.5 %24.25 % 26.25 %150 %45.84 %20.0 %34.15 %Total44.57 %25.1 %30.32 %X Axis to 50 %X Axis to 150 %SceneOurs VSC Draw Ours VSC DrawBeach17312010Farm21002100Harb.21002100Y Axis to 50 %Y Axis to 150 %SceneOurs VSC Draw Ours VSC DrawBeach19111911Farm16322100Harb.21002100", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "User study comparing retargeting a scene, then recording it (ours), to recording it and then retargetet it (video seam carving [42], VSC), using our provided dataset. 21 participants were asked to indicated which solution they prefer, casting 252 votes in total.", "figure_data": "", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" } ]
Tim Elsner; Julia Berger; Tong Wu; Victor Czech; Lin Gao; Leif Kobbelt
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Newton2 at English Wikipedia", "year": "2007" }, { "authors": "Shai Avidan; Ariel Shamir", "journal": "ACM Trans. Graph", "ref_id": "b1", "title": "Seam carving for contentaware image resizing", "year": "2007" }, { "authors": "Jonathan T Barron; Ben Mildenhall; Matthew Tancik; Peter Hedman; Ricardo Martin-Brualla; Pratul P Srinivasan", "journal": "", "ref_id": "b2", "title": "Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields", "year": "2021" }, { "authors": "Tali Dekel Basha; Yael Moses; Shai Avidan", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b3", "title": "Stereo seam carving a geometrically consistent approach", "year": "2013" }, { "authors": "Sema Berkiten; Maciej Halber; Justin Solomon; Chongyang Ma; Hao Li; Szymon Rusinkiewicz", "journal": "Comput. Graph. Forum", "ref_id": "b4", "title": "Learning detail transfer based on geometric features", "year": "2017" }, { "authors": "Hongrui Cai; Wanquan Feng; Xuetao Feng; Yan Wang; Juyong Zhang", "journal": "", "ref_id": "b5", "title": "Neural surface reconstruction of dynamic scenes with monocular rgb-d camera", "year": "2022" }, { "authors": "Zhiqin Chen; Hao Zhang", "journal": "", "ref_id": "b6", "title": "Learning implicit fields for generative shape modeling", "year": "2019" }, { "authors": "Weiming Dong; Ning Zhou; Jean-Claude Paul; Xiaopeng Zhang", "journal": "ACM Trans. Graph", "ref_id": "b7", "title": "Optimized image resizing using seam carving and scaling", "year": "2009" }, { "authors": "Weiming Dong; Guan-Bo Bao; Xiaopeng Zhang; Jean-Claude Paul", "journal": "J. Comput. Sci. Technol", "ref_id": "b8", "title": "Fast multi-operator image resizing and evaluation", "year": "2012" }, { "authors": "Weiming Dong; Ning Zhou; Tong-Yee Lee; Fuzhang Wu; Yan Kong; Xiaopeng Zhang", "journal": "IEEE Trans. Vis. Comput. Graph", "ref_id": "b9", "title": "Summarization-based image resizing by intelligent object carving", "year": "2014" }, { "authors": "Alexei A Efros; William T Freeman", "journal": "ACM", "ref_id": "b10", "title": "Image quilting for texture synthesis and transfer", "year": "2001" }, { "authors": "Alexei A Efros; Thomas K Leung", "journal": "", "ref_id": "b11", "title": "Texture synthesis by non-parametric sampling", "year": "1999" }, { "authors": "Stephan J Garbin; Marek Kowalski; Virginia Estellers; Stanislaw Szymanowicz; Shideh Rezaeifar; Jingjing Shen; Matthew Johnson; Julien Valentin", "journal": "", "ref_id": "b12", "title": "Voltemorph: Realtime, controllable and generalisable animation of volumetric representations", "year": "2022" }, { "authors": "Leon A Gatys; Alexander S Ecker; Matthias Bethge", "journal": "", "ref_id": "b13", "title": "Texture synthesis using convolutional neural networks", "year": "2015" }, { "authors": "Jiatao Gu; Shuangfei Zhai; Yizhe Zhang; Josh Susskind; Navdeep Jaitly", "journal": "", "ref_id": "b14", "title": "Matryoshka diffusion models", "year": "2023" }, { "authors": "Philipp Henzler; Tobias Niloy J Mitra; Ritschel", "journal": "", "ref_id": "b15", "title": "Learning a neural 3d texture space from 2d exemplars", "year": "2019" }, { "authors": "Philipp Henzler; Valentin Deschaintre; Niloy J Mitra; Tobias Ritschel", "journal": "ACM Trans. Graph", "ref_id": "b16", "title": "Generative modelling of BRDF textures from flash images", "year": "2021" }, { "authors": "Amir Hertz; Rana Hanocka; Raja Giryes; Daniel Cohen-Or", "journal": "ACM Trans. 
Graph", "ref_id": "b17", "title": "Deep geometric texture synthesis", "year": "2020" }, { "authors": "Martin Heusel; Hubert Ramsauer; Thomas Unterthiner; Bernhard Nessler; Sepp Hochreiter", "journal": "Advances in neural information processing systems", "ref_id": "b18", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "Yihua Huang; Yan-Pei Cao; Yu-Kun Lai; Ying Shan; Lin Gao", "journal": "", "ref_id": "b19", "title": "Nerf-texture: Texture synthesis with neural radiance fields", "year": "2023" }, { "authors": "Bahjat Kawar; Shiran Zada; Oran Lang; Omer Tov; Huiwen Chang; Tali Dekel; Inbar Mosseri; Michal Irani", "journal": "", "ref_id": "b20", "title": "Imagic: Text-based real image editing with diffusion models", "year": "2023" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b21", "title": "Adam: A method for stochastic optimization", "year": "2017" }, { "authors": "Johannes Kopf; Chi-Wing Fu; Daniel Cohen-Or; Oliver Deussen; Dani Lischinski; Tien-Tsin Wong", "journal": "ACM Transactions on Graphics", "ref_id": "b22", "title": "Solid texture synthesis from 2d exemplars", "year": "2007" }, { "authors": "Yu-Kun Lai; Shi-Min; Xianfeng Hu; Ralph R Gu", "journal": "ACM", "ref_id": "b23", "title": "Geometric texture synthesis and transfer via geometry images", "year": "2005" }, { "authors": "Feng Liu; Michael Gleicher", "journal": "ACM", "ref_id": "b24", "title": "Automatic image retargeting with fisheye-view warping", "year": "2005" }, { "authors": "Feng Liu; Michael Gleicher", "journal": "ACM", "ref_id": "b25", "title": "Video retargeting: automating pan and scan", "year": "2006" }, { "authors": "Jonathon Luiten; Georgios Kopanas; Bastian Leibe; Deva Ramanan", "journal": "", "ref_id": "b26", "title": "Dynamic 3d gaussians: Tracking by persistent dynamic view synthesis", "year": "2024" }, { "authors": "Lars Mescheder; Michael Oechsle; Michael Niemeyer; Sebastian Nowozin; Andreas Geiger", "journal": "", "ref_id": "b27", "title": "Occupancy networks: Learning 3D reconstruction in function space", "year": "2019" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "", "ref_id": "b28", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Marcos Thierry Pinheiro Moreira; S Cleison; Leandro A Santana; João Passos; Paulo Papa; Kelton Augusto; Pontara Da; Costa ", "journal": "", "ref_id": "b29", "title": "An end-to-end approach for seam carving detection using deep neural networks", "year": "2022" }, { "authors": "Thomas Müller; Alex Evans; Christoph Schied; Alexander Keller", "journal": "ACM Transactions on Graphics (ToG)", "ref_id": "b30", "title": "Instant neural graphics primitives with a multiresolution hash encoding", "year": "2022" }, { "authors": "Seung-Hun Nam; Wonhyuk Ahn; In-Jae Yu; Myung-Joon Kwon; Minseok Son; Heung-Kyu Lee", "journal": "IEEE Trans. Circuits Syst. 
Video Technol", "ref_id": "b31", "title": "Deep convolutional neural network for identifying seam-carving forgery", "year": "2021" }, { "authors": "Lakshmanan Nataraj; Chandrakanth Gudavalli; Tajuddin Manhar Mohammed; Shivkumar Chandrasekaran; B S Manjunath", "journal": "", "ref_id": "b32", "title": "Seam carving detection and localization using two-stage deep neural networks", "year": "2021" }, { "authors": "", "journal": "Rockstar North", "ref_id": "b33", "title": "Grand theft auto", "year": "2015" }, { "authors": "Jeong Joon Park; Peter Florence; Julian Straub; Richard Newcombe; Steven Lovegrove", "journal": "", "ref_id": "b34", "title": "DeepSDF: Learning continuous signed distance functions for shape representation", "year": "2019" }, { "authors": "Keunhong Park; Utkarsh Sinha; Jonathan T Barron; Sofien Bouaziz; Dan B Goldman; Steven M Seitz; Ricardo Martin-Brualla", "journal": "", "ref_id": "b35", "title": "Nerfies: Deformable neural radiance fields", "year": "2021" }, { "authors": "Keunhong Park; Utkarsh Sinha; Peter Hedman; Jonathan T Barron; Sofien Bouaziz; Dan B Goldman; Ricardo Martin-Brualla; Steven M Seitz", "journal": "ACM Trans. Graph", "ref_id": "b36", "title": "Hypernerf: a higherdimensional representation for topologically varying neural radiance fields", "year": "2021" }, { "authors": "William Peebles; Saining Xie", "journal": "", "ref_id": "b37", "title": "Scalable diffusion models with transformers", "year": "2023" }, { "authors": "Yicong Peng; Yichao Yan; Shenqi Liu; Yuhao Cheng; Shanyan Guan; Guangtao Bowen Pan; Xiaokang Zhai; Yang", "journal": "", "ref_id": "b38", "title": "Cagenerf: Cage-based neural radiance fields for generalized 3d deformation and animation", "year": "2022" }, { "authors": "Albert Pumarola; Enric Corona; Gerard Pons-Moll; Francesc Moreno-Noguer", "journal": "", "ref_id": "b39", "title": "D-NeRF: Neural Radiance Fields for Dynamic Scenes", "year": "2021" }, { "authors": "Hassan Stephan R Richter; Vladlen Abu Alhaija; Koltun", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b40", "title": "Enhancing photorealism enhancement", "year": "2022" }, { "authors": "Michael Rubinstein; Ariel Shamir; Shai Avidan", "journal": "ACM Trans. Graph", "ref_id": "b41", "title": "Improved seam carving for video retargeting", "year": "2008" }, { "authors": "Michael Rubinstein; Ariel Shamir; Shai Avidan", "journal": "ACM Trans. Graph", "ref_id": "b42", "title": "Multioperator media retargeting", "year": "2009" }, { "authors": "Michael Rubinstein; Diego Gutierrez; Olga Sorkine; Ariel Shamir", "journal": "ACM Transactions on Graphics (Proc. 
SIGGRAPH ASIA)", "ref_id": "b43", "title": "A comparative study of image retargeting", "year": "2010" }, { "authors": "Tamar Rott Shaham; Tali Dekel; Tomer Michaeli", "journal": "", "ref_id": "b44", "title": "Singan: Learning a generative model from a single natural image", "year": "2019" }, { "authors": "Eungyeol Song; Minkyu Lee; Sangyoun Lee", "journal": "IEEE Access", "ref_id": "b45", "title": "Carvingnet: Content-guided seam carving using deep convolution neural network", "year": "2019" }, { "authors": "Olga Sorkine; - Hornung; Marc Alexa", "journal": "", "ref_id": "b46", "title": "As-rigid-aspossible surface modeling", "year": "2007" }, { "authors": "Peng Wang; Lingjie Liu; Yuan Liu; Christian Theobalt; Taku Komura; Wenping Wang", "journal": "", "ref_id": "b47", "title": "Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction", "year": "2021" }, { "authors": "Li-Yi Wei; Marc Levoy", "journal": "ACM", "ref_id": "b48", "title": "Fast texture synthesis using treestructured vector quantization", "year": "2000" }, { "authors": "Guanjun Wu; Taoran Yi; Jiemin Fang; Lingxi Xie; Xiaopeng Zhang; Wei Wei; Wenyu Liu; Qi Tian; Wang Xinggang", "journal": "", "ref_id": "b49", "title": "4d gaussian splatting for real-time dynamic scene rendering", "year": "2023" }, { "authors": "Huisi Wu; Yu-Shuen Wang; Kun-Chuan Feng; Tien-Tsin Wong; Tong-Yee Lee; Pheng-Ann Heng", "journal": "ACM Trans. Graph", "ref_id": "b50", "title": "Resizing by symmetry-summarization", "year": "2010" }, { "authors": "Rundi Wu; Changxi Zheng", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b51", "title": "Learning to generate 3d shapes from a single example", "year": "2022" }, { "authors": "Rundi Wu; Ruoshi Liu; Carl Vondrick; Changxi Zheng", "journal": "", "ref_id": "b52", "title": "Sin3dm: Learning a diffusion model from a single 3d textured shape", "year": "2023" }, { "authors": "Patsorn Wenqi Xian; Varun Sangkloy; Amit Agrawal; Jingwan Raj; Chen Lu; Fisher Fang; James Yu; Hays", "journal": "", "ref_id": "b53", "title": "Texturegan: Controlling deep image synthesis with texture patches", "year": "2018" }, { "authors": "Tianhan Xu; Tatsuya Harada", "journal": "", "ref_id": "b54", "title": "Deforming radiance fields with cages", "year": "2022" }, { "authors": "Ziyi Yang; Xinyu Gao; Wen Zhou; Shaohui Jiao; Yuqing Zhang; Xiaogang Jin", "journal": "", "ref_id": "b55", "title": "Deformable 3d gaussians for high-fidelity monocular dynamic scene reconstruction", "year": "2023" }, { "authors": "Yu-Jie Yuan; Yang-Tian Sun; Yu-Kun Lai; Yuewen Ma; Rongfei Jia; Lin Gao", "journal": "", "ref_id": "b56", "title": "Nerf-editing: Geometry editing of neural radiance fields", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 394.61, 276.81, 150.5, 11.03 ], "formula_id": "formula_0", "formula_text": "p ′ = p + vD(p)(1)" }, { "formula_coordinates": [ 3, 389.12, 538.32, 155.99, 9.65 ], "formula_id": "formula_1", "formula_text": "E(p) = ||∇I(p)|| 2(2)" }, { "formula_coordinates": [ 3, 335.59, 605.16, 209.52, 23.16 ], "formula_id": "formula_2", "formula_text": "L C = p∈P [E(p + v • D(p)) • ||∇D(p)|| 1 ] dp(3)" }, { "formula_coordinates": [ 3, 480.39, 656.38, 64.72, 8.74 ], "formula_id": "formula_3", "formula_text": "E(p + v • D(p))" }, { "formula_coordinates": [ 4, 73.71, 592.59, 212.65, 51.36 ], "formula_id": "formula_4", "formula_text": "L C = p∈P E(p + v • D(p)) • ∂ ∂v D(p) + ∂ ∂v ⊥ D(p) dp(4)" }, { "formula_coordinates": [ 4, 325.37, 573.05, 219.74, 59.49 ], "formula_id": "formula_5", "formula_text": "L e = p∈P E (p + v • D(p)) • ∂ ∂v D(p) dp, L s = p∈P E (p + v • D(p)) • ∂ ∂v ⊥ D(p) dp.(5)" }, { "formula_coordinates": [ 5, 373.85, 237.5, 171.27, 26.14 ], "formula_id": "formula_6", "formula_text": "Ê(p, q) = p ′ ∈[p,q] E(p ′ ) dp ′ (6)" }, { "formula_coordinates": [ 5, 348.2, 307.86, 196.91, 50.54 ], "formula_id": "formula_7", "formula_text": "L e = p∈P Ê (p + vD(p), p ε + vD(p ε )) • |D(p) -D(p ε )| ε (7)" }, { "formula_coordinates": [ 5, 327.93, 572.27, 217.18, 23.16 ], "formula_id": "formula_8", "formula_text": "L b = p∈P0 |D(p)| dp + p∈P1 |D(p) -(1 -α)| dp (8)" }, { "formula_coordinates": [ 5, 360.22, 639.01, 184.9, 29.9 ], "formula_id": "formula_9", "formula_text": "L m = p∈P max 0, - ∂ ∂v D(p) dp(9)" }, { "formula_coordinates": [ 6, 93.27, 144.25, 193.09, 9.65 ], "formula_id": "formula_10", "formula_text": "L = λ e L e + λ s L s + λ b L b + λ m L m .(10)" }, { "formula_coordinates": [ 6, 120.23, 377.29, 166.13, 11.47 ], "formula_id": "formula_11", "formula_text": "Ê(p, q) ≈ |Σ(q) -Σ(p)|(11)" }, { "formula_coordinates": [ 12, 323.02, 254.98, 217.94, 22.31 ], "formula_id": "formula_12", "formula_text": "L e = |E cnet (p) -E cnet (p ϵ )| • |D(p) -D(p ϵ )| ϵ (12" }, { "formula_coordinates": [ 12, 540.96, 260.84, 4.15, 8.64 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 12, 361.2, 340.56, 183.91, 23.89 ], "formula_id": "formula_14", "formula_text": "L s = E(p) • |D(p) -D(p ⊥ ϵ )| ϵ(13)" }, { "formula_coordinates": [ 12, 316.56, 417.67, 228.55, 9.65 ], "formula_id": "formula_15", "formula_text": "L b = mean(|D(p l )|) + mean(|D(p l ) -(1 -α)|)(14)" }, { "formula_coordinates": [ 12, 348.07, 458.55, 197.04, 22.31 ], "formula_id": "formula_16", "formula_text": "L m = mean(max(0, D(p) -D(p ϵ ))) ϵ(15)" }, { "formula_coordinates": [ 12, 406.12, 644.2, 138.99, 11.03 ], "formula_id": "formula_17", "formula_text": "p = U (p ′ )(16)" }, { "formula_coordinates": [ 12, 315.13, 682.83, 229.98, 30.32 ], "formula_id": "formula_18", "formula_text": "L e = mean(|E cnet (p) -E cnet (p ϵ )| • |D(p) -D(p ϵ )|) ϵ(17)" }, { "formula_coordinates": [ 13, 106.94, 129.62, 179.42, 50.56 ], "formula_id": "formula_19", "formula_text": "L s = E(p) • |D(p) -D(p ⊥ ϵ 1 )| ϵ + E(p) • |D(p) -D(p ⊥ ϵ 2 )| ϵ (18)" }, { "formula_coordinates": [ 13, 59.75, 232.22, 226.61, 9.65 ], "formula_id": "formula_20", "formula_text": "L b = mean(|D(p l )| + mean(|D(p l ) -(1 -α)|) (19)" }, { "formula_coordinates": [ 13, 89.32, 271.92, 197.04, 22.31 ], "formula_id": "formula_21", "formula_text": "L m = mean(max(0, D(p) -D(p ϵ ))) ϵ(20)" }, { "formula_coordinates": [ 13, 88.74, 626.45, 197.62, 22.31 ], "formula_id": "formula_22", "formula_text": "L cap = mean(max(0, (p -p ϵ ) ϵ -1)))(21)" } ]
2023-11-22
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b32", "b15", "b23", "b18", "b1", "b7", "b4", "b30", "b35" ], "table_ref": [], "text": "Large Language Models (LLMs) have gained increasing prominence in artificial intelligence. The emergence of potent models such as ChatGPT (OpenAI 2022) and LLaMA (Touvron et al. 2023) has led to substantial influences on many areas like society, commerce, and research. However, LLMs still suffer from severe factual hallucination problems, i.e., LLMs can frequently generate unsupported false statements regarding factual information due to their lack of intrinsic knowledge (Ji et al. 2023). For example, in Figure 1, Chat-GPT fails to provide an accurate response to the query \"When is Frédéric Chopin's father's birthday?\" due to a wrong belief that Nicolas Chopin's birthday is on June 17, 1771. Factual hallucination poses a severe challenge for LLM applications, particularly in real-world situations where factual accuracy holds significance. Consequently, the endeavor to alleviate factual hallucinations in LLMs has become a research hotspot in NLP field (Liu et al. 2021;Kang and Hashimoto 2020).\nOn the other hand, Knowledge Graphs (KGs) store a substantial amount of high-quality factual information, which can significantly alleviate factual hallucination if incorporated with LLMs. For example, in Figure 1, we can retrofit the erroneous statement \"Nicolas Chopin was born on June 17, 1771\" by referring to the provided factual knowledge \"(Nicolas Chopin, date of birth, 1771-04-15T00:00:0)\" in Wikidata. Recent work has focused on integrating LLMs with KGs by retrieving the entities in the query within knowledge graphs. Then the obtained factual triples are utilized as an additional context for LLMs to enhance their factual knowledge (Baek, Aji, and Saffari 2023;Chase 2022). Unfortunately, these approaches are limited to retrieving factual knowledge relevant to entities explicitly mentioned within the given query. However, the fundamental capability of large language models involves intricate and multi-step reasoning. Such reasoning processes often necessitate the validation and augmentation of factual knowledge that may be employed during the reasoning process. For example, in the case shown in Figure 1, LLM fails to answer the question because it requires an intermediate knowledge about \"Nicolas Chopin was born on April 15, 1771\". However, such information does not refer to entities appearing in the query. As a result, previous approaches are inadequate in addressing the factual hallucination appearing in the reasoning processes of LLMs.\nIn this paper, we propose Knowledge Graph-based Retrofitting (KGR), a new framework that incorporates LLMs with KGs to mitigate factual hallucination during the entire reasoning process of LLMs. Instead of retrieving factual information from KGs using original queries, the main idea behind KGR is to autonomously retrofit the initial draft responses of LLMs based on the factual knowledge stored in KGs. However, achieving the above process is challenging because draft responses generated by large language models typically contain a mixture of various information about the reasoning process, making the extraction, verification, and revision of relevant knowledge in it very challenging. 
Therefore, the key to integrating Knowledge Graphs into the reasoning process of large models to mitigate factual hallucinations lies in efficiently extracting the information requiring validation from draft responses, querying and selecting relevant knowledge from the knowledge graphs, and using this To this end, KGR presents a LLMs-based framework to autonomously extract, validate and refine factual statements within the initial draft responses without any manual efforts. Specifically, given an input query and a draft response generated by the LLM that entails the reasoning process of how LLM resolves this problem, KGR will request a LLM to extract factual claims in the reasoning process that require verifying by KGs. As shown in Figure 1, given the draft response \"Frédéric Chopin's father is Nicolas Chopin, he was born on June 17, 1771.\", the claim extraction step will generate factual claims in it like \"Frédéric Chopin's father is Nicolas Chopin\" and \"Nicolas Chopin was born on June 17, 1771\". Then, KGR will identify critical entities in the extracted claims, retrieve relevant factual triples from knowledge graph about the entities, and use a LLM-based fact selector to identify fact triples relevant to the draft response. Subsequently, the retrieved factual knowledge is utilized to compare with the previously extracted factual claims from the draft to verify their correctness. Finally, LLMs are asked to retrofit the draft in accordance with the outcomes of factual verification. This process can be repeated multiple times to ensure that all facts in the generated answers align with the knowledge stored within the knowledge graph. In this way, our method can not only verify the fact in query and response but also the facts used during reasoning. Furthermore, because all phases in the procedure can be automatically executed using a large language model, our method doesn't need any external components and therefore is easy to implement.\nWe conduct experiments with three representative LLMs on three standard factual QA benchmarks with different levels of reasoning difficulty, including Simple Question (Bordes et al. 2015), Mintaka (Sen, Aji, and Saffari 2022) for complex reasoning, and HotpotQA (Yang et al. 2018) for open domain, multi-hop reasoning. Experiments show that KGR can significantly improve the performance of LLMs on factual QA benchmarks especially when involving complex reasoning processes, which demonstrates the necessity and effectiveness of KGR in mitigating hallucination.\nIn summary, the contributions are as follows:\n• We propose a new framework that incorporates LLMs with KGs to mitigate factual hallucination by effectively extracting, verifying, and refining factual knowledge in the entire reasoning process of LLMs. • We present an implementation of the above-mentioned procedure by executing all the above-mentioned steps using LLMs without introducing any additional efforts. • Experiments on 3 datasets and 3 different LLMs confirm that KGR can significantly mitigate the hallucination and enhance the reliability of LLMs." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b15", "b13", "b19", "b2", "b31", "b27", "b25", "b24", "b11", "b8", "b12", "b37", "b36", "b1", "b16", "b0" ], "table_ref": [], "text": "Hallucination Hallucination in Large Language Models has been a prominent research focus within the NLP community (Ji et al. 2023). 
Automated large-scale data collection processes are prone to collecting erroneous information, which can significantly impact the quality of the generated outputs (Gunasekar et al. 2023). Additionally, excessive repetition of certain data during training can introduce memory biases, further exacerbating the hallucination issue (Lee et al. 2022; Biderman et al. 2023). Imperfections in the encoder backbone and variations in decoding strategies also play a role in determining the extent of hallucination in LLM outputs (Tian et al. 2019). Recent studies have emphasized the importance of model output confidence as an indicator of potential hallucination occurrences (Manakul, Liusie, and Gales 2023).
Retrieval Augmentation To address hallucination issues in LLMs, two main categories of retrieval augmentation methods have been proposed, which can be summarized as \"retrieve before generation\" and \"retrieve after generation\". Retrieve-before-generation methods mainly focus on leveraging information retrieval (IR) to provide LLMs with additional information about the query. Along this line, UniWeb (Li et al. 2023b) introduces an adaptive method for determining the optimal quantity of referenced web text, Chameleon (Lu et al. 2023) leverages an assortment of tools, including search engines, to bolster the reasoning capabilities of LLMs, and WebGLM (Liu et al. 2023b) augments LLMs with web search and retrieval capabilities. One major limitation of these approaches is that the retrieved text is question-related and thus cannot guarantee the correctness of the question-unrelated portions of the generations. Retrieve-after-generation methods like RARR (Gao et al. 2023), PURR (Chen et al. 2023), and CRITIC (Gou et al. 2023) automatically edit model generations using evidence from the web. Our method leverages KGs as the knowledge base to retrofit the model-generated response while reducing hallucination risk.
KG-Enhanced LLM The Knowledge Graph is regarded as a dependable source of information and is consequently frequently employed to enhance model generations. Traditional approaches involve knowledge representations during the training phase, which often necessitates dedicated model architectures and model-specific training (Zhang et al. 2019, 2022). However, this incurs a substantial cost for contemporary LLMs. In recent years, many researchers have proposed injecting knowledge at inference time. For example, KAPING (Baek, Aji, and Saffari 2023), RHO (Ji et al. 2022), KITLM (Agarwal et al. 2023), and StructGPT (Jiang et al. 2023) retrieve knowledge from the KG and utilize it as additional input context for LLMs to enhance their generations. However, these methods only search for question-relevant information, which limits the overall performance. To the best of our knowledge, we are the first to incorporate knowledge graphs into model response retrofitting." }, { "figure_ref": [ "fig_0" ], "heading": "KGR: Autonomous Knowledge Graph-based Retrofitting", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce our proposed method KGR, which automatically mitigates factual hallucinations via a chain-of-verification process.
As shown in Figure 1, given a query and its draft response, KGR retrofits the response by 1) extracting claims from the draft answer that requires verification; 2) detecting entities in the claims that are critical for retrieving facts from knowledge graph; 3) retrieving relevant fact statements from the knowledge graph; 4) verifying the factual correctness of each extracted claim using the returned factual statements from the knowledge graph; 5) retrofitting the previous draft response based on the verification results. All these steps are autonomously executed using the large language model itself without additional manual efforts. And this process can be iterative and repeated multiple times to ensure that all facts in the generated answers align with the factual knowledge stored within the knowledge graph. In the following, we will describe each component in KGR respectively in detail." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Claim Extraction", "publication_ref": [ "b9" ], "table_ref": [], "text": "Given a generated draft response as input, claim extraction will extract all factual claims from previously generated drafts that require validation. The main idea behind claim extraction is that a draft response can frequently contain various factual statements that need to be verified. For the example in Figure 1, the draft response contains at least two factual statements, i.e., \"Frédéric Chopin's father is Nicolas Chopin\" and \"Nicolas Chopin was born on June 17, 1771\". Therefore, to make it possible for KG to verify these statements respectively, claim extraction decomposes the draft response to be atomic factual claims. Previous work has shown that large language models have strong abilities to extract various kinds of critical information from texts via in-context few-shot learning (Chern et al. 2023). Therefore in this paper, we leverage LLM itself to autonomously extract the claims in the generated draft response. As shown in Figure 2, we prompt LLM with a query and response pair, with the anticipation of receiving a list of decomposed factual claims. After extracting claims, entity detection identifies mentioned critical entities for knowledge graph retrieval." }, { "figure_ref": [ "fig_0" ], "heading": "Entity Detection and Knowledge Graph Retrieval", "publication_ref": [ "b5", "b14" ], "table_ref": [], "text": "Given a list of claims extracted from the draft response, entity detection will detect the critical entities mentioned in the claims. Then, we retrieve the detected entities' local subgraph from the KG and expressed it in the form of triples. The main idea behind entity detection and knowledge graph retrieval is that we need to identify entities in claims so as to retrieve the relevant knowledge in the KG. Meanwhile, we can ensure recalling relevant triples as much as possible by retrieving the local subgraph in the knowledge graph. For the example in Figure 1, we identify the entity Frédéric Chopin and its entity id Q1268, so we can search the identified entity to acquire knowledge relevant to Claim1 in the KG.\nPrevious methods (Brank, Leban, and Grobelnik 2017;Honnibal et al. 2020) rely on supervised fine-tuned models which necessitate training for specific knowledge graphs, resulting in poor generalization for different scenarios. Furthermore, it proves to be a challenging task to discern essential entities that merit fact selection.\nIn this paper, we prompt LLMs to detect entities. 
As illustrated in Figure 3, our approach shows powerful generalization ability by capitalizing on the information extraction capabilities of LLMs (Li et al. 2023a) through the utilization of few-shot prompt. Based on the few-shot prompts, we can make LLMs understand which entities merit fact selection.\nA comparative assessment of performance between the supervised fine-tuned model and LLM is presented in the Appendix.\nAfter detecting the entities, we retrieve the knowledge graph for the local subgraph and send it to fact selection in the form of triples. " }, { "figure_ref": [ "fig_3" ], "heading": "Fact Selection", "publication_ref": [ "b6" ], "table_ref": [], "text": "Given the retrieved triples based on the detected entities, fact selection will select relevant fact statements among them. The main idea behind fact selection is the limited ability of LLMs in long-context modeling (Liu et al. 2023a) and the constraint on the context window size of LLMs, which make it impractical to select critical triples at one time. In this paper, we partition retrieved triples into several chunks and leverage LLM itself to extract the critical triples in the retrieved triples respectively, illustrated in Figure 4. In this way, we can avoid introducing excessive irrelevant knowledge into claim verification.\nPrevious approaches (Cao et al. 2022) typically relied on text-to-SQL models to formulate query statements for interacting with KG. This method is challenging due to the requirement for substantial annotated training data, which also varies across different KGs. Additionally, it is still challenging to generate compilable and structured SQL, limiting the number of recalled triples. In contrast, our approach leverages the LLM's information extraction ability, to improve the recall of critical triples." }, { "figure_ref": [], "heading": "[ few-shot examples] Claim:", "publication_ref": [], "table_ref": [], "text": "Fré dé ric Chopin's father is Nicolas Chopin." }, { "figure_ref": [], "heading": "Retrieved triples:", "publication_ref": [], "table_ref": [], "text": "[0] (\"Fré dé ric Chopin\", \"father\", \"Nicolas Chopin\") [1] (\"Nicolas Chopin\", \"family name\", \"Chopin\") Useful Triples:\n[0] (\"Fré dé ric Chopin\", \"father\", \"Nicolas Chopin\") Once we have selected the critical triples, the claim verification will verify the factual correctness of claims and subsequently offer suggestions." }, { "figure_ref": [ "fig_4" ], "heading": "Claim Verification", "publication_ref": [], "table_ref": [], "text": "Given the critical triples selected by the fact selection, we utilize LLM to compare the model-generated claims with the factual information present in the KGs. The main idea behind claim verification is to propose a detailed revision suggestion for each claim, as retrofitting solely based on the selected knowledge may not convince LLMs. As illustrated in Figure 5, we employ LLMs to verify each claim and propose revision suggestions respectively based on the retrieved fact knowledge, so as to boost the execution of the following retrofitting step. Then, we send the claim verification result to LLM to ask it to retrofit the draft response accordingly." 
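To make the chunked fact-selection step concrete, the following is a minimal Python sketch. It mirrors the Claim / Retrieved triples / Useful Triples prompt layout shown in the example above, while the generic `llm` callable, the default chunk size, and the index-based parsing of the model's reply are illustrative assumptions rather than KGR's exact prompts or parsing code.

```python
from typing import Callable, List, Tuple
import re

Triple = Tuple[str, str, str]

def select_facts(llm: Callable[[str], str],
                 claim: str,
                 triples: List[Triple],
                 chunk_size: int = 20) -> List[Triple]:
    """Select triples relevant to `claim`, one chunk at a time, so that
    each prompt stays within the LLM's context window."""
    selected: List[Triple] = []
    for start in range(0, len(triples), chunk_size):
        chunk = triples[start:start + chunk_size]
        numbered = "\n".join(
            f'[{i}] ("{s}", "{p}", "{o}")' for i, (s, p, o) in enumerate(chunk)
        )
        # In practice a few-shot demonstration block would be prepended here.
        prompt = (
            f"Claim: {claim}\n"
            f"Retrieved triples:\n{numbered}\n"
            f"Useful Triples:"
        )
        reply = llm(prompt)
        # Keep every triple whose index, e.g. "[3]", is echoed in the reply.
        for i, triple in enumerate(chunk):
            if re.search(rf"\[{i}\]", reply):
                selected.append(triple)
    return selected
```

The chunk size is the main knob here: smaller chunks tend to raise the recall of critical triples at the cost of precision, a tradeoff examined in the error analysis later in the paper.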
}, { "figure_ref": [ "fig_4" ], "heading": "Response Retrofitting", "publication_ref": [ "b12", "b40" ], "table_ref": [], "text": "Given the verification of all claims, the response retrofitting step retrofits the generated draft response in accordance with the verification suggestions.\nIn this paper, we capitalize on the capabilities of LLMs for the purpose of retrofitting. This approach involves employing LLMs with a few-shot prompt, a strategy that has exhibited efficacy in prior researches (Gou et al. 2023;Zheng et al. 2023). As illustrated in Figure 5, we merge the entire KGR process into a singular prompt. This allows LLMs to leverage their in-context learning ability, comprehending the KGR process and enhancing their comprehension of factual retrofitting based on verification suggestions By following the cycle of \"Extraction -Detection -Selection -Verification -Retrofitting \", our KGR framework can be iterated multiple times to ensure all facts in the generated answers align with the factual knowledge stored within the knowledge graph." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b4", "b30", "b35" ], "table_ref": [], "text": "We evaluate our KGR framework on three datasets with different levels of reasoning difficulty, including Simple Question (Bordes et al. 2015), Mintaka (Sen, Aji, and Saffari 2022), and HotpotQA (Yang et al. 2018). We also compare KGR with information retrieval-based approaches and previous question-relevant knowledge graph retrieval approaches." }, { "figure_ref": [], "heading": "Experiment Settings", "publication_ref": [ "b4", "b35" ], "table_ref": [], "text": "Dataset and Evaluation We conduct experiments on three representative factual QA benchmarks, including:\n• Simple Question (Bordes et al. 2015) is a simple QA dataset that contains 100k questions constructed from Freebase knowledge graph. All questions in the dataset are simple, require no deep reasoning procedure, and can be easily answered as long as the correct evidence is retrieved. Therefore, we can evaluate the ability to retrieve relevant evidence in KG based on Simple Question. • Mintaka (Sen, Aji, and Saffari 2022) is a complex, natural and multilingual dataset, composing 20k questions collected in 8 different languages. We only use English test sets. In this setting, we focus on the ability to logically refine and revise its answers based on the evidence gathered. • HotpotQA (Yang et al. 2018) is a Wikipedia-based1 dataset with 113k questions that requires finding and reasoning over multiple supporting documents to answer, which are diverse and not constrained to any pre-existing knowledge bases or knowledge schemas. Therefore, we evaluate the KGR framework's robustness in handling generalized scenarios, requiring LLMs to answer involving the incorporation of both parametric knowledge and Knowledge Graph-based information.\nWe reported the results in terms of EM and F1 scores respectively on 50 samples from the validation set of each dataset. By comparing performance across these three datasets, we can evaluate how well different methods mitigate factual hallucinations and handle complex tasks." }, { "figure_ref": [], "heading": "LLMs and KG Implementation", "publication_ref": [], "table_ref": [], "text": "We evaluate the effectiveness of KGR on both close-source and open-source large language models. For close-source models, we evaluate on text-davinci-003 and ChatGPT (gpt-3. 
" }, { "figure_ref": [], "heading": "Baseline", "publication_ref": [ "b34", "b12", "b11", "b1" ], "table_ref": [], "text": "We compared our KGR framework with the following methods, including:\n• Vanilla, which adopts a straightforward approach to prompt the model to generate answers for the given question.\n• Chain of Thought (CoT) (Wei et al. 2022), which aims to generate more reliable answers by prompting LLMs to generate more comprehensive and detailed explanations for the generated answers. • Self-Correcting with Tool-Interactive Critiquing (CR-ITIC) (Gou et al. 2023), which revises the answer based on text from the web. Since CRITIC did not release their web crawling method, in this experiment we adopt the crawling pipeline provided in RARR (Gao et al. 2023) via Bing Search3 • Question-relevant Knowledge Retrieval method (QKR), which prompts LLMs with the question-relevant retrieved facts in the knowledge graph to generate answers. In this setting, we aim to demonstrate the superior effectiveness of our response-relevant retrofitting method over the question-relevant knowledge graph augmentation approach. The QKR method, inspired by KAPING (Baek, Aji, and Saffari 2023), leverages extracted facts from KGs as prompts to enhance response correctness. In our implementation, we replace the fact extracts process with our entity detection and fact selection to strictly compare the difference between the response-relevant and queryrelevant methods." }, { "figure_ref": [], "heading": "Overall Results", "publication_ref": [ "b29" ], "table_ref": [ "tab_4", "tab_5" ], "text": "As shown in Table 1, our method demonstrates significant superiority over other methods across various conditions.\n1) Our framework can mitigate large language model hallucination via Knowledge Graph-based Retrofitting and achieve significant improvements on 3 datasets.\nCompared with the CoT and CRITIC baseline, our KGR framework gains improvements on all three datasets. This indicates that our KG-based approach is more effective due to its reliance on a reliable knowledge base, whereas IR-based methods like CRITIC might introduce noise from external. Additionally, we observed that the CoT method performed worse than the vanilla approach in ChatGPT. This could be attributed to the CoT method's tendency to ask for more information, which is amplified in ChatGPT due to Reinforcement Learning from Human Feedback (Ouyang et al. 2022). 2) By verifying the facts used during reasoning via chainof-verification, our method can achieve significant performance improvement in complex reasoning tasks in Mintaka and HotpotQA datasets. As shown in Table 1, compared to the QKR method, our KGR framework achieves F1 improvement for at least 6.2 and 1.1 on Mintaka and HotpotQA. Both of them pose complex reasoning question-answering challenges, and the success of our method with chain-of-verification on these datasets demonstrates its capability to handle complex questions effectively. It is worth noting that the text-davinci-003 outperformed QKR in Simple Question. We attribute this to the fact that Simple Question consists of straightforward, one-hop questions, which makes the question-relevant method more effective. 2. We can find that the KGR framework outperforms both CoT and QKR, demonstrating the generalizability of our framework even leveraging a compact size LM. 
Moreover, the significant improvement with ChatGPT and text-davinci-003 shows the generalizability of both aligned LLMs and misaligned LLMs.\nIn summary, our method consistently outperforms other methods across various conditions and exhibits strong generalization ability. The results suggest that our KGR framework is more reliable and effective, especially in handling complex factual reasoning tasks. Furthermore, it showcases the robustness of our method in open-domain QA settings, where knowledge retrieval may be more challenging." }, { "figure_ref": [], "heading": "Case Study", "publication_ref": [], "table_ref": [], "text": "We present a multi-round retrofitting process of a multi-hop case which needs to be retrofitted iteratively in From this case, we show KGR's intermediate results, including atomic claim, critical triples, detailed verification, and iterative retrofitting. All these show the effectiveness of KGR, especially on reasoning with multi-hop complex tasks, verifying the feasibility of multi-turn retrofit to ensure that all facts in the generated answers align with the factual knowledge stored in the knowledge graph." }, { "figure_ref": [ "fig_6", "fig_6", "fig_6" ], "heading": "Error Analysis", "publication_ref": [ "b26", "b38" ], "table_ref": [], "text": "In order to gain a comprehensive understanding of the KGR approach, we conducted an exhaustive analysis of incorrect cases based on the Mintaka and Simple Question datasets. After carefully examining the errors, we identified which component causes revision failures. The outcomes of this analysis are visualized in Figure 6.\nOn closer inspection, it becomes apparent that inaccuracies within the KGR are primarily caused by entity detection and fact selection. Conversely, claim extraction, claim verification, and response retrofitting demonstrate higher reliability. All these findings highlight the significance of refining entity detection and fact selection for further improvements.\nOn the other hand, our analysis explored the error reason For the claim verification and response retrofitting, the focus shifts to the model's ability to adhere to the cues provided by the few-shot prompts. Effectively discerning and subsequently rectifying answers within this framework presents a central challenge. The process of fact selection encounters challenges in extracting essential triples from a collection of triples that include irrelevant information or noise.\nFor a deeper insight into the effectiveness of fact selection, we conduct experiments on it. The effectiveness of entity detection will be shown in Appendix.\nImpact of chunk size&numbers of retrieved triples. As discussed above, considering the limitation of maximum input length for LLMs, we partition the retrieved triples into chunks for fact selection. However, it is worth noting that incontext learning might be influenced by example order (Lu et al. 2022) and potentially following the last answer presented (Zhao et al. 2021). Under these motivations, we conduct experiments on the Simple Question using ChatGPT. Specifically, we evaluate the effectiveness of fact selection when retrieved triples are in random order, referring red point in Figure 7(a). 
These experiments help us understand fact selection behavior under various hyperparameters, optimize chunk size, and refine triple retrieval strategies for improved efficiency.\nAs shown in Figure 7(a), the chunk size has minimal impact on triple selection capability, except for a chunk size of 100, which may cause worse long-distance dependency modeling. However, reducing the chunk size leads to lower precision and higher recall scores. This indicates that a smaller chunk size increases the chance of selecting both critical and irrelevant triples. Additionally, we observe that prompting LLMs with triples in random orders doesn't significantly affect triple selection. As shown in Figure 7(b), increasing the number of retrieved triples has a gradual positive impact on recall but significantly reduces precision. More retrieved triples may boost recall for critical knowledge and introduces numerous irrelevant triples, potentially compromising the effectiveness of the claim verification and negating the benefits of the fact selection.\nAll in all, the experiments focusing on the impact of chunk size and numbers of retrieved triples show that the core difficulty of retrieving fact knowledge based on LLMs is the tradeoff between precision and recall. This observation points to future research on fact selection based on LLMs." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a knowledge graph-based retrofitting framework that effectively mitigates factual hallucination during the reasoning process of LLMs based on the factual knowledge stored in KGs.\nExperiment results show that KGR can significantly improve the performance of LLMs on factual QA benchmarks especially when involving complex reasoning, which demonstrates the necessity and effectiveness of KGR in mitigating hallucination and enhancing the reliability of LLMs. As for future work, we plan to improve the effectiveness in each step of our KGR framework." }, { "figure_ref": [], "heading": "Appendix Impact of Multi-Turn Retrofitting", "publication_ref": [ "b5", "b14" ], "table_ref": [], "text": "From the above case, it can be observed that one potential drawback of employing the search engine-based retrofitting approach is the potential inaccuracy in the retrieved information. This could consequently lead to an erroneous revision of factually correct claims. This problem becomes more pronounced during multi-turn retrofitting.\nTo investigate the impact of multi-turn retrofitting, we performed multiple rounds of iterative revisions on identical questions from the HotpotQA dataset using text-davinci-003 on both CRITIC and KGR. The outcomes are depicted in Figure 8. Our observations are as follows: (1) Most of the factual errors can be retrofitted in the first turn in our KGR framework. (2) Within our KGR framework, the effectiveness of the retrofitting process tends to remain consistent owing to the reliability of the underlying knowledge graph. However, the CRITIC method shows a tendency to exhibit poorer performance as the number of iteration turns increases. By checking the error cases, we note that due to the inherent randomness of information retrieval, the evidence recalled by the search engine for claim verification can vary between rounds. This divergence leads to varying outcomes, introducing disparities in perspectives and factual accuracy, which subsequently leads to fluctuations in the final refined responses. 
In conclusion, the reliability of the knowledge graph keeps KGR's multi-turn retrofitting stable, whereas IR-based revision tends to degrade as iterations accumulate.
Effectiveness with Entity Detection.
As outlined in the Method Section, entity detection is responsible for identifying entities that are relevant to claim verification. In Table 4, we present a comparison between our method and current widely-used approaches, Wikifier (Brank, Leban, and Grobelnik 2017) and SpaCy (Honnibal et al. 2020), to highlight the advantages of our proposed method. Wikifier employs a PageRank-based technique to identify a coherent set of relevant concepts, designed for Wikipedia entities. SpaCy offers a suite of functionalities such as tagging, parsing, named entity recognition, and text classification, leveraging recent advancements in both speed and neural network models. The results show that our LLM-based method delivers superior performance on the Mintaka dataset, validating our decision to leverage the extensive capabilities of LLMs. However, despite the remarkable achievements of LLMs in comparison to alternative techniques, their effectiveness in extracting entities for claim verification remains insufficient. This presents an opportunity for future research in this area. " } ]
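Putting the sections above together, the five KGR stages compose into a short iterative loop. The sketch below is a schematic reconstruction rather than the authors' implementation: each helper stands in for one LLM-prompted step or a knowledge-graph lookup, and the function names, signatures, and convergence check are assumptions made only for illustration.

```python
from typing import Callable, List, Tuple

Triple = Tuple[str, str, str]

def kgr_retrofit(question: str,
                 draft: str,
                 extract_claims: Callable[[str, str], List[str]],
                 detect_entities: Callable[[str], List[str]],
                 kg_neighbors: Callable[[str], List[Triple]],
                 select_facts: Callable[[str, List[Triple]], List[Triple]],
                 verify_claim: Callable[[str, List[Triple]], str],
                 retrofit: Callable[[str, str, List[str]], str],
                 max_rounds: int = 2) -> str:
    """Iterate Extraction -> Detection -> Selection -> Verification ->
    Retrofitting until the response stops changing or max_rounds is hit."""
    response = draft
    for _ in range(max_rounds):
        claims = extract_claims(question, response)           # claim extraction
        suggestions: List[str] = []
        for claim in claims:
            triples: List[Triple] = []
            for entity in detect_entities(claim):              # entity detection
                triples.extend(kg_neighbors(entity))           # local-subgraph retrieval
            facts = select_facts(claim, triples)               # fact selection
            suggestions.append(verify_claim(claim, facts))     # claim verification
        revised = retrofit(question, response, suggestions)    # response retrofitting
        if revised == response:                                # nothing left to fix
            return revised
        response = revised
    return response
```

Stopping once the response no longer changes is one simple realization of multi-round retrofitting; the appendix analysis above suggests most factual errors are already corrected in the first round.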
Incorporating factual knowledge from knowledge graphs is regarded as a promising approach for mitigating the hallucination of large language models (LLMs). Existing methods usually only use the user's input to query the knowledge graph, thus failing to address the factual hallucination generated by LLMs during their reasoning process. To address this problem, this paper proposes Knowledge Graph-based Retrofitting (KGR), a new framework that incorporates LLMs with KGs to mitigate factual hallucination during the reasoning process by retrofitting the initial draft responses of LLMs based on the factual knowledge stored in KGs. Specifically, KGR leverages LLMs to extract, select, validate, and retrofit factual statements within the model-generated responses, which enables an autonomous knowledge verifying and refining procedure without additional manual effort. Experiments show that KGR can significantly improve the performance of LLMs on factual QA benchmarks especially when complex reasoning processes are involved, which demonstrates the necessity and effectiveness of KGR in mitigating hallucination and enhancing the reliability of LLMs.
Mitigating Large Language Model Hallucinations via Autonomous Knowledge Graph-based Retrofitting
[ { "figure_caption": "Figure 1 :1Figure 1: An overview of KGR, our framework consists of five components: (1) claim extraction, (2) entity detection and KG retrieval, (3) fact selection, (4) claim verification, (5) response retrofitting. The core component of these five steps remains the LLM. Given a question and draft response as input, our framework can iteratively mitigate factual errors in LLM's response.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Example for the claim extraction in KGR, which decomposes the proposed answer into two atomic claims.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3: Example for the entity detection in KGR, which only extract the essential entities from the claim.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Example for the fact selection in KGR, in which LLMs are prompted to select critical items among all retrieved triples.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Example for the claim verification and response retrofitting in KGR. The claim verification judges whether the claim aligns with searched triples and gives revision suggestions respectively. The response retrofitting incorporates the revision suggestions from all claims and gives a refined response.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Impact of chunk size and numbers of retrieved triples on the efficacy of the fact selection.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "When is Fré dé ric Chopin's father's birthday? Proposed Answer: Fré dé ric Chopin's father is Nicolas Chopin, he was born on June 17, 1771. > Claim: [\"Fré dé ric Chopin's father is Nicolas Chopin\", \"Nicolas Chopin was born on June 17, 1771\"] >> Verify Claim: Fré dé ric Chopin's father is Nicolas Chopin. >> Searched triples in KG: [('Fré dé ric Chopin', 'father', 'Nicolas Chopin')]The evidence suggests that Fré dé ric Chopin's father is indeed Nicolas Chopin.", "figure_data": ">> Verify Claim: Nicolas Chopin was born on June 17, 1771.>> Searched triples in KG: [('Nicolas Chopin', 'date of birth', '1771-04-15T00:00:00Z')]Above all, Fré dé ric Chopin's father is Nicolas Chopin, but he was born onApril 15, 1771, not June 17, 1771.Question: When is Fré dé ric Chopin's father's birthday?Here's", "figure_id": "tab_1", "figure_label": "Question", "figure_type": "table" }, { "figure_caption": "Fré dé ric Chopin's father is Nicolas Chopin, he was born on April 15, 1771. The evidence suggests that Nicolas Chopin was born on April 15, 1771, not June 17, 1771 as stated in the proposed answer.", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Results on three datasets using ChatGPT and text-davinci-003. We implement CoT using the prompt provided by CRITIC. QKR uses the same entity detection and fact selection method as KGR. 
We report both EM and F1 scores in the table.", "figure_data": "Simple QuestionMintakaHotpotQAChatGPT text-davinci-003ChatGPT text-davinci-003ChatGPT text-davinci-003Vanilla22.0/28.934.7/45.142.9/56.136.7/44.818.4/31.622.4/34.6CoT10.0/11.838.0/46.753.1/59.346.3/57.924.5/34.329.2/40.5CRITIC12.0/14.338.0/46.751.0/58.644.4/54.030.6/41.727.1/38.9QKR54.0/60.256.0/62.048.0/54.644.0/51.728.0/38.122.0/31.9KGR (ours) 58.0/60.754.0/57.653.1/60.852.0/60.232.7/39.234.0/47.2Simple Question Mintaka HotpotQACoT14.0/21.926.0/28.312.0/19.6QKR40.0/44.026.5/32.412.2/17.6KGR(ours)46.0/46.926.5/34.010.2/20.6", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results on three datasets using Vicuna 13B. We report both EM and F1 scores in the table.", "figure_data": "3) By automatically generating and executing chain-of-verification via LLMs, our KGR approach exhibitsremarkable generalization capabilities in differentdatasets and is robust on open-domain setting. InHotpotQA, KGR performs well compared to the CoTand CRITIC methods. The HotpotQA presents an open-domain QA scenario where finding related triples in theKG can be challenging. Despite this difficulty, our methoddisplayed the ability to effectively utilize the searchedtriples and effectively leverage parametric knowledgeeven when no evidence was returned.4) Our framework can work well on compact size LMs,aligned LLMs, and misaligned LLMs, showing the gen-eralizability of KGR. We compare KGR with the strongbaselines CoT and QKR on Simple Question, Mintaka,and HotpotQA using Vicuna 13B. The result is shown inTable", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "In this case, the model-generated response shows a factual error Distribution of error case numbers across KGR stages: Analysis conducted on a sample of 50 instances from the Mintaka dataset and 50 instances from the Simple Question dataset reveals the occurrence of error numbers across various stages of the KGR process.in the initial reasoning step. It erroneously states that Alex Shevelev died in Moscow, Russia, whereas he actually passed away in Rome, Italy. After retrofitting this mistake, we encounter another factual error, which asserts that Rome is the capital of the Central Federal District. So, we need to retrofit it again based on the retrofitted response in the first iteration.", "figure_data": "Response RetrofittingMintaka Simple QuestionClaim VerificationFact SelectionEntity DetectionClaim Extraction024 Error Numbers68Figure 6:", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The city where Alex Shevelev died is the capital of what region? Answer: the Lazio region Initial prediction: Let's think step by step. Alex Shevelev died in Moscow , Russia. And it is the capital of the Central Federal District. So the answer is:", "figure_data": "Central Federal District.ClaimsFact KnowledgeVerificationRetrofitted ResponseAlex Shevelev died in( Alex Shevelev , place of death, Rome)The evidence suggests Alex ShevelevLet's think step by step. AlexKGRMoscow, Russia.died in Rome , not Moscow, Russia.Shevelev died in Rome, Italy .round1Moscow is the capital(Central Federal District, capital, Moscow ) The evidence suggests Moscow is theAnd it is the capital of the Central Federal District. 
So theof the Central Federalcapital of the Central Federal District.answer is: Central FederalDistrict.District.KGRAlex Shevelev died in Rome , Italy.( Alex Shevelev , place of death, Rome )The evidence suggests Alex Shevelev died in Rome.Shevelev died in Rome, Italy. Let's think step by step. Alex.round2Rome is the capital of the Central Federal Dis-( Rome , capital of, Lazio)The evidence suggests Rome is the cap-ital of Lazio not the Central FederalAnd it is the capital of Lazio . So the answer is: Laziotrict.District.", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Gnerated examples in HotpotQA by ChatGPT. We show the multi-turn retrofitting process of KGR. The red color refers to retrofitted factual statements. The other green colors refer to critical entities in claims. in different stages. The claim extraction often fails to express the central claim adequately, sometimes due to excessive use of pronouns that confuse the model's comprehension. Entity detection has issues with entity extraction granularity. It captures too many common entities like \"films\" or \"apple\", leading to excessive and useless triples for the claim verification. The fact selection has difficulty extracting the critical triples between multiple triples that contain noise.", "figure_data": "", "figure_id": "tab_8", "figure_label": "3", "figure_type": "table" } ]
Xinyan Guan; Yanjiang Liu; Hongyu Lin; Yaojie Lu; Ben He; Xianpei Han; Le Sun
[ { "authors": "A Agarwal; S Gawade; A P Azad; P Bhattacharyya", "journal": "", "ref_id": "b0", "title": "KITLM: Domain-Specific Knowledge InTegration into Language Models for Question Answering", "year": "2023" }, { "authors": "J Baek; A F Aji; A Saffari", "journal": "", "ref_id": "b1", "title": "Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering", "year": "2023" }, { "authors": "S Biderman; H Schoelkopf; Q G Anthony; H Bradley; K O'brien; E Hallahan; M A Khan; S Purohit; U S Prashanth; E Raff", "journal": "", "ref_id": "b2", "title": "Pythia: A suite for analyzing large language models across training and scaling", "year": "2023" }, { "authors": " Pmlr", "journal": "", "ref_id": "b3", "title": "", "year": "" }, { "authors": "A Bordes; N Usunier; S Chopra; J Weston", "journal": "", "ref_id": "b4", "title": "Large-scale simple question answering with memory networks", "year": "2015" }, { "authors": "J Brank; G Leban; M Grobelnik", "journal": "", "ref_id": "b5", "title": "Annotating documents with relevant wikipedia concepts", "year": "2017" }, { "authors": "S Cao; J Shi; L Pan; L Nie; Y Xiang; L Hou; J Li; B He; H Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "KQA Pro: A Dataset with Explicit Compositional Programs for Complex Question Answering over Knowledge Base", "year": "2022" }, { "authors": "H Chase", "journal": "", "ref_id": "b7", "title": "LangChain", "year": "2022" }, { "authors": "A Chen; P Pasupat; S Singh; H Lee; K Guu", "journal": "", "ref_id": "b8", "title": "PURR: Efficiently Editing Language Model Hallucinations by Denoising Language Model Corruptions", "year": "2023" }, { "authors": "I.-C Chern; S Chern; S Chen; W Yuan; K Feng; C Zhou; J He; G Neubig; P Liu", "journal": "", "ref_id": "b9", "title": "FacTool: Factuality Detection in Generative AI-A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios", "year": "2023" }, { "authors": "W Commons", "journal": "", "ref_id": "b10", "title": "Main Page -Wikimedia Commons, the free media repository", "year": "2023-08" }, { "authors": "L Gao; Z Dai; P Pasupat; A Chen; A T Chaganty; Y Fan; V Zhao; N Lao; H Lee; D.-C Juan", "journal": "", "ref_id": "b11", "title": "Rarr: Researching and revising what language models say, using language models", "year": "2023" }, { "authors": "Z Gou; Z Shao; Y Gong; Y Shen; Y Yang; N Duan; W Chen", "journal": "", "ref_id": "b12", "title": "Critic: Large language models can self-correct with tool-interactive critiquing", "year": "2023" }, { "authors": "S Gunasekar; Y Zhang; J Aneja; C C T Mendes; A Del Giorno; S Gopi; M Javaheripi; P Kauffmann; G De Rosa; O Saarikivi", "journal": "", "ref_id": "b13", "title": "Textbooks Are All You Need", "year": "2023" }, { "authors": "M Honnibal; I Montani; S Van Landeghem; A Boyd", "journal": "", "ref_id": "b14", "title": "spaCy: Industrial-strength Natural Language Processing in Python", "year": "2020" }, { "authors": "Z Ji; N Lee; R Frieske; T Yu; D Su; Y Xu; E Ishii; Y J Bang; A Madotto; P Fung", "journal": "ACM Computing Surveys", "ref_id": "b15", "title": "Survey of hallucination in natural language generation", "year": "2023" }, { "authors": "Z Ji; Z Liu; N Lee; T Yu; B Wilie; M Zeng; P Fung", "journal": "", "ref_id": "b16", "title": "RHO: Reducing Hallucination in Open-domain Dialogues with Knowledge Grounding", "year": "2022" }, { "authors": "J Jiang; K Zhou; Z Dong; K Ye; W X Zhao; J.-R Wen", "journal": "", "ref_id": "b17", "title": "StructGPT: A general 
framework for Large Language Model to Reason on Structured Data", "year": "2023" }, { "authors": "D Kang; T Hashimoto", "journal": "", "ref_id": "b18", "title": "Improved Natural Language Generation via Loss Truncation", "year": "2020" }, { "authors": "K Lee; D Ippolito; A Nystrom; C Zhang; D Eck; C Callison-Burch; N Carlini", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Deduplicating Training Data Makes Language Models Better", "year": "2022" }, { "authors": "B Li; G Fang; Y Yang; Q Wang; W Ye; W Zhao; S Zhang", "journal": "", "ref_id": "b20", "title": "Evaluating ChatGPT's Information Extraction Capabilities: An Assessment of Performance, Explainability, Calibration, and Faithfulness", "year": "2023" }, { "authors": "J Li; T Tang; W X Zhao; J Wang; J.-Y Nie; J.-R Wen", "journal": "", "ref_id": "b21", "title": "The Web Can Be Your Oyster for Improving Large Language Models", "year": "2023" }, { "authors": "N F Liu; K Lin; J Hewitt; A Paranjape; M Bevilacqua; F Petroni; P Liang", "journal": "", "ref_id": "b22", "title": "Lost in the Middle: How Language Models Use Long Contexts", "year": "2023" }, { "authors": "T Liu; X Zheng; B Chang; Z Sui", "journal": "", "ref_id": "b23", "title": "Towards Faithfulness in Open Domain Table-to-text Generation from an Entity-centric View", "year": "2021" }, { "authors": "X Liu; H Lai; H Yu; Y Xu; A Zeng; Z Du; P Zhang; Y Dong; J Tang", "journal": "", "ref_id": "b24", "title": "WebGLM: Towards An Efficient Web-Enhanced Question Answering System with Human Preferences", "year": "2023" }, { "authors": "P Lu; B Peng; H Cheng; M Galley; K.-W Chang; Y N Wu; S.-C Zhu; J Gao", "journal": "", "ref_id": "b25", "title": "Chameleon: Plug-andplay compositional reasoning with large language models", "year": "2023" }, { "authors": "Y Lu; M Bartolo; A Moore; S Riedel; P Stenetorp", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity", "year": "2022" }, { "authors": "P Manakul; A Liusie; M J Gales", "journal": "", "ref_id": "b27", "title": "Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models", "year": "2023" }, { "authors": "Meta ", "journal": "OpenAI", "ref_id": "b28", "title": "Wikimedia movement -Meta, discussion about Wikimedia projects", "year": "2022" }, { "authors": "L Ouyang; J Wu; X Jiang; D Almeida; C Wainwright; P Mishkin; C Zhang; S Agarwal; K Slama; A Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b29", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "P Sen; A F Aji; A Saffari", "journal": "", "ref_id": "b30", "title": "Mintaka: A complex, natural, and multilingual dataset for end-to-end question answering", "year": "2022" }, { "authors": "R Tian; S Narayan; T Sellam; A P Parikh", "journal": "", "ref_id": "b31", "title": "Sticking to the facts: Confident decoding for faithful data-totext generation", "year": "2019" }, { "authors": "H Touvron; T Lavril; G Izacard; X Martinet; M.-A Lachaux; T Lacroix; B Rozière; N Goyal; E Hambro; F Azhar; A Rodriguez; A Joulin; E Grave; G Lample", "journal": "", "ref_id": "b32", "title": "LLaMA: Open and Efficient Foundation Language Models", "year": "2023" }, { "authors": "D Vrandečić; M Krötzsch", "journal": "Communications of the ACM", "ref_id": "b33", "title": "Wikidata: a free collaborative knowledgebase", "year": 
"2014" }, { "authors": "J Wei; X Wang; D Schuurmans; M Bosma; F Xia; E Chi; Q V Le; D Zhou", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b34", "title": "Chain-of-thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Z Yang; P Qi; S Zhang; Y Bengio; W W Cohen; R Salakhutdinov; C D Manning", "journal": "", "ref_id": "b35", "title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering", "year": "2018" }, { "authors": "T Zhang; C Wang; N Hu; M Qiu; C Tang; X He; J Huang", "journal": "", "ref_id": "b36", "title": "DKPLM: Decomposable Knowledgeenhanced Pre-trained Language Model for Natural Language Understanding", "year": "2022" }, { "authors": "Z Zhang; X Han; Z Liu; X Jiang; M Sun; Q Liu", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "ERNIE: Enhanced Language Representation with Informative Entities", "year": "2019" }, { "authors": "Z Zhao; E Wallace; S Feng; D Klein; S Singh", "journal": "", "ref_id": "b38", "title": "Calibrate before use: Improving few-shot performance of language models", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b39", "title": "", "year": "" }, { "authors": "C Zheng; L Li; Q Dong; Y Fan; Z Wu; J Xu; B Chang", "journal": "", "ref_id": "b40", "title": "Can We Edit Factual Knowledge by In-Context Learning", "year": "2023" } ]
[]
[ { "figure_ref": [ "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11" ], "table_ref": [], "text": "Scene Text Recognition (STR) adopts computer vision and machine learning methods [1,2] to recognize individual characters or words in a range of diverse and intricate scenarios, and has found utility across various practical applications. However, this technique remains susceptible to the limitations posed by low-resolution (LR) images, which may engender erroneous recognition results due to image blurring or spatial distortion.\nIn recent years, Scene Text Image Super-Resolution (STISR) has emerged as a pivotal area of research, aiming to enhance the quality and clarity of scene text images. In the early stage, researchers employed conventional Single Image Super-Resolution (SISR) methods [3] on manually downsampled datasets of scene text images. To further enhance the quality of generated text region, the Text Super-Resolution Network (TSRN) [4] is introduced alongside a specialized STISR dataset named TextZoom. Recent state-of-the-art ap-proaches [5,6,7,8] mostly use diverse forms of text-based guidance such as text mask and recognition results to furnish semantic-level information to TSRN backbone. Yet, despite the growing complexity of their guidance mechanisms, these approaches still fall short in generating satisfactory results when confronted with extremely challenging scenarios such as heavily blurred images, as depicted in Figure 1. Acknowledging the inherent limitations posed by the generation capability of TSRN backbones in the aforementioned methods, we propose RGDiffSR, based on generative diffusion model. Diffusion models have emerged as the forefront of generative models recently, not only achieving SOTA performance in general SISR tasks [9,10], but also exhibiting notable power in integrating cross-modal information in large text-to-image diffusion models such as GLIDE [11] and Imagen [12]. By employing the diffusion sampling process, RGDiffSR generates text images exhibiting heightened fidelity and diversity, even in challenging scenarios. Furthermore, to guide the model recovering text regions with more distinctive character shapes, we introduce a Recognition-Guided Denoising Network. This novel component leverages both the low-resolution image and recognition results from a STR recognizer as conditions within the diffusion process, merging semantic guidance with pixel-level information through an attention mechanism." }, { "figure_ref": [], "heading": "DPMN(+TATT)", "publication_ref": [], "table_ref": [], "text": "LR RGDiffSR HR\nOur contributions can be summarized as follows: Denoising Network provides noise prediction generated with semantic guidance in diffusion process, resulting in a substantial improvement in the legibility of text regions. (3) Experiment results show that RGDiffSR surpasses existing STISR methods largely in terms of recognition accuracy without compromising fidelity." }, { "figure_ref": [], "heading": "METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Model Overview", "publication_ref": [], "table_ref": [], "text": "Given low-resolution image x LR ∈ R H×W ×C , where H, W, C represents height, width and channels respectively, the goal of STISR is to generate high-resolution images x SR ∈ R f H×f W ×C that enhance legibility for both human and artificial recognizers. Here f signifies the scale factor. 
The overall architecture of RGDiffSR is shown in Fig. 2. We adopt the Latent Diffusion Model as the basic super-resolution pipeline to reduce the computational complexity while retaining the generative capability of diffusion models. During the training stage, the model is given an LR-HR image pair (x_LR, x_HR ∈ R^{fH×fW×C}). The HR image x_HR is first compressed by the latent encoder E into a latent representation z ∈ R^{H×W×C}, which shares the same dimensions as the LR image, and is then perturbed with a series of Gaussian noises, eventually turning into z_T following a Gaussian distribution. Meanwhile, the LR image is fed into the recognition guidance generation module to extract the recognition guidance c_RG, which is sent into the diffusion model along with the LR image as a condition to guide the denoising network in recovering the original latent z_0 from the Gaussian vector z_T. Finally, the SR image x_SR is reconstructed from z_0 through the latent decoder D." }, { "figure_ref": [], "heading": "Diffusion models", "publication_ref": [ "b12", "b13" ], "table_ref": [], "text": "The latent encoder/decoder is a VQ-regularized [13] 2x autoencoder trained separately following [14], compressing the HR image to a perceptually equivalent latent space to lower the computational demands. The diffusion model can then be decomposed into two processes: the forward diffusion process and the reverse process. In the forward process, a sequence of small Gaussian noises is gradually added to the latent vector z for T steps. The noise magnitudes are controlled by a variance schedule {β_t}_{t=1}^{T}. Supposing the latent data distribution is q(z), the forward process can be formulated as:
q(z_t \mid z_{t-1}) = \mathcal{N}\!\left(z_t;\ \sqrt{1-\beta_t}\, z_{t-1},\ \beta_t I\right) \quad (1)
Using the reparameterization trick, z_t at any timestep t can be sampled directly through:
q(z_t \mid z_0) = \mathcal{N}\!\left(z_t;\ \sqrt{\bar{\alpha}_t}\, z_0,\ (1-\bar{\alpha}_t) I\right) \quad (2)
where α_t = 1 - β_t and \bar{\alpha}_t = \prod_{k=1}^{t} \alpha_k. Eventually, the latent vector resembles a standard Gaussian:
q(z_T \mid z_0) = \mathcal{N}(z_T;\ 0,\ I) \quad (3)
Since each β_t is small enough, the distribution of the reverse step q(z_{t-1} | z_t) is also Gaussian. Therefore, the reverse step p_θ(z_{t-1} | z_t) can be modeled by a neural network θ, which corresponds to the denoising network. In super-resolution tasks, diffusion models generate the corresponding HR images by taking the LR image as a condition, and the conditional form of the reverse process can be written as:
p_\theta(z_{t-1} \mid z_t, x_{LR}) = \mathcal{N}\!\left(z_{t-1};\ \mu_\theta(z_t, x_{LR}, t),\ \tilde{\beta}_t I\right), \quad (4)
\mu_\theta(z_t, x_{LR}, t) = \frac{1}{\sqrt{\alpha_t}}\left(z_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(z_t, x_{LR}, t)\right), \quad (5)
where \tilde{\beta}_t = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_t}\,\beta_t, ε_θ is the noise predicted by the neural network, and the mean μ_θ is derived through Bayes' rule.
In the training phase, we optimize the variational lower bound of the negative log-likelihood E[-log p_θ(z_0)], yielding the L2-form loss function:
L_{DM} = \mathbb{E}_{E(x_{HR}),\ \epsilon \sim \mathcal{N}(0,1),\ t}\left[\lVert \epsilon - \epsilon_\theta(z_t, x_{LR}, t) \rVert^2\right] \quad (6)
In the inference phase, we first sample the latent vector from random Gaussian noise through the reverse process:
z_{t-1} = \frac{1}{\sqrt{\alpha_t}}\left(z_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(z_t, x_{LR}, t)\right) + \sqrt{\beta_t}\, \epsilon_t, \quad (7)
where ε_t ∼ N(0, I). The recovered latent vector z_0 is then decoded back to the original pixel space by the latent decoder, generating the SR image x_SR = D(z_0)."
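Eq. (2) and the simplified objective in Eq. (6) map directly onto a short conditional-denoising training step. The following PyTorch sketch only illustrates that loop under assumed interfaces — `latent_encoder`, `recognizer`, and `denoiser` are placeholders for the networks described above, and the linear β schedule is an assumption rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # assumed linear variance schedule {beta_t}
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)    # cumulative product \bar{alpha}_t

def q_sample(z0, t, noise):
    """Closed-form forward diffusion, Eq. (2): z_t = sqrt(a_bar_t) z_0 + sqrt(1 - a_bar_t) eps."""
    a_bar = alpha_bars.to(z0.device)[t].view(-1, 1, 1, 1)
    return a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * noise

def training_step(x_hr, x_lr, latent_encoder, recognizer, denoiser):
    """One step of the conditional denoising objective, Eq. (6)."""
    with torch.no_grad():
        z0 = latent_encoder(x_hr)            # HR image compressed into the latent space
        c_rg = recognizer(x_lr)              # recognition guidance c_RG from the LR image
    t = torch.randint(0, T, (z0.size(0),), device=z0.device)
    noise = torch.randn_like(z0)
    z_t = q_sample(z0, t, noise)
    noise_pred = denoiser(z_t, x_lr, t, c_rg)    # epsilon_theta(z_t, x_LR, t)
    return F.mse_loss(noise_pred, noise)         # || eps - eps_theta ||^2
```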
}, { "figure_ref": [], "heading": "Recognition Guidance Generation Module", "publication_ref": [], "table_ref": [], "text": "The Recognition Guidance Generation Module (RGGM) is a STR recognizer, which takes the LR image as input and predicts the probability distribution of which class each character belongs to. Thus the recognition guidance can be denoted as:\nc RG = R(x LR ) ∈ R L×|A| ,\nwhere L denotes the max predict length, and |A| is the size of the character set. Considering the significance of recognition guidance in enhancing the quality of text regions within the SR image, elevating the recognition accuracy of the recognizer is noticeably beneficial. Consequently, during the training phase, the recognizer is fine-tuned by a text recognition loss:\nL recog = ∥R(x LR ) -R(x HR )∥ 1(8)" }, { "figure_ref": [], "heading": "Recognition-Guided Denoising Network", "publication_ref": [], "table_ref": [], "text": "The recognition-guided denoising network is based on U-Net with the attention mechanism, as depicted in Fig 2 . Following the typical U-Net architecture, the denoising network consists of 4 encoder blocks, 1 middle block, and 4 decoder blocks.\nEach block b i takes the feature f i-1 from the former block and the timestep embedding t e as input, where t e is embedded through a sinusoidal positional encoding, and f 0 is the concatenation of z T and x LR . Specifically, in each Recognition-Guided Encoder/Decoder Block (RGEB/RGDB) B i , the image feature f i-1 will go through 2 recognition-guided residual block (RGRB) and a downsample/upsample layer. In RGRB, f i-1 is first fused with the timestep embedding t e through an element-wise sum in the common residual block. Subsequently, the output h res i,0 is fed into a Multihead Self Attention (MSA) layer that employs dot-product attention to capture global correlations between pixels. After passing layer-norm layers(omitted in Fig 2) and a Feed Forward Network(FFN), the feature h M SA i,0 is sent to Muitihead Cross Attention(MCA) layer to absorb the semantic from recognition guidance c RG . The whole process in RGEB can be written as:\nh res i,0 = Res(f i-1 , t e )(9)\nh M SA i,0 = F F N (LN (M SA(h res i,0 )))(10)\nh M CA i,0 = F F N (LN (M CA(h M SA i,0 , c RG , c RG ))) (11) h M CA i,1 = RGRB 1 (h M CA i,0\n, t e , c RG )\nf i = Downsample(h M CA i,1 )(12)\n, where Eq. 9,10,11 compose the first residual block RGRB 0 .\nTo further extract the global relation, an MSA layer is also applied between 2 residual blocks in the middle block. The configurations in decoder blocks mirror those in encoder blocks, with the only alteration of replacing Downsample layers with upsample layers.\nThe attention design empowers the denoising network to effectively leverage semantic information contained in text regions, consequently generating images that demonstrate more distinct character shapes." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b18", "b19", "b16" ], "table_ref": [], "text": "RGDiffSR is implemented in pytorch 2.0.1, trained with an AdamW optimizer for 250 epochs. The batch size is 64, and the learning rate is set to 1e-6. The channel size of the feature in encoder block is multiplied by 1,2,2,4 in order, with 160 as the initial size. The number of heads in each MSA and MCA layer is 8. 
In training, the total timesteps of diffusion process is set to 1000, and the noise weight {β t } T t=1 is scheduled following [19]. In sampling, a DDIM [20] sampler is used to accelerate the reverse process, with 200 DDIM steps. CRNN [17] is used as the STR recognizer in recognition guidance generation module." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b3" ], "table_ref": [], "text": "All experiments are conducted on the TextZoom [4] dataset, which contains 21,740 LR-HR text image pairs, captured in diverse real-world scenarios using cameras with varying focal lengths. 4,373 pairs of them are divided into three distinct test subsets based on their recognition difficulty, namely easy (1,619 pairs), medium (1,411 pairs), and hard (1,343 pairs). The size of LR and HR images are standardized to 16 × 64 and 32 × 128 respectively." }, { "figure_ref": [], "heading": "Comparison to State-of-the-Arts", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "The recognition accuracy presented in Table 1 emphasizes the superior capabilities of RGDiffSR across various situations compared to previous SOTA method. Notably, our approach performs even better in hard cases, displaying a substantial accuracy improvement of 3.8% and 3.9% respectively when employing ASTER and CRNN as recognizers. Additionally, the visual representation of the generated SR images, as depicted in Figure 3, demonstrates that RGDiffSR also achieves impressive fidelity and legibility in generated images." }, { "figure_ref": [], "heading": "Effectiveness of Recognition-Guided Denoising Network", "publication_ref": [ "b14", "b15", "b4", "b6" ], "table_ref": [], "text": "To verify the effectiveness of Recognition-Guided Denoising Network, we perform experiments on U-net with different En-ASTER [15] MORAN [16] CRNN [ performance, albeit with a slight drop in accuracy. The improvement in PSNR/SSIM could be foreseen since the denoising network is optimized by pixel-wise loss functions. On the other hand, the decrease in accuracy indicates that higher fidelity in quantized metrics doesn't necessarily lead to higher recognition accuracy. This phenomenon is also observed in previous studies [5], [7], indicating the inherent trade-off between fidelity and accuracy." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a novel recognition-guided diffusion model for scene text image super-resolution (RGDiffSR). By integrating diffusion-based generative network and recognition guidance, RGDiffSR emerges as a cutting-edge solution for STISR, which exploits the inherent cross-modal generation capability of diffusion models, leading to improved fidelity and diversity in the generated images, especially in challenging scenarios. The recognition-guided denoising network serves as a critical component of RGDiffSR, expertly integrating semantic information into the diffusion process, which results in enhanced clarity and sharper character shapes in the generated images. Experiments on the TextZoom dataset show that RGDiffSR consistently outperforms existing SOTA methods across various metrics, bringing generative methods back to the frontier of STISR again." } ]
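To complement the training sketch earlier, the 200-step DDIM sampler mentioned in the experimental setup can be realized roughly as follows. This is a generic deterministic DDIM loop (η = 0) over a subsequence of the 1000 training timesteps; `denoiser`, `decoder`, `alpha_bars`, and `latent_shape` are placeholders under assumed interfaces, not the released implementation.

```python
import torch

@torch.no_grad()
def ddim_sample(denoiser, decoder, x_lr, c_rg, alpha_bars, latent_shape,
                ddim_steps=200, T=1000):
    """Deterministic DDIM sampling (eta = 0): denoise z_T -> z_0, then decode to x_SR."""
    device = x_lr.device
    alpha_bars = alpha_bars.to(device)
    z = torch.randn(latent_shape, device=device)                   # start from pure noise z_T
    timesteps = torch.linspace(T - 1, 0, ddim_steps, device=device).long()
    for i, t in enumerate(timesteps):
        a_t = alpha_bars[t]
        a_prev = alpha_bars[timesteps[i + 1]] if i + 1 < ddim_steps else torch.ones((), device=device)
        t_batch = torch.full((latent_shape[0],), int(t), device=device)
        eps = denoiser(z, x_lr, t_batch, c_rg)                      # predicted noise
        z0_pred = (z - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()       # predicted clean latent
        z = a_prev.sqrt() * z0_pred + (1.0 - a_prev).sqrt() * eps   # DDIM update
    return decoder(z)                                               # x_SR = D(z_0)
```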
Scene Text Image Super-Resolution (STISR) aims to enhance the resolution and legibility of text within low-resolution (LR) images, consequently elevating recognition accuracy in Scene Text Recognition (STR). Previous methods predominantly employ discriminative Convolutional Neural Networks (CNNs) augmented with diverse forms of text guidance to address this issue. Nevertheless, they remain deficient when confronted with severely blurred images, owing to their insufficient generation capability when little structural or semantic information can be extracted from the original images. Therefore, we introduce RGDiffSR, a Recognition-Guided Diffusion model for scene text image Super-Resolution, which exhibits great generative diversity and fidelity even in challenging scenarios. Moreover, we propose a Recognition-Guided Denoising Network to guide the diffusion model toward LR-consistent results through succinct semantic guidance. Experiments on the TextZoom dataset demonstrate the superiority of RGDiffSR over prior state-of-the-art methods in both text recognition accuracy and image fidelity.
RECOGNITION-GUIDED DIFFUSION MODEL FOR SCENE TEXT IMAGE SUPER-RESOLUTION
[ { "figure_caption": "Fig. 1 .1Fig. 1. SR images generated by DPMN+TATT (bottom-left) and RGDiffSR (bottom-right) from heavily blurred LR image.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "( 1 )RecognitionFig. 2 .12Fig. 2. The overall architecture of RGDiffSR and the structure of Recognition-Guided Encoder Block (bottom-right).", "figure_data": "", "figure_id": "fig_1", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Comparison of downstream text recognition accuracy with other SOTA methods on TextZoom dataset. Bolded numbers denote the best results. Qualitative comparison with other SOTA methods.", "figure_data": "17]", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Average", "figure_data": "methodPSNR SSIMaccuracybicubic20.35 0.696147.2%TATT21.52 0.793063.6%C3-STISR21.60 0.793164.1%DPMN(+TATT)21.49 0.792563.9%TSAN22.10 0.783564.1%RGDiffSR-25021.31 0.758266.2%RGDiffSR-100021.88 0.796265.9%recognition accuracy on ASTER and pa-rameter amount when different Encoder/Decoder Blocks arerecognition-guided.coder/Decoder Blocks being guided with recognition results.As shown in Table 2, when all Encoder/Decoder blocks arerecognition-guided, the average accuracy is largely boostedby 7%. Intriguingly, when only the first 2 Encoder Blocksand the last 2 Decoder Blocks are recognition-guided, the re-sult only exhibits a marginal drop of 0.1% in comparison tothe fully guided version. This observation strongly impliesthat recognition guidance more effectively interacts with shal-low feature maps, introducing the light version of RGDiffSR,which not only preserves accuracy but also reduces parameteramount by 25%.3.5. Trade-off between Fidelity and AccuracyThough the generated SR images exhibit commendable fi-delity in Fig 3, the model achieving the highest recognitionaccuracy seems comparatively weaker in traditional fidelityevaluation metrics like PSNR and SSIM. However, as de-picted in Table 3, if RGDiffSR is trained to 1000 epochs, themodel also manages to achieve a competitive PSNR/SSIM", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "PSNR", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" } ]
Yuxuan Zhou; Liangcai Gao; Zhi Tang; Baole Wei
[ { "authors": "S Fang; H Xie; Y Wang; Z Mao; Y Zhang", "journal": "IEEE Computer Society", "ref_id": "b0", "title": "Read like humans: Autonomous, bidirectional and iterative language modeling for scene text recognition", "year": "" }, { "authors": "Darwin Bautista; Rowel Atienza", "journal": "Springer Nature Switzerland", "ref_id": "b1", "title": "Scene text recognition with permuted autoregressive sequence models", "year": "2022" }, { "authors": "Chao Dong; Chen Change Loy; Kaiming He; Xiaoou Tang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b2", "title": "Image super-resolution using deep convolutional networks", "year": "2016-02" }, { "authors": "Wenjia Wang; Enze Xie; Xuebo Liu; Wenhai Wang; Ding Liang; Chunhua Shen; Xiang Bai", "journal": "Springer International Publishing", "ref_id": "b3", "title": "Scene text image super-resolution in the wild", "year": "2020" }, { "authors": "Jingye Chen; Haiyang Yu; Jianqi Ma; Bin Li; Xiangyang Xue", "journal": "CoRR", "ref_id": "b4", "title": "Text gestalt: Stroke-aware scene text image super-resolution", "year": "2021" }, { "authors": "Jianqi Ma; Zhetong Liang; Lei Zhang", "journal": "", "ref_id": "b5", "title": "A text attention network for spatial deformation robust scene text image super-resolution", "year": "2022-06" }, { "authors": "Minyi Zhao; Miao Wang; Fan Bai; Bingjia Li; Jie Wang; Shuigeng Zhou", "journal": "", "ref_id": "b6", "title": "C3-stisr: Scene text image superresolution with triple clues", "year": "2022" }, { "authors": "Shipeng Zhu; Zuoyan Zhao; Pengfei Fang; Hui Xue", "journal": "", "ref_id": "b7", "title": "Improving scene text image super-resolution via dual prior modulation network", "year": "2023" }, { "authors": "Chitwan Saharia; Jonathan Ho; William Chan; Tim Salimans; David J Fleet; Mohammad Norouzi", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b8", "title": "Image super-resolution via iterative refinement", "year": "2022" }, { "authors": "Haoying Li; Yifan Yang; Meng Chang; Shiqi Chen; Huajun Feng; Zhihai Xu; Qi Li; Yueting Chen", "journal": "Neurocomputing", "ref_id": "b9", "title": "Srdiff: Single image super-resolution with diffusion probabilistic models", "year": "2022-03" }, { "authors": "Alex Nichol; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b10", "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2021" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans; Jonathan Ho; David J Fleet; Mohammad Norouzi", "journal": "Curran Associates, Inc", "ref_id": "b11", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Patrick Esser; Robin Rombach; Bjorn Ommer", "journal": "", "ref_id": "b12", "title": "Taming transformers for high-resolution image synthesis", "year": "2021-06" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Bjorn Ommer", "journal": "", "ref_id": "b13", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022-06" }, { "authors": "Baoguang Shi; Mingkun Yang; Xinggang Wang; Pengyuan Lyu; Cong Yao; Xiang Bai", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b14", "title": "Aster: 
An attentional scene text recognizer with flexible rectification", "year": "2019" }, { "authors": "Canjie Luo; Lianwen Jin; Zenghui Sun", "journal": "CoRR", "ref_id": "b15", "title": "A multiobject rectified attention network for scene text recognition", "year": "2019" }, { "authors": "Baoguang Shi; Xiang Bai; Cong Yao", "journal": "CoRR", "ref_id": "b16", "title": "An end-toend trainable neural network for image-based sequence recognition and its application to scene text recognition", "year": "2015" }, { "authors": "Xiangyuan Zhu; Kehua Guo; Hui Fang; Rui Ding; Zheng Wu; Gerald Schaefer", "journal": "Proceedings of the AAAI Conference on Artificial Intelligence", "ref_id": "b17", "title": "Gradient-based graph attention for scene text image super-resolution", "year": "2023-06" }, { "authors": "Alex Nichol; Prafulla Dhariwal", "journal": "", "ref_id": "b18", "title": "Improved denoising diffusion probabilistic models", "year": "2021" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b19", "title": "Denoising diffusion implicit models", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 54.43, 533.95, 120.83, 12.48 ], "formula_id": "formula_0", "formula_text": "(x LR , x k HR ∈ R f H×f W ×C )." }, { "formula_coordinates": [ 2, 358.32, 388.43, 200.67, 9.68 ], "formula_id": "formula_1", "formula_text": "q(z t |z t-1 ) = N (z t ; 1 -β t z t-1 , β t I)(1)" }, { "formula_coordinates": [ 2, 363.78, 429.75, 195.22, 17.25 ], "formula_id": "formula_2", "formula_text": "q(z t |z 0 ) = N (z t ; √ ᾱt z 0 , (1 -ᾱt )I)(2)" }, { "formula_coordinates": [ 2, 389.18, 488.14, 169.81, 9.68 ], "formula_id": "formula_3", "formula_text": "q(z T |z 0 ) = N (z T ; 0, I)(3)" }, { "formula_coordinates": [ 2, 324.51, 594.2, 234.48, 39.85 ], "formula_id": "formula_4", "formula_text": "p θ (z t-1 |z t , x LR ) = N (z t-1 ; µ θ (z t , x LR , t), βt I), (4) µ θ (z t , x LR , t) = 1 √ α t (z t - 1 -α t √ 1 -ᾱt ϵ θ (z t , x LR , t)),(5)" }, { "formula_coordinates": [ 2, 330.24, 708.67, 228.76, 12.64 ], "formula_id": "formula_5", "formula_text": "L DM = E E(x HR ),ϵ∼N (0,1),t [∥ϵ -ϵ θ (z t , x LR , t) 2 ∥](6)" }, { "formula_coordinates": [ 3, 63.42, 103.25, 234.79, 23.23 ], "formula_id": "formula_6", "formula_text": "z t-1 = 1 √ α t (z t - 1 -α t √ 1 -ᾱt ϵ θ (z t , x LR , t)) + β t ϵ t ,(7)" }, { "formula_coordinates": [ 3, 54.43, 249.74, 116.19, 11.22 ], "formula_id": "formula_7", "formula_text": "c RG = R(x LR ) ∈ R L×|A| ," }, { "formula_coordinates": [ 3, 108.39, 341.56, 189.82, 9.65 ], "formula_id": "formula_8", "formula_text": "L recog = ∥R(x LR ) -R(x HR )∥ 1(8)" }, { "formula_coordinates": [ 3, 77.75, 641.71, 220.46, 12.69 ], "formula_id": "formula_9", "formula_text": "h res i,0 = Res(f i-1 , t e )(9)" }, { "formula_coordinates": [ 3, 69.59, 658.39, 228.61, 12.69 ], "formula_id": "formula_10", "formula_text": "h M SA i,0 = F F N (LN (M SA(h res i,0 )))(10)" }, { "formula_coordinates": [ 3, 68.66, 675.07, 229.54, 29.38 ], "formula_id": "formula_11", "formula_text": "h M CA i,0 = F F N (LN (M CA(h M SA i,0 , c RG , c RG ))) (11) h M CA i,1 = RGRB 1 (h M CA i,0" }, { "formula_coordinates": [ 3, 87.24, 694.15, 210.96, 26.98 ], "formula_id": "formula_12", "formula_text": "f i = Downsample(h M CA i,1 )(12)" } ]
2023-11-22
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b10", "b2", "b9", "b11", "b12", "b2", "b13", "b15", "b16", "b17", "b18", "b19", "b18", "b20", "b21", "b12", "b22", "b18", "b23", "b24", "b25", "b26", "b27", "b28", "b29" ], "table_ref": [], "text": "A caricature is a visual portrayal of a person that simplifies or exaggerates their most visible characteristics through sketches or creative drawings [1], which are primarily used to express humor and for entertainment. In traditional practice, caricatures are manually created by artists who carefully analyze the variations between an individual's unique features and the standard human facial characteristics. It is becoming more intriguing and essential to explore the automated generation of caricatures from given real images, as crafting a caricature demands significant effort, labor, and skill from the artist.\nComputer vision applications encompass a broad spectrum, including the capability to create caricatures without requiring an artist's direct involvement. Much like the process artists employ when creating caricatures, a computer vision-based approach can also be divided into two key phases. Firstly, it involves identifying distinctive features and enhancing them, and secondly, infusing the exaggerated image with artistic styles to match the artist's preferences. This division into two independent categories adds flexibility and disentanglement, resulting in the creation of high-quality caricatures.\nThe automated generation of a caricature from real images is a non-trivial challenge. Apart from imbuing the photo with a texture style reminiscent of caricatures, we should also take spatial exaggerations into consideration [2], [3]. Previous methods for creating facial caricatures required the expertise of professionals to achieve satisfactory outcomes [4]. The issue of exaggerating facial features remains an open problem in research areas like detection [5] and recognition [6].\nCertain methods incorporate additional data, such as user interaction [7] or by increasing the shape representation's divergence from the average, as in the case of 2D landmarks or 3D meshes [8]- [11] to tackle the exaggeration challenge. With the advancement in computer vision techniques, numerous automated caricature generation methods accomplish the exaggeration task by employing deep neural networks in an image-to-image translation manner [3], [10], [12], [13]. Certain approaches employ point-based warping techniques to convert real images into caricatures [3].\nFurthermore, there has been considerable research into automatic portrait style transfer, which is based on image style transfer [14]- [16] and image-to-image translation [17]. Deep learning techniques have been very successful in performing image translation by learning from representation data examples [18], [19]. Unfortunately, paired real and caricature are not commonly found. Training the translation process in a supervised manner is not practical, and the creation of such a dataset can be a laborious task. One of the readily accessible caricature datasets is WebCaricature [20], encompassing 6042 caricatures and 5974 photographs spanning 252 distinct identities. 
However, it's worth noting that the dataset's quality is subpar, with caricatures exhibiting inconsistent styles and exaggerations.\nDue to the scarcity of paired image data, image-to-image translation is increasingly shifting towards training with unpaired images [19], [21], [22], as well as gaining insights from unpaired portraits and caricatures [13], [23]. Several studies [19], [24] have introduced unsupervised cross-domain image translation approaches, aiming to learn both geometric deformation and appearance translation simultaneously. However, training on unpaired images may introduce significant variations in exaggerations due to the substantial gap in shape and appearance between real and caricature images, often leading to unsatisfactory outcomes. Additionally, differences in poses and scales among images can make it challenging to differentiate facial features.\nNeural style transfer techniques employ deep neural networks to transfer artistic styles from a reference to images and excel in stylizing appearances but do not enhance the geometric features [25], [26]. However, the advancement of Generative Adversarial Networks (GANs) [27] has led to the emergence of state-of-the-art face generators like StyleGAN [28], [29], which offer disentangled and high-fidelity images through transfer learning.\nOur face caricature approach is different from the previous methods. Our main goal is to exaggerate facial features while keeping them realistic and usable in real-world scenarios. Following the work in [30], our caricature exaggerates the eye and mouth regions, keeping the face contour and other facial features unchanged. We create our realistic caricatures with the focus on three goals: (1) realistic face caricature with exaggerated eyes and mouth region, (2) making sure our caricature identity is the same as the input face,(3) our caricature should be realistic enough to be usable in realworld scenarios, and (4) unconditional visual style transfers and conserving all facial attribute from the real image to the caricature faces. We proposed a novel caricature creation with a realistic style applicable to the real world. Style translation refers to the conversion of one style representation to another style representation. We utilize an unpaired caricature learning method to achieve our goal. We exaggerate facial features and the stylization of appearance through a two-step process: face caricature generation and face caricature projection. The initial phase of the face caricature generation step is the creation of new caricature face datasets from real images. We then train a style generator using the real and the new caricature datasets, discussed in Section III-B. The face caricature projection employs an encoder which is trained with our pretrained generator. The encoder is trained using real and our new caricature images to project similar real and caricature faces. Additionally, using the projected real and caricature images, we achieve an incremental facial exaggeration from the real to the caricature images, which provides flexibility in our method. The projection of the real and caricature images preserves the facial identity, attributes, and expressions from the input images, discussed in Section III-C. Our method also addresses facial occlusions like reading glasses or sunglasses to enhance the robustness of our model. 
This work presents several significant contributions:\n1) Our approach employs an unpaired learning procedure to produce caricatured faces from real facial images. We don't require a pair of real and caricature images. It accomplishes both facial exaggeration and style transfer from the real face image.\n2) We produce caricature face datasets from real face images. We train StyleGAN using the real and the new caricature faces, enabling the synthesis of different styles of real and caricature faces. We further designed an encoder to get the full translation of expressions, poses, and attributes from the real faces. 3) Our caricature projection provides an incremental exaggeration of facial features. This incremental process provides flexibility in our caricature projection as the extent of facial exaggeration can be performed according to one's preference. 4) Our generated caricatures exhibit superior realism and quality compared to state-of-the-art methods. Our caricatures maintain high quality, making them more visually convincing when used in real-world scenarios.\nThe remainder of the paper is structured as follows: Section II provides the necessary face caricature background and reviews recent advances in facial image generation and StyleGAN inversion; Section III outlines the two-stage proposed framework, explaining face caricature generation and face caricature projection; Section IV details implementation settings and dataset uses; Section V details experiment settings and evaluates the results; and Section VI concludes this paper with discussions." }, { "figure_ref": [], "heading": "II. RELATED WORK A. Caricature Creation", "publication_ref": [ "b30", "b31", "b32", "b33", "b34", "b35", "b36", "b12", "b2", "b12", "b2", "b11", "b37" ], "table_ref": [], "text": "Creating caricatures entails the recognition and exaggeration of unique facial characteristics while preserving the individual's identity. Caricatures can be crafted using three approaches: distorting facial attributes, employing style transfer, or utilizing methods that combine both techniques.\nConventional techniques operate by amplifying the deviation from the average, achieved through methods such as explicitly identifying and warping landmarks [31], [32] or employing data-driven approaches to estimate unique facial attributes [33], [34]. As generative networks have advanced, some image-to-image translation methods [35], [36] have been undertaken to incorporate style transfer. Nevertheless, these networks are unsuitable for applications involving significant spatial variations, resulting in outputs with diminished visual quality. Zhang et al. [37] introduced an approach that aims to acquire a disentangled feature representation of various facial attributes, enabling the generation of realistic portraits that exhibit appropriate exaggerations and a rich diversity in style.\nCao et al. [13] employ two CycleGANs, trained on image and landmark spaces, to handle texture rendering and geometry deformation. WarpGAN [3] surpasses visual quality and shape exaggeration, providing flexibility in spatial variability for both image geometry and texture. CariGAN [13], on the other hand, is a GAN trained with unpaired images, focusing on learning image-to-caricature translation. Shi et al. [3] introduce an endto-end GAN framework that simultaneously trains warping and style. 
AutoToon [12] utilizes deformation fields to apply exaggerations and is trained in a supervised manner using paired data derived from artist-warped photos to learn warping fields. However, it is limited to mapping to a single domain, making it unable to produce diverse exaggerations. Abdal et al. [38] introduced a technique for crafting 3D caricatures that permits the modification and animation of personalized artistic 3D avatars using artistic datasets." }, { "figure_ref": [], "heading": "B. Style Transfer", "publication_ref": [ "b38", "b39", "b40", "b41", "b26", "b42", "b43", "b21", "b20", "b44", "b27", "b45", "b46", "b47", "b48", "b49", "b50" ], "table_ref": [], "text": "One aspect of image synthesis that poses a challenge is style transfer, which aims to create a content image with multiple styles. Thanks to the practical ability of convolutional neural networks (CNNs) [39] to extract semantic features, numerous networks dedicated to style transfer have been developed. The initial style rendering process was introduced by Gatys et al. [40], who employed hierarchical features from a VGG network [41]. Gatys et al. [42] pioneered the first neural style transfer approach, utilizing a CNN to transfer style information from a style image to a content image. However, a drawback of this approach is that the style and content need to be similar, which is different from caricature images.\nA promising area of research lies in the application of Generative Adversarial Networks (GANs) [27] for image synthesis, which has yielded cutting-edge results in various domains such as text-to-image translation [43] and image inpainting [44]. Regarding unpaired image translation, methods like CycleGAN [22] have been utilized, leveraging a cycle consistency loss to achieve translation between different image domains. Additionally, approaches like StarGAN [21], [45] employ a single generator to learn mappings across various image domains. However, capturing the geometric transformations required for direct photo-to-caricature mapping in an image-to-image translation framework remains a challenging task.\nStyleGAN [28], [46] excels at producing high-fidelity facial images with fine-grained control over hierarchical channel styles. Many techniques leverage StyleGAN for the generation of high-quality images and the manipulation of facial characteristics, as well as for various applications related to faces, including swapping, restoration, de-aging, and reenactment [47], [48]. Pinkney and Adler [49] further enhanced StyleGAN using sparse cartoon data, demonstrating its effectiveness in generating lifelike cartoon faces. Additionally, DualStyleGAN [50] provides customizable control over dual style transfers, catering to both the extended artistic portrait domain and the original face domain. StyleCariGAN [51] produced shape exaggeration and stylization by mixing layers of photo and caricature styles." }, { "figure_ref": [ "fig_0" ], "heading": "III. STYLE BASED CARICATURE CREATION", "publication_ref": [ "b27", "b45" ], "table_ref": [], "text": "Our proposed method operates transparently and understandably, comprising two distinct stages: face caricature generation and face caricature projection, as illustrated in Figure 1. In the initial stage, our focus is creating new facial caricature datasets, which exaggerate eyes and mouth regions while preserving the facial contours. 
Subsequently, we train a style generator called StyleGAN [28] using the real and our new caricature datasets, which can generate highly realistic images in different styles. In the second stage, we design a projection model to produce high-quality caricature faces from real facial images and the incremental exaggeration of facial features. Our proposed method projects the genuine facial image into a caricature representation, emphasizing the unique and exaggerated facial features that constitute an individual's appearance.\nA. Background 1) StyleGAN: We use StyleGAN2's [46], a style-based network that controls the synthesis of images. To train StyleGAN, a large dataset of real images is used, which is then processed to learn the underlying patterns and characteristics of the data. We use real and caricature faces for our method. The model learns to generate real and caricature images visually similar to those in the training set.\n2) Projection Techniques: We can project an input image into an equivalent output image using the StyleGAN architecture by employing two distinct approaches: latent code optimization and encoder-based methods. Our approach is predominantly focused on encoder-based methods for several compelling reasons. Firstly, these methods offer significant speed advantages, as they can map the latent code in a single forward pass. The encoder-based approach contrasts with the optimization-based approach, which can be computationally demanding for each image. Secondly, the output of an encoder resides within a compact and well-defined space, rendering it more suitable for subsequent editing and manipulation tasks.\n3) Nature of StyleGAN Latent space: The StyleGAN latent space plays a crucial role in creating and manipulating the characteristics of the generated real and caricature faces. The latent space vectors control various aspects of image generation, like facial features, colors, textures, etc. Another characteristic of StyleGAN latent space is the nature of disentanglement, where each latent space direction corresponds to specific features or attributes of the generated image. A smooth interpolation between two points in the latent space creates images that transition between different attributes." }, { "figure_ref": [ "fig_1", "fig_2", "fig_2", "fig_3", "fig_4", "fig_5", "fig_1", "fig_3", "fig_6" ], "heading": "B. Face Caricature Generation", "publication_ref": [ "b27", "b51", "b52", "b53", "b54", "b55", "b56", "b57", "b57", "b58", "b27", "b51" ], "table_ref": [], "text": "In the face caricature generation, datasets are formed by creating exaggerated facial representations featuring enlarged eyes and mouths using real face images. Here, we also discuss generating exaggerated faces with facial obstacles, like the faces with eyeglasses. After creating caricature faces, We followed the generation process by training a style generator with our caricature face dataset. 1) Face Caricature Dataset Creation: To create our caricature faces, we utilize real-face images randomly sampled from the FFHQ [28] and CelebA-HQ [52] dataset. Both datasets provide a diverse range of genders, races, ages, expressions, and poses, ensuring the variety and representation of our caricature faces. The pipeline for caricature creation is divided into three stages: (i) facial landmark enlargement, (ii) face patch rescaling, and (iii) image matting, as illustrated in Figure 2.\nIn the first stage, we use landmark detectors to detect the facial landmarks in the real input image I real . 
Specifically, we employ a pre-trained detector from the Dlib library [53], which estimates the location map of the facial structure. The Dlib library detects 68 facial landmarks, each assigned specific (x, y) coordinates ranging from 0 to 67. These landmarks correspond to different parts of the face, such as the eyes, eyebrows, nose, mouth, and face contour, as depicted in Figure 3. We represent the input face image with identified landmarks as I l real , and the coordinates are highlighted with green markers. This initial stage of landmark detection provides crucial information about the facial structure, enabling us to proceed to the subsequent steps of face patches and blending, as well as image matting and blending. In the second stage, face patch rescaling, we perform several operations, including the production of face patches, the exaggeration of these patches, and blending them into the original image to create the caricature effect. Our focus for exaggeration is on the eyes and mouth regions of the face. To accurately target the eye regions, we group the landmark indexes into the left and right eyes, as depicted in Figure 3. The mouth area consists of the upper and lower lips, and we consider specific landmark indexes for these regions. In the case of the mouth, we utilize the top landmark indexes for the upper lip and the bottom indexes for the lower lip. Using these landmark indexes, we produce face patches that will undergo exaggeration. We achieve this by enlarging the coordinates of the landmarks corresponding to the eye and mouth regions, as illustrated in Figure 4. The resulting image I ls real displays the face with enlarged landmark coordinates, which are highlighted in pink. To further enhance the exaggeration effect, we scale the face patches to a factor of 1.5, resulting in exaggerated face patches represented as I p real . These exaggerated patches seamlessly blend into the original image I real .\nFor the blending process, we employ the Poisson image editing technique [54], which ensures seamless and natural integration of the exaggerated patches with the original image. This technique considers factors such as image illumination and texture, resulting in a visually pleasing caricature effect. By applying these operations, we can generate the final caricature image Îcari , where the distinctive exaggerated features, such as enlarged eyes and mouth, seamlessly blend into the original face image while maintaining a natural appearance. The Poisson editing method influences both image illumination and texture and is represented as follows:\nv = argmin v iϵS,jϵNi∩S ((v i -v j ) -(s i -s j )) 2 + iϵS,jϵNi∩¬S ((v i -t j ) -(s i -t j )) 2 ,(1)\nwhere υ represents the pixel values of the new image, s corresponds to the pixel values of the source image, t represents the pixel values of the target image, S signifies the destination domain, and N i denotes a set of neighboring pixels of i.\nIn the third stage, we tackle the problem of blurriness that can occur along the contours of the face, especially in cases where faces exhibit extreme poses during the blending process in stage two. To mitigate this blurring effect, we apply an image matting technique. First, we generate face masks from the previously obtained caricature image Îcari using a face segmentation method [55], resulting in a mask image Îfm cari . In this mask, the foreground corresponds to the face region, while the background encompasses the remaining areas. 
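The landmark enlargement, patch rescaling, and Poisson blending just described can be sketched with dlib and OpenCV, whose `seamlessClone` implements the Poisson editing of Eq. (1). This is an illustrative approximation rather than the authors' pipeline: the padding margin and the exact landmark grouping are assumptions, and faces whose enlarged patches reach the image border are not handled.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # standard dlib model

LEFT_EYE, RIGHT_EYE, MOUTH = range(36, 42), range(42, 48), range(48, 68)   # dlib 68-point indices

def exaggerate_region(image, landmarks, idxs, scale=1.5):
    """Crop the patch around one landmark group, enlarge it, and Poisson-blend it back (Eq. 1)."""
    pts = np.array([(landmarks.part(i).x, landmarks.part(i).y) for i in idxs], dtype=np.int32)
    x, y, w, h = cv2.boundingRect(pts)
    pad = int(0.25 * max(w, h))                         # margin so the whole feature is covered
    x, y = max(x - pad, 0), max(y - pad, 0)
    w = min(w + 2 * pad, image.shape[1] - x)
    h = min(h + 2 * pad, image.shape[0] - y)
    patch = cv2.resize(image[y:y + h, x:x + w], (int(w * scale), int(h * scale)))
    mask = np.full(patch.shape[:2], 255, dtype=np.uint8)
    center = (x + w // 2, y + h // 2)                   # keep the enlarged feature centred
    return cv2.seamlessClone(patch, image, mask, center, cv2.NORMAL_CLONE)

def make_caricature(image_bgr):
    """Enlarge both eyes and the mouth of every detected face, as in the dataset-creation stage."""
    out = image_bgr.copy()
    for face in detector(image_bgr, 1):
        lm = predictor(image_bgr, face)
        for region in (LEFT_EYE, RIGHT_EYE, MOUTH):
            out = exaggerate_region(out, lm, region)
    return out
```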
Notably, we perform segmentation only for the face region, excluding the hair, as blurring tends to occur mainly in the hair-background region.\nNext, we generate a trimap mask Îtm cari from the face mask Îfm cari , using trimap mask generation process [56]. It involves applying a series of erosion and expansion operations to the foreground region of the face mask, using specific parameter values tailored to our method. With the face caricature image Îcari and the trimap mask Îtm cari in hand, we apply an image matting method [57]. This technique effectively addresses the blurring issue by enhancing the sharpness and clarity of the face contours in the caricature image. The image matting process utilizes both the caricature image and the trimap mask to generate a refined caricature image Îim cari . By employing this image matting stage, we can improve the overall visual quality of the caricature image by reducing blurring effects around the face contours, resulting in a more polished and realistic appearance.\nThe blurring on the face contour is removed by performing alpha blending. The image alpha blending technique requires a foreground, a background, and an alpha mask. We set the Îcari as foreground, I real as background, and Îim cari as alpha mask. The alpha blending can be performed using the following equation:\nI p = α p F p + (1 -α p )B p ,(2)\nwhere α p denotes the matte and within the range value of [0,1], and F p and B p correspond to the pixel values for the foreground and background, respectively. When α p = 1 or 0, it signifies that the pixel at that position unequivocally belongs to the foreground or background, respectively. Otherwise, such a pixel is termed a partial or mixed pixel. Following the ultimate blending procedure, we produce our caricatured face denoted as Îf cari . Figure 5 illustrates eliminating blurring after the matting process.\n2) Face Caricature Dataset with Occulsion: For generating our caricature dataset, we use various images, including faces with eyeglass occlusions. We address the caricature generation for face occlusion caused by eyeglasses to enrich our caricature dataset. The faces with eyeglasses can be categorized into two: (i) Reading glasses and (ii) sunglasses. We organized all the transparent glasses as reading glasses and the remaining as sunglasses.\nFace Caricature with Reading Glasses: The whole pipeline for reading glass caricature generation is shown in Figure 6. We can divide the reading glass caricature generation into five stages: (i) glass removal, (ii) correction, (iii) caricature generation, (iv) putting back glasses, and (iv) lighting correction.\nThe first stage is glass removal, where we remove both the reading glass and the cast shadow from the face image. We employ two networks, Shadow Mask Network and Glass Mask Network [58], for the glass removal. Given an input image I real , we generate two masks: a glass mask M g real using the glass mask network and a shadow mask M s real using the shadow mask network. We use item removal from [58] to remove both the eyeglass and the shadow from the face image and generate a new face image I ng real with no eyeglass and cast shadow.\nThe second stage is the correction stage, where information on the image that was lost during the item removal stage is retrieved. We use an image restoration method [59] to restore the degraded image and restore lost details. 
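The compositing of Eq. (2) is reused throughout the pipeline — for the matte-based contour blend above, and for the glasses put-back and lighting correction described next — so a tiny NumPy version is worth spelling out. Inputs are assumed to be uint8 images of the same size with an 8-bit or [0, 1] matte.

```python
import numpy as np

def alpha_blend(foreground, background, alpha):
    """Per-pixel compositing from Eq. (2): I_p = alpha_p * F_p + (1 - alpha_p) * B_p."""
    if alpha.dtype == np.uint8:
        alpha = alpha.astype(np.float32) / 255.0      # map the matte to [0, 1]
    if alpha.ndim == 2:
        alpha = alpha[..., None]                      # broadcast one channel over RGB
    blended = alpha * foreground.astype(np.float32) + (1.0 - alpha) * background.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)
```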
The corrected image Îng real restores both quality and fidelity and shows robustness to the degraded parts.\nThe third stage is the caricature generation process. The face image now has no reading glasses, so it performs the caricature generation method discussed in the previous section and generates appropriate caricature Îng cari . We put the glasses from I real into Îng cari in the fourth stage. We first generate a glass image I mg real with only glasses using a bitwise AND mask operation using I real and M g real . We perform the alpha blending presented in Equation 2 to put back the reading glass from I real in our caricature face. We set the I mg real as foreground, Îng cari as background, and M g real as alpha mask. The generated caricature face with reading glasses is represented as Îg cari . The final step is the lighting correction. We must ensure the light illumination is preserved from I real during this whole caricature process. We generate a light mask M l real from I real by keeping a specific threshold that only the illuminated area is highlighted. We perform the alpha blending technique in Equation 2 to retrieve the lost light illumination from I real . We set I real as foreground, Îg cari as background, and M l real as alpha mask. Finally, we create our reading glass caricature image, I f cari . Face Caricature with Sunglasses: We consider the face with sunglasses where it can't be see-through. The caricature generation of faces with sunglasses is a simple, straightforward process where we exaggerate face patches only for the mouth region. After the landmark detection in Figure 2, only the mouth landmark has been enlarged, represented in Figure 4. The patch blending presented in Equation 1 is performed only for the mouth patch and generates a caricature face with sunglasses. The remaining steps are same as in Section III-B. 3) Face Caricature Dataset: We have successfully generated a diverse collection of caricature face images encompassing various attributes such as gender, race, age, expression, pose, illumination, etc. We use the FFHQ [28] and CelebA-HQ [52] datasets for our caricature creation. Some examples of our caricature dataset are illustrated in Figure 7. We employ an encoder-based method with our pretrained StyleGAN, G. The encoder E is trained using real and new caricature faces. We perform iterative steps to enhance the quality of our generated images and make them more faithful to the input faces. We perform a background blending process to get the background information the projected image cannot generate from the input image." }, { "figure_ref": [ "fig_7", "fig_8" ], "heading": "4) Style Generator:", "publication_ref": [ "b27", "b28", "b59" ], "table_ref": [], "text": "The final step for the caricature generation is the training of StyleGAN [28], [29] architecture. The StyleGAN architecture comprises two networks: a mapping network and a synthesis network. The mapping network, denoted as f , is an 8-layer Multi-Layer Perceptron (MLP) responsible for mapping a given latent code z from the set Z to generate w in the set W . It can be represented as f : Z → W . The synthesis network, g, consists of 18 convolutional layers, with each layer being controlled via adaptive instance normalization (AdaIN) [60]. AdaIN incorporates the learned affine transformation \"A\" derived from the latent code w at each layer. 
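A minimal sketch of such an AdaIN layer is given below. Modeling the affine map "A" as a single linear layer per style input is an assumption for illustration; the full StyleGAN synthesis network adds further details (per-layer noise, upsampling, weight demodulation in later versions) that are omitted here.

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Adaptive instance normalization: normalize each channel of the feature map,
    then re-modulate it with a style-dependent scale and bias produced by the affine map 'A'."""
    def __init__(self, w_dim, channels):
        super().__init__()
        self.affine = nn.Linear(w_dim, 2 * channels)   # the learned transform "A"

    def forward(self, x, w):
        scale, bias = self.affine(w).chunk(2, dim=1)   # per-channel style parameters
        mean = x.mean(dim=(2, 3), keepdim=True)
        std = x.std(dim=(2, 3), keepdim=True) + 1e-8
        x = (x - mean) / std                           # instance normalization
        return scale[:, :, None, None] * x + bias[:, :, None, None]
```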
Additionally, a scalable Gaussian noise input \"B\" is introduced into each layer of the synthesis network g.\nThe architectural design ensures that each style influences only a single convolution. Random latent codes serve as a means to control the styles of the generated images. The StyleGAN training process exclusively used the real and newly created face caricature images. Following the training of Style-GAN, the generator can produce real and caricature images with diverse facial attributes, including variations in skin tone, hair color, shapes, and more. It's crucial to underscore that our caricature generation generator stands out from previous approaches in a notable manner in terms of realism and usability. After training the StyleGAN, we generate random samples from the latent space to visualize how our caricature performs. The results are high quality and realistic, as shown in Figure 8. We can also generate different styles for different identities, and some examples are shown in Figure 9." }, { "figure_ref": [ "fig_9", "fig_10" ], "heading": "C. Face Caricature Projection", "publication_ref": [ "b60", "b61", "b62", "b63", "b64", "b65", "b66", "b67", "b68", "b69", "b24", "b70" ], "table_ref": [], "text": "Our caricature projection technique employs an encoder trained with two different datasets. The encoder is trained using real and caricature images with our pretrained StyleGAN from Section III-B4. The training framework of the encoder is shown in Figure 10. For training the encoder, denoted as E, with our pretrained StyleGAN generator, represented as G, given an input source image I real , we first create a corresponding caricature from the real input face, I cari , following the process in Section III-B1. The newly created caricature faces with the real faces are used in the training of E with the primary objective of\nI f cari = G(E(I cari )), such that I f cari ≈ I cari and I f real = G(E(I real )), such that I f real ≈ I real .\nTo enhance the quality of our generated images and make them more faithful to the input, we perform two forward passes through the encoder, E, and generator G.\nOur goal is to efficiently and effectively produce high-quality real and caricatured faces, all while preserving the desired characteristics and visual resemblance to the input images. Follows a methodology similar to the PSP [61] and e4e [62] approaches. We utilize a Feature Pyramid Network [63] built upon a ResNet [64] backbone, extracting style features from three intermediate levels. Our pretrained StyleGAN is kept fixed during the caricature projection process. Much like the PSP network, we employ \"Mapper\", a small mapping network, which is trained to extract learned styles from the corresponding feature maps for each of the 14 target styles (for 256 x 256 images). This small mapping network is fully convolutional, downsampling the feature map to generate the corresponding 512-dimensional style input. It achieves this through a series of 2-strided convolutions followed by LeakyReLU activations. Specifically, the small feature map from the Mapper generates styles W 0 -W 2 , the medium feature map generates styles W 3 -W 6 , and the large feature map generates styles W 7 -W 17 . We incorporate Restyle's [65] iterative refinement method to enhance the reconstruction quality with each iterative step. We perform a single training iteration per batch with our model trained. The iterative outcome for I real is I iter real and I cari is I iter cari . 
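The encoder-plus-frozen-generator projection with iterative refinement described above can be summarized schematically as follows. Starting from an average latent `w_avg` and feeding the input image concatenated with the current reconstruction back into the encoder follows the usual ReStyle recipe; this initialization and these interfaces are assumptions for the sketch, not the exact trained system.

```python
import torch

@torch.no_grad()
def iterative_project(encoder, generator, image, w_avg, n_iters=2):
    """Project an image into StyleGAN's W space with ReStyle-style iterative refinement:
    each pass predicts a residual latent given the input and the current reconstruction."""
    w = w_avg.clone()                                       # start from the average latent
    recon = generator(w)                                    # and its corresponding image
    for _ in range(n_iters):
        delta = encoder(torch.cat([image, recon], dim=1))   # condition on input + current output
        w = w + delta                                       # refine the latent code
        recon = generator(w)
    return w, recon
```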
To further transfer the background from I real , a common approach involves applying a blending technique, as depicted in Equation 2, as a post-processing step that swaps the inner face of I f real and I f cari with I real . We execute the image matting technique, followed by the alpha blending process The disentangled latent spaces also facilitate smooth and predictable transitions between real and caricature faces. To perform the incremental facial exaggeration process, we employ our trained encoders, E, and our pretrained StyleGAN. The overview of our incremental facial exaggeration is shown in Figure 11. Given an input image I real , we create the corresponding caricature of the real face, I cari . We fed the I real and I cari to the encoder E. We Project two latent codes in the StyleGAN latent space W , one for I real , represented as E(I real ) = z real where z real is the real latent code, and another for I cari , represented as E(I cari ) = z cari where z cari is the caricature latent code. We perform a latent walk from z real to z cari with the objective of\nI f cari = G(z real + n cari ) if n cari = 1\n, where n cari is the incremental latent steps of z cari direction. We can perform a uniform iteration represented as I e cari . We visualize more results in Section V-C.\n2) Losses: To achieve our objective, we employ a variety of losses during the training of our encoder. We incorporate the non-saturating GAN loss [66] along with R 1 regularization [67] as the adversarial loss, proposed in [68].\nThe purpose of regularization is to encourage the encoder to produce latent-style vectors that are closer to the average latent vector. The formulation of the regularization loss is as follows:\nL reg =∥ G(E(x)) -w ∥ 2 ,(3)\nwhere w represents the average style vector obtained from our pre-trained generator.\nWe employ the pixel-wise L2 loss,\nL 2 =∥ x -G(E(x)) ∥ 2 ,(4)\nIn order to preserve the perceptual similarity, we use LPIPS [69] loss. The image is preserved better [70] as compared to the traditional approach [25].\nL LP IP S =∥ F (x) -F (G(E(x))) ∥ 2 ,(5)\nTo generate a caricature face that retains similar facial characteristics, we employ identity loss. This involves integrating a dedicated recognition loss, which assesses the cosine similarity between the resulting image and its source.\nL ID (x) = 1 -⟨A(x), A(G(E(x)))⟩ ,(6)\nwhere A represents the pretrained ArcFace [71] network. Collectively, our overall loss function is presented as\nL(x) = λ 1 L 2 (x) + λ 2 L LP IP S (x) + λ 3 L ID (x) + λ 4 L reg (x),(7)\nwhere λ 1 , λ 2 , λ 3 , and λ 4 are constants defining the loss weights." }, { "figure_ref": [], "heading": "IV. IMPLEMENTATION", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Dataset", "publication_ref": [ "b27", "b51" ], "table_ref": [], "text": "To showcase the efficacy of our approach, we produce caricature datasets and conducted experiments using a diverse dataset that encompasses two widely recognized datasets: FFHQ [28] and CelebA-HQ [52]. The FFHQ dataset comprises 70,000 high-quality facial images, which we segmented into three groups based on the presence of eyeglasses: no glasses, reading glasses, and sunglasses. Specifically, we assigned approximately 56,500 images to the no-glasses group, 10,600 images to the reading glasses group, and 2,900 images to the sunglasses group. 
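Returning briefly to the incremental exaggeration of the projection stage, the latent walk from z_real toward z_cari can be made concrete with a short sketch. Linear interpolation between the two projected codes is one plausible reading of that walk; the step parameterization and the encoder/generator interfaces are assumptions here.

```python
import torch

@torch.no_grad()
def incremental_exaggeration(encoder, generator, real_img, cari_img, steps=5):
    """Walk from the projected real face toward its caricature projection in W space,
    yielding progressively exaggerated eyes/mouth while other attributes stay fixed."""
    z_real = encoder(real_img)                     # E(I_real)
    z_cari = encoder(cari_img)                     # E(I_cari)
    direction = z_cari - z_real                    # caricature direction in latent space
    frames = []
    for k in range(steps + 1):
        alpha = k / steps                          # 0 -> real face, 1 -> full caricature
        frames.append(generator(z_real + alpha * direction))
    return frames
```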
Similarly, the CelebA-HQ dataset contains 30,000 high-quality facial images, and we also categorized these into three groups: no glasses, reading glasses, and sunglasses. Here, we allocated approximately 28,500 images to the no-glasses group, 1,000 images to the reading glasses group, and 500 images to the sunglasses group.\nWe use our new caricature and real faces from the FFHQ dataset for training our StyleGAN model. The FFHQ dataset's considerable size and high-quality image content render it suitable for effectively training a robust and representative caricature generator. For the encoder E training, we use the FFHQ real and our new caricature dataset as the training set and the CelebA-HQ real and our new caricature dataset as the testing set. By using these diverse datasets and splitting them into different groups based on eyeglass presence, we aimed to assess the ability of our approach to handle various scenarios and generate accurate caricatured faces across different styles and eyeglasses." }, { "figure_ref": [], "heading": "B. Implementation Details", "publication_ref": [ "b27", "b45", "b70", "b60" ], "table_ref": [], "text": "We trained a StyleGAN model [28], [46] using real and our caricature datasets. The input and output image resolution for our caricature generation task was set to 256 x 256 pixels since our hardware resources are limited. The training process for the StyleGAN model was conducted on four Nvidia Titan Xp GPUs, each with 12 GB of RAM. It took approximately eight days to train the model using a batch size 16. For the encoder E training, we utilized the ResNet-IRSE50 architecture from Arcface [71], a pretrained model commonly used for facial recognition tasks. In our E training process, we set the values of the constants λ as follows: λ 1 = 1, λ 2 = 0.8, λ 3 = 0.5, and λ 4 = 0.005. These constants were used to control and balance different aspects of the training process. We set other training details the same as [61]." }, { "figure_ref": [], "heading": "V. EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_11", "fig_12" ], "heading": "A. Experiments on Caricature generation", "publication_ref": [], "table_ref": [], "text": "The process of patch blending is of utmost importance in creating the face caricature dataset. The quality of the StyleGAN-generated images greatly depends on the seamless blending of these patches. However, when dealing with extreme head poses, the blending process can sometimes lead to blurriness. To address this blurring issue, we employ a face mask that eliminates all blurriness, resulting in a more natural-looking image. Additionally, we introduce a face mask in conjunction with a matting mask, and we compare the outcomes, as demonstrated in Figure 12. The image matting mask successfully eliminates blurriness along the facial contours, yielding a more natural appearance than just the face mask. It's worth noting that the face segmentation mask tends to produce unnatural edges, which can adversely impact the final result.\nWhile removing reading glasses, the facial details concealed behind the glasses are inevitably lost. To address this issue, we employ a correction technique to recover the lost information. We illustrate the various stages of the reading glass removal process in Figure 13. After the eyeglass removal, a significant portion of the information in the eye region is degraded, which can adversely impact caricature generation. 
However, the correction method not only restores both quality and fidelity but also exhibits remarkable resilience in handling the deteriorated portions, ultimately enhancing the creation of a superior caricature dataset. " }, { "figure_ref": [ "fig_13", "fig_14" ], "heading": "B. Experiments on Caricature Projection", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "After training our caricature projection encoder on the real and caricature datasets, we conducted a series of experiments to assess the efficacy of our projection method. We evaluate the encoder results by comparing the input and output faces for the real and caricature images, as illustrated in Figure 14. The outcome of projecting real and caricature images demonstrates the effectiveness of our approach, as the input and the resulting projected images exhibit minimal differences. Our real and caricature projections consistently yield attractive and aesthetically pleasing outcomes. Table I shows the evaluation of our projection results.\nIn our projection method, each iterative step enhances the image quality, as illustrated in Figure 15. The iterative process enhanced the eyeglasses information, as demonstrated in rows 1, 2, 3, and 6. It is noteworthy that there is a substantial improvement in head pose and facial expression, as observed in row 4. There is also an improvement in the lighting and skin color resemblance with the input image, as observed in row 5." }, { "figure_ref": [ "fig_15", "fig_16" ], "heading": "C. Experiments on Incremental Projection", "publication_ref": [], "table_ref": [], "text": "We perform an incremental caricature projection method in which we gradually exaggerate facial features, as demonstrated in Figure 16. The visual result shows that the exaggeration affects the eyes and mouth, leaving all other facial attributes unchanged. The exaggeration steps are crucial in our method because the extent to which individuals prefer facial exaggeration varies. This step provides flexibility and robustness in our caricature projection process.\nFurthermore, we introduce a style-mixing element into the exaggeration process. During the exaggeration process, we can select and incorporate the desired style, as shown in Figure 17. The desired style can be blended by mixing the style codes within the finer layers of StyleGAN." }, { "figure_ref": [ "fig_17", "fig_18", "fig_19" ], "heading": "D. Comparison to state-of-the-art method", "publication_ref": [ "b71", "b61", "b64", "b2", "b50", "b49", "b51", "b70" ], "table_ref": [ "tab_1" ], "text": "We evaluate the performance of our caricature projection method by comparing it to state-of-the-art techniques. This comparison encompasses all the encoder-based methods to assess the efficacy of our approach comprehensively. In our qualitative evaluation, as depicted in Figure 18, we conduct experiments utilizing various approaches. We explore three encoder-based methods: Hyperstyle [72], e4e [62] and Restyle [65]. These encoders are trained using a pretrained StyleGAN, which is trained using only our caricature images. Hyperstyle tends to generate caricatures that closely resemble the original image in terms of structure, as it tunes the StyleGAN weights to retrieve the original image rather than caricature faces. 
Conversely, the e4e encoder yields superior and more convincing caricature results compared to Hyperstyle. The Restyle results resemble the real images more than the caricature faces. Finally, our method outperforms all these techniques, particularly in facial exaggeration and in expression and head pose, which more closely resemble those of the real image. Furthermore, our approach excels at handling occluded faces and produces caricatures that more closely resemble the original eyeglasses. Overall, our approach demonstrates superior results compared to existing techniques, offering a more faithful representation of the original image's characteristics while achieving high-quality caricature results.\nMoreover, we conducted a qualitative analysis of various state-of-the-art caricature methods, comparing our outcomes with those of WarpGAN [3], StyleCariGAN [51], and Dual-StyleGAN [50], as shown in Figure 19. All results were generated using the pretrained models provided by the respective authors using CelebA-HQ [52]. WarpGAN struggled to produce caricatures with proper facial structures and produced weakly stylized images. StyleCariGAN had difficulty preserving the original image's identity and relied heavily on the chosen style. DualStyleGAN yielded convincing results but was limited in retaining the original attributes. In contrast, our caricature results excelled in quality, maintaining both the style and the facial attributes of the original image. The exaggeration achieved in our projected caricature faces holds promise for practical applications in real-world scenarios. We also performed a quantitative evaluation to assess the degree of resemblance between real and caricature images, as shown in Table II. The identity similarity calculation uses the ArcFace [71] method. The results show that our method achieves the most favorable score, effectively exaggerating facial features in line with our primary goal of real-world applicability.\nWe showcase the outcomes of our approach using various facial images captured in diverse conditions, as shown in Figure 20. Our methodology consistently delivers outstanding facial caricature results marked by realism and the retention of the original facial attributes. We visualize the generation of different style types that can be incorporated with our caricature face. Moreover, our method generates faces with occlusions, such as reading glasses and sunglasses. Furthermore, it displays versatility by producing caricatured faces across different age groups and adapting to various artistic styles." }, { "figure_ref": [ "fig_20" ], "heading": "VI. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we generate realistic facial caricatures featuring exaggerated features suitable for real-world applications. Our methodology is carefully crafted to emphasize exaggerating the eyes and mouth while preserving the original facial contours. We have introduced an innovative caricature generation method that comprises two key stages: face caricature generation and face caricature projection. In the face caricature generation phase, we construct caricature datasets using real images. Subsequently, we train a StyleGAN to synthesize various styles of real and caricatured faces. The face caricature projection step takes input images and transforms them into corresponding real and caricatured faces. 
Our caricature projection process excels at producing highly realistic results while faithfully retaining the original facial attributes and identity. We also perform an incremental caricature projection method in which we gradually exaggerate facial features. We emphasize the importance of the exaggeration steps in our technique because different people have varying preferences regarding facial exaggeration. This process gives our caricature projection process flexibility and resilience. Our caricatures stand out in their superior realism and quality compared to previous methods. They offer visually convincing results suitable for real-world applications. Our approach introduces an innovative method for crafting exaggerated facial representations while maintaining a realistic style. Our future work includes exploring face de-identification using our caricature-projected faces to conceal the important facial features that can be used to identify the individual. Our core concept revolves around using caricature faces to protect individuals' privacy. We illustrate one example of protecting the privacy of an individual using our caricature faces in Figure 21." }, { "figure_ref": [], "heading": "A. Limitations", "publication_ref": [], "table_ref": [], "text": "It's crucial to explore the adaptability of our method in various applications. However, it's essential to acknowledge that our approach is tailored to specific applications and does have inherent limitations when applied in broader contexts. Notably, we emphasize exaggerating features in the eyes and mouth region, limiting the range of caricatures we can generate. Additionally, our method relies on real images, restricting its stylistic diversity and making it less suitable for producing different out-of-domain caricatures. Our method is entirely automated, and future improvements could involve enhancing controllability through additional caricature examples or user interaction. Nonetheless, our approach holds great promise for specific applications, and we are eager to refine and expand its capabilities in the future." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "ACKNOWLEDGMENT This work was supported by the Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2019-0-00203, Development of 5G-based Predictive Visual Security Technology for Preemptive Threat Response) and also by the MSIT (Ministry of Science and ICT), Korea, under the Innovative Human Resource Development for Local Intellectualization support program (IITP-2022-RS-2022-00156389) supervised by the IITP (Institute for Information & communications Technology Planning & Evaluation)." } ]
Caricature is an exaggerated form of artistic portraiture that accentuates unique yet subtle characteristics of human faces. Recently, advancements in deep end-to-end techniques have yielded encouraging outcomes in capturing both style and elevated exaggerations in creating face caricatures. Most of these approaches, however, tend to produce cartoon-like results that are less practical for real-world applications. In this study, we propose a high-quality, unpaired face caricature method that is appropriate for use in the real world and uses computer vision techniques and GAN models. We attain the exaggeration of facial features and the stylization of appearance through a two-step process: face caricature generation and face caricature projection. The face caricature generation step creates new caricature face datasets from real images and trains a generative model using the real and newly created caricature datasets. The face caricature projection step employs an encoder, trained with real and caricature faces, together with the pretrained generator to project real and caricature faces. We perform an incremental facial exaggeration from the real image to the caricature faces using the encoder and the generator's latent space. Our projection preserves the facial identity, attributes, and expressions of the input image. It also accounts for facial occlusions, such as reading glasses or sunglasses, to enhance the robustness of our model. Furthermore, we conduct a comprehensive comparison of our approach with various state-of-the-art face caricature methods, highlighting our method's distinctiveness and exceptional realism.
High-Quality Face Caricature via Style Translation
[ { "figure_caption": "Fig. 1 .1Fig. 1. We present the overview of our proposed method. Our method consists of two key steps: Face Caricature Generation and Face Caricature Projection. In the first step, Face Caricature Generation, we create a caricature dataset from real faces. A generative model is trained with real and caricature faces, which can produce face caricatures and real images with different styles. The second step, Face Caricature Projection, involves training an encoder using the pretrained StyleGAN. (a) The encoder training process uses real and newly created caricature faces. (b) The first row shows the incremental facial exaggeration from real to caricature faces, and the second row shows the style change with facial exaggeration.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig.2. The steps for our face caricature data creation. We perform multiple image operations techniques to achieve our face caricature dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Face landmarks representation: The landmark position with the 68 indexes representation on the FFHQ dataset.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Face landmark enlargement for patch generation. (b-d) represents the eye landmark enlargement. (e-g) represents the mouth landmark enlargement. The landmarks produced by the detector are represented in green, and the enlarged landmarks are represented in pink.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. (a) The blurring of face patches after the blending process due to extreme head pose. (b) The result of removal of blurring after the image matting.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig.6. The workflow of creating the reading glass caricature face dataset. We perform additional steps to overcome the face occlusion challenges for reading glass caricature creation.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. Examples of our dataset produced in the face caricature creation.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. Random sample of caricature faces generated by the StyleGAN.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. After StyleGAN training, we represent different styles for a specific face. Each row represents one identity.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 10 .10Fig.10. The encoder training for our Face caricature projection. We employ an encoder-based method with our pretrained StyleGAN, G. The encoder E is trained using real and new caricature faces. We perform iterative steps to enhance the quality of our generated images and make them more faithful to the input faces. We perform a background blending process to get the background information the projected image cannot generate from the input image.", "figure_data": "", "figure_id": "fig_9", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Fig. 
11 .11Fig. 11. After training the encoder, we perform incremental caricature projection by performing latent walking from the real image toward the direction of the corresponding caricature image.", "figure_data": "", "figure_id": "fig_10", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Fig. 12 .12Fig. 12. Experiment results with various masks during the blending process. (a) The real input face, (b) our face caricature dataset before blurring removal, (c) the facial mask obtained from image segmentation, (d) the blending of the original image with the facial mask, (e) the facial mask acquired through image matting, (f) the blending of the original image with the image matting mask, (g) blurring effect to the eyes and mouth region, (h) display of unnatural blending results, and (i) demonstrating the appropriate and natural blending of the image matting face mask with the original face.", "figure_data": "", "figure_id": "fig_11", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Fig. 13 .13Fig. 13. Results of various stages in the process of generating a reading glass caricature dataset: (a) The real input face, (b) the result after removing reading glasses and cast shadows, (c) the result after the face correction technique, (d) after applying our face caricature, and (e) the reading glass caricatured face.", "figure_data": "", "figure_id": "fig_12", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Fig. 14 .14Fig. 14. The projection result of the encoder trained with real and caricature images with background blending. (a) The real input face, (b) The real projected face, (c) The caricature input face, and (d) The caricature projected face.", "figure_data": "", "figure_id": "fig_13", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Fig. 15 .15Fig. 15. Result of our iterative steps during our caricature projection. (a) The real input face, (b) the initial projected face after caricature creation from the corresponding real face, (c) the final iterative step, and (d) our projected caricature face (with background blending).", "figure_data": "", "figure_id": "fig_14", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Fig. 16 .16Fig. 16. Result of our incremental facial exaggeration steps from the real face to the corresponding caricature face. (a) The real input face, and (b) the final caricature face.", "figure_data": "", "figure_id": "fig_15", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Fig. 17 .17Fig. 17. Result of our incremental facial exaggeration steps with desirable style change from the real face to the corresponding caricature face. (a) The real input face.", "figure_data": "", "figure_id": "fig_16", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Fig. 18 .18Fig. 18. Qualitative comparison results of different face projection techniques using the StyleGAN which is trained on our caricature dataset. (a) The real input face, (b) hyperstyle encoder, (c) e4e encoder, (d) restyle encoder and (e) our projected caricature face.", "figure_data": "", "figure_id": "fig_17", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Fig. 19 .19Fig. 19. Qualitative comparison results of different face caricature methods. 
(a) The input real face, (b) WarpGAN, (c) StyleCariGAN, (d) DualStyleGAN, and (e) our projected caricature face.", "figure_data": "", "figure_id": "fig_18", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Fig. 20 .20Fig. 20. Visual results obtained using our method. (a) The input real face, (b) our projected caricature face, and mixed style caricature faces.", "figure_data": "", "figure_id": "fig_19", "figure_label": "20", "figure_type": "figure" }, { "figure_caption": "Fig. 21 .21Fig. 21. Visualization for protecting the privacy of an individual using our caricature faces in a full image. (a) The input real face, and (b) our projected caricature face.", "figure_data": "", "figure_id": "fig_20", "figure_label": "21", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "COMPARISON BETWEEN THE REAL AND CARICATURE PROJECTIONSRESULTS.Projection↓ LPIPS↓ L2↑ SSIMReal Face Projection0.0490.0080.91Caricature Face Projection0.0560.0090.90", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "RESULTS FOR DIFFERENT CARICATURE CREATION METHODS.", "figure_data": "Method↑Identity ↓FID↓LPIPS↓L2↑SSIMWarpGAN [3]0.2674.600.480.650.25StyleCariGAN [51]0.1152.350.470.410.32DualStyleGAN [50]0.08104.510.420.510.33Our0.3738.080.060.010.85", "figure_id": "tab_1", "figure_label": "II", "figure_type": "table" } ]
Lamyanba Laishram; Muhammad Shaheryar; Jong Taek Lee; Soon Ki
[ { "authors": "S B Sadimon; M S Sunar; D Mohamad; H Haron", "journal": "IEEE", "ref_id": "b0", "title": "Computer generated caricature: A survey", "year": "2010" }, { "authors": "K Cao; J Liao; L Yuan", "journal": "ACM Trans. Graph", "ref_id": "b1", "title": "Carigans: Unpaired photo-to-caricature translation", "year": "2018" }, { "authors": "Y Shi; D Deb; A K Jain", "journal": "", "ref_id": "b2", "title": "Warpgan: Automatic caricature generation", "year": "2019" }, { "authors": "E Akleman; J Palmer; R Logan", "journal": "Citeseer", "ref_id": "b3", "title": "Making extreme caricatures with a new interactive 2d deformation technique with simplicial complexes", "year": "2000" }, { "authors": "J Yaniv; Y Newman; A Shamir", "journal": "ACM Transactions on graphics (TOG)", "ref_id": "b4", "title": "The face of art: landmark detection and geometric style in portraits", "year": "2019" }, { "authors": "H.-C Shin; J H Park; S.-D Kim", "journal": "IEEE Transactions on Multimedia", "ref_id": "b5", "title": "Combination of warping robust elastic graph matching and kernel-based projection discriminant analysis for face recognition", "year": "2007" }, { "authors": "L Liang; H Chen; Y.-Q Xu; H.-Y Shum", "journal": "", "ref_id": "b6", "title": "Example-based caricature generation with exaggeration", "year": "2002" }, { "authors": "S E Brennan", "journal": "Leonardo", "ref_id": "b7", "title": "Caricature generator: The dynamic exaggeration of faces by computer", "year": "1985" }, { "authors": "Z Mo; J P Lewis; U Neumann", "journal": "", "ref_id": "b8", "title": "Improved automatic caricature by feature normalization and exaggeration", "year": "2004" }, { "authors": "X Han; K Hou; D Du; Y Qiu; S Cui; K Zhou; Y Yu", "journal": "IEEE transactions on visualization and computer graphics", "ref_id": "b9", "title": "Caricatureshop: Personalized and photorealistic caricature sketching", "year": "2018" }, { "authors": "Q Wu; J Zhang; Y.-K Lai; J Zheng; J Cai", "journal": "", "ref_id": "b10", "title": "Alive caricature from 2d to 3d", "year": "2018" }, { "authors": "J Gong; Y Hold-Geoffroy; J Lu", "journal": "", "ref_id": "b11", "title": "Autotoon: Automatic geometric warping for face cartoon generation", "year": "2020" }, { "authors": "K Cao; J Liao; L Yuan", "journal": "", "ref_id": "b12", "title": "Carigans: Unpaired photo-to-caricature translation", "year": "2018" }, { "authors": "C Li; M Wand", "journal": "", "ref_id": "b13", "title": "Combining markov random fields and convolutional neural networks for image synthesis", "year": "2016" }, { "authors": "A Selim; M Elgharib; L Doyle", "journal": "ACM Transactions on Graphics (ToG)", "ref_id": "b14", "title": "Painting style transfer for head portraits using convolutional neural networks", "year": "2016" }, { "authors": "J Liao; Y Yao; L Yuan; G Hua; S B Kang", "journal": "", "ref_id": "b15", "title": "Visual attribute transfer through deep image analogy", "year": "2017" }, { "authors": "J Kim; M Kim; H Kang; K Lee", "journal": "", "ref_id": "b16", "title": "U-gat-it: Unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation", "year": "2019" }, { "authors": "G E Hinton; R R Salakhutdinov", "journal": "science", "ref_id": "b17", "title": "Reducing the dimensionality of data with neural networks", "year": "2006" }, { "authors": "X Huang; M.-Y Liu; S Belongie; J Kautz", "journal": "", "ref_id": "b18", "title": "Multimodal unsupervised image-to-image translation", "year": "2018" }, { "authors": "J Huo; W 
Li; Y Shi; Y Gao; H Yin", "journal": "", "ref_id": "b19", "title": "Webcaricature: a benchmark for caricature recognition", "year": "2017" }, { "authors": "Y Choi; M Choi; M Kim; J.-W Ha; S Kim; J Choo", "journal": "", "ref_id": "b20", "title": "Stargan: Unified generative adversarial networks for multi-domain image-toimage translation", "year": "2018" }, { "authors": "J.-Y Zhu; T Park; P Isola; A A Efros", "journal": "", "ref_id": "b21", "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "year": "2017" }, { "authors": "R Wu; X Tao; X Gu; X Shen; J Jia", "journal": "", "ref_id": "b22", "title": "Attribute-driven spontaneous motion in unpaired image translation", "year": "2019" }, { "authors": "M.-Y Liu; T Breuel; J Kautz", "journal": "Advances in neural information processing systems", "ref_id": "b23", "title": "Unsupervised image-to-image translation networks", "year": "2017" }, { "authors": "J Johnson; A Alahi; L Fei-Fei", "journal": "Springer", "ref_id": "b24", "title": "Perceptual losses for real-time style transfer and super-resolution", "year": "2016" }, { "authors": "J Liao; Y Yao; L Yuan; G Hua; S B Kang", "journal": "ACM Trans. Graph", "ref_id": "b25", "title": "Visual attribute transfer through deep image analogy", "year": "2017-07" }, { "authors": "I Goodfellow; J Pouget-Abadie; M Mirza; B Xu; D Warde-Farley; S Ozair; A Courville; Y Bengio", "journal": "Communications of the ACM", "ref_id": "b26", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "T Karras; S Laine; T Aila", "journal": "", "ref_id": "b27", "title": "A style-based generator architecture for generative adversarial networks", "year": "2019" }, { "authors": "T Karras; S Laine; M Aittala; J Hellsten; J Lehtinen; T Aila", "journal": "", "ref_id": "b28", "title": "Analyzing and improving the image quality of stylegan", "year": "2020-06" }, { "authors": "L Laishram; M Shaheryar; J T Lee; S K Jung", "journal": "Springer", "ref_id": "b29", "title": "A stylebased caricature generator", "year": "2023" }, { "authors": "B Gooch; E Reinhard; A Gooch", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b30", "title": "Human facial illustrations: Creation and psychophysical evaluation", "year": "2004" }, { "authors": "P.-Y C W ; H Liao; T.-Y Li", "journal": "", "ref_id": "b31", "title": "Automatic caricature generation by analyzing facial features", "year": "2004" }, { "authors": "J Liu; Y Chen; W Gao", "journal": "", "ref_id": "b32", "title": "Mapping learning in eigenspace for harmonious caricature generation", "year": "2006" }, { "authors": "Y Zhang; W Dong; C Ma; X Mei; K Li; F Huang; B.-G Hu; O Deussen", "journal": "IEEE Transactions on image processing", "ref_id": "b33", "title": "Data-driven synthesis of cartoon faces using different styles", "year": "2016" }, { "authors": "Z Zheng; C Wang; Z Yu; N Wang; H Zheng; B Zheng", "journal": "Neurocomputing", "ref_id": "b34", "title": "Unpaired photo-to-caricature translation on faces in the wild", "year": "2019" }, { "authors": "W Li; W Xiong; H Liao; J Huo; Y Gao; J Luo", "journal": "Neural Networks", "ref_id": "b35", "title": "Carigan: Caricature generation through weakly paired adversarial learning", "year": "2020" }, { "authors": "K Zhang; W Luo; L Ma; W Ren; H Li", "journal": "IEEE Transactions on Multimedia", "ref_id": "b36", "title": "Disentangled feature networks for facial portrait and caricature generation", "year": "2021" }, { "authors": "R Abdal; H.-Y Lee; P Zhu; M Chai; A Siarohin; P Wonka; S 
Tulyakov", "journal": "", "ref_id": "b37", "title": "3davatargan: Bridging domains for personalized editable avatars", "year": "2023" }, { "authors": "L A Gatys; A S Ecker; M Bethge; A Hertzmann; E Shechtman", "journal": "", "ref_id": "b38", "title": "Controlling perceptual factors in neural style transfer", "year": "2017" }, { "authors": "L Gatys; A S Ecker; M Bethge", "journal": "Advances in neural information processing systems", "ref_id": "b39", "title": "Texture synthesis using convolutional neural networks", "year": "2015" }, { "authors": "K Simonyan; A Zisserman", "journal": "", "ref_id": "b40", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2014" }, { "authors": "L A Gatys; A S Ecker; M Bethge", "journal": "", "ref_id": "b41", "title": "Image style transfer using convolutional neural networks", "year": "2016" }, { "authors": "S Reed; Z Akata; X Yan; L Logeswaran; B Schiele; H Lee", "journal": "PMLR", "ref_id": "b42", "title": "Generative adversarial text to image synthesis", "year": "2016" }, { "authors": "R Yeh; C Chen; T Y Lim; M Hasegawa-Johnson; M N Do", "journal": "", "ref_id": "b43", "title": "Semantic image inpainting with perceptual and contextual losses", "year": "2016" }, { "authors": "Y Choi; Y Uh; J Yoo; J.-W Ha", "journal": "", "ref_id": "b44", "title": "Stargan v2: Diverse image synthesis for multiple domains", "year": "2020" }, { "authors": "T Karras; M Aittala; J Hellsten; S Laine; J Lehtinen; T Aila", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b45", "title": "Training generative adversarial networks with limited data", "year": "2020" }, { "authors": "A Melnik; M Miasayedzenkau; D Makarovets; D Pirshtuk; E Akbulut; D Holzmann; T Renusch; G Reichert; H Ritter", "journal": "", "ref_id": "b46", "title": "Face generation and editing with stylegan: A survey", "year": "2022" }, { "authors": "W Xia; Y Zhang; Y Yang; J.-H Xue; B Zhou; M.-H Yang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b47", "title": "Gan inversion: A survey", "year": "2022" }, { "authors": "J N Pinkney; D Adler", "journal": "", "ref_id": "b48", "title": "Resolution dependent gan interpolation for controllable image synthesis between domains", "year": "2020" }, { "authors": "S Yang; L Jiang; Z Liu; C C Loy", "journal": "", "ref_id": "b49", "title": "Pastiche master: Exemplarbased high-resolution portrait style transfer", "year": "2022" }, { "authors": "W Jang; G Ju; Y Jung; J Yang; X Tong; S Lee", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b50", "title": "Stylecarigan: caricature generation via stylegan feature map modulation", "year": "2021" }, { "authors": "Z Liu; P Luo; X Wang; X Tang", "journal": "", "ref_id": "b51", "title": "Deep learning face attributes in the wild", "year": "2015" }, { "authors": " Dlib", "journal": "", "ref_id": "b52", "title": "C++ library", "year": "2022" }, { "authors": "P Pérez; M Gangnet; A Blake", "journal": "", "ref_id": "b53", "title": "Poisson image editing", "year": "2003" }, { "authors": "C Yu; J Wang; C Peng; C Gao; G Yu; N Sang", "journal": "", "ref_id": "b54", "title": "Bisenet: Bilateral segmentation network for real-time semantic segmentation", "year": "2018" }, { "authors": "V Gupta; S Raman", "journal": "IEEE", "ref_id": "b55", "title": "Automatic trimap generation for image matting", "year": "2016" }, { "authors": "G Park; S Son; J Yoo; S Kim; N Kwak", "journal": "", "ref_id": "b56", "title": "Matteformer: Transformer-based 
image matting via prior-tokens", "year": "2022" }, { "authors": "J Lyu; Z Wang; F Xu", "journal": "", "ref_id": "b57", "title": "Portrait eyeglasses and shadow removal by leveraging 3d synthetic data", "year": "2022" }, { "authors": "S Zhou; K Chan; C Li; C C Loy", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b58", "title": "Towards robust blind face restoration with codebook lookup transformer", "year": "2022" }, { "authors": "X Huang; S Belongie", "journal": "", "ref_id": "b59", "title": "Arbitrary style transfer in real-time with adaptive instance normalization", "year": "2017" }, { "authors": "E Richardson; Y Alaluf; O Patashnik; Y Nitzan; Y Azar; S Shapiro; D Cohen-Or", "journal": "", "ref_id": "b60", "title": "Encoding in style: a stylegan encoder for imageto-image translation", "year": "2021" }, { "authors": "O Tov; Y Alaluf; Y Nitzan; O Patashnik; D Cohen-Or", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b61", "title": "Designing an encoder for stylegan image manipulation", "year": "2021" }, { "authors": "T.-Y Lin; P Dollár; R Girshick; K He; B Hariharan; S Belongie", "journal": "", "ref_id": "b62", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b63", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Y Alaluf; O Patashnik; D Cohen-Or", "journal": "", "ref_id": "b64", "title": "Restyle: A residualbased stylegan encoder via iterative refinement", "year": "2021" }, { "authors": "I Goodfellow; J Pouget-Abadie; M Mirza; B Xu; D Warde-Farley; S Ozair; A Courville; Y Bengio", "journal": "Curran Associates, Inc", "ref_id": "b65", "title": "Generative adversarial nets", "year": "2014" }, { "authors": "L Mescheder; A Geiger; S Nowozin", "journal": "PMLR", "ref_id": "b66", "title": "Which training methods for gans do actually converge?", "year": "2018" }, { "authors": "Y Nitzan; A Bermano; Y Li; D Cohen-Or", "journal": "ACM Trans. Graph", "ref_id": "b67", "title": "Face identity disentanglement via latent space mapping", "year": "2020-11" }, { "authors": "R Zhang; P Isola; A A Efros; E Shechtman; O Wang", "journal": "", "ref_id": "b68", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "S Guan; Y Tai; B Ni; F Zhu; F Huang; X Yang", "journal": "", "ref_id": "b69", "title": "Collaborative learning for faster stylegan embedding", "year": "2020" }, { "authors": "J Deng; J Guo; N Xue; S Zafeiriou", "journal": "", "ref_id": "b70", "title": "Arcface: Additive angular margin loss for deep face recognition", "year": "2019" }, { "authors": "Y Alaluf; O Tov; R Mokady; R Gal; A Bermano", "journal": "", "ref_id": "b71", "title": "Hyperstyle: Stylegan inversion with hypernetworks for real image editing", "year": "2022" } ]
[ { "formula_coordinates": [ 5, 76.03, 346.05, 223.99, 49.84 ], "formula_id": "formula_0", "formula_text": "v = argmin v iϵS,jϵNi∩S ((v i -v j ) -(s i -s j )) 2 + iϵS,jϵNi∩¬S ((v i -t j ) -(s i -t j )) 2 ,(1)" }, { "formula_coordinates": [ 5, 383.88, 165.13, 179.15, 9.65 ], "formula_id": "formula_1", "formula_text": "I p = α p F p + (1 -α p )B p ,(2)" }, { "formula_coordinates": [ 7, 311.98, 363.35, 251.06, 38.95 ], "formula_id": "formula_2", "formula_text": "I f cari = G(E(I cari )), such that I f cari ≈ I cari and I f real = G(E(I real )), such that I f real ≈ I real ." }, { "formula_coordinates": [ 8, 61.21, 398.2, 155.02, 13.68 ], "formula_id": "formula_3", "formula_text": "I f cari = G(z real + n cari ) if n cari = 1" }, { "formula_coordinates": [ 8, 120, 558.69, 180.02, 9.65 ], "formula_id": "formula_4", "formula_text": "L reg =∥ G(E(x)) -w ∥ 2 ,(3)" }, { "formula_coordinates": [ 8, 124.79, 631.5, 175.24, 9.65 ], "formula_id": "formula_5", "formula_text": "L 2 =∥ x -G(E(x)) ∥ 2 ,(4)" }, { "formula_coordinates": [ 8, 97.68, 704.02, 202.34, 9.65 ], "formula_id": "formula_6", "formula_text": "L LP IP S =∥ F (x) -F (G(E(x))) ∥ 2 ,(5)" }, { "formula_coordinates": [ 8, 361.63, 301.15, 201.4, 9.65 ], "formula_id": "formula_7", "formula_text": "L ID (x) = 1 -⟨A(x), A(G(E(x)))⟩ ,(6)" }, { "formula_coordinates": [ 8, 312.02, 365.9, 251.02, 20.91 ], "formula_id": "formula_8", "formula_text": "L(x) = λ 1 L 2 (x) + λ 2 L LP IP S (x) + λ 3 L ID (x) + λ 4 L reg (x),(7)" } ]
10.1109/ICPADS.2017.00013
2024-03-20
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b42", "b43", "b37", "b46", "b4", "b28", "b20", "b8", "b50", "b7", "b33", "b32", "b21", "b11", "b19", "b49", "b33", "b24", "b42", "b17", "b29", "b2", "b42", "b15", "b22", "b15", "b42", "b12" ], "table_ref": [], "text": "Data processing pipelines in edge devices increasingly rely on deep learning models to identify patterns and extract insights from multimodal IoT data. Examples include predictive maintenance in industrial automation, object identification and tracking in smart camera systems (Qu et al., 2022), activity and healthcare trackers in mobile (Ravi et al., 2016), wearable and hearable applications (Maag et al., 2017;Sabry et al., 2022). In all these systems, deep models are deployed and run along with other tasks, under constraints and priorities dictated by the current context and available resources, including storage, CPU time, energy and bandwidth. A large body of work explores different techniques to optimize and compress deep models without hurting accuracy and generalization abilities, while accelerating their execution in software (Boehm et al., 2018) and in hardware (Jouppi et al., 2017). Model pruning and quantization (Han et al., 2016) have become part of standard deep learning deployment pipelines, e.g., TFLMicro (David et al., 2021) and TensorRT (Vanholder, 2016), to enable deep learning on severely constrained embedded hardware operated by low-power microcontrollers with only a few kB of RAM.\nFigure 1: Example of layer slicing in a fully-connected model. We slice off two neurons, i.e., computational units, in the first hidden layer of the network with 4 neurons, 3 inputs and 2 outputs. The matrices W_1 and W_2 store the weights along all connections, and the respective columns and rows in W_1 and W_2 get eliminated by layer slicing. This breaks the contiguous memory layout and the memory arrangement of W_1, yet not W_2. A transpose of W_1 and a change of the multiply order (x^T W_1 = (W_1^T x)^T, applied at compile time) preserve contiguous memory of W_1 after slicing.\nPruning covers a set of methods that reduce the model size by eliminating unimportant operations (weights, neurons, kernels) in the model (Dai et al., 2018;Li et al., 2017). These methods date back to the optimal brain damage (LeCun et al., 1990) and the optimal brain surgeon (Hassibi and Stork, 1993), which suggest pruning the weights based on the Hessians of the loss function. Recent pruning methods (Entezari and Saukh, 2019;Han et al., 2015;Timpl et al., 2022) propose to prune network weights with a low magnitude or a low magnitude increase. Li et al. (2017) propose to prune channels in CNNs based on a filter weight norm, while Hu et al. (2016) use the average percentage of zeros in the output to prune unimportant channels. One drawback of compile-time optimizations, e.g., pruning, is that the resulting models are resource-agnostic. They thus yield suboptimal performance in many interesting applications where resource availability depends on different dynamically changing factors such as available energy, task priority and timing constraints.
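To make the layer-slicing idea of Figure 1 concrete, the following NumPy sketch shows that dropping hidden neurons removes columns of W_1 and rows of W_2, and that storing W_1 transposed while flipping the multiply order keeps the sliced weights contiguous in row-major memory. The tiny 3-4-2 network, the absence of activations and biases, and all names are illustrative assumptions.

```python
import numpy as np

# Toy fully-connected model from Figure 1: 3 inputs -> 4 hidden -> 2 outputs.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 4))   # input -> hidden
W2 = rng.standard_normal((4, 2))   # hidden -> output

k = 2                              # slicing point: keep the first k hidden neurons
W1_sub, W2_sub = W1[:, :k], W2[:k, :]

x = rng.standard_normal(3)
y_sub = (x @ W1_sub) @ W2_sub      # subnetwork forward pass

# In row-major storage, the column slice of W1 is NOT contiguous,
# while the row slice of W2 is.
print(W1_sub.flags["C_CONTIGUOUS"], W2_sub.flags["C_CONTIGUOUS"])   # False True

# Storing W1 transposed and flipping the multiply order restores contiguity:
# x^T W1 = (W1^T x)^T, so the first k rows of W1^T form one contiguous block.
W1_T = np.ascontiguousarray(W1.T)
y_sub_T = (W1_T[:k, :] @ x) @ W2_sub
print(W1_T[:k, :].flags["C_CONTIGUOUS"])                            # True
assert np.allclose(y_sub, y_sub_T)
```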
Another drawback of one-shot model compression techniques, is that these are applied to the whole model, making exploration of different options for a resource-aware on-device model reconfiguration challenging. Both problems are described in detail below. Dynamic resource constraints. Many interesting applications can make use of resource-aware deep models, i.e., models that can adapt their execution to available computational resources and time constraints. For example, camera image processing by a drone or a car may depend on the respective speed (Qu et al., 2022). Processing interesting and relevant data can justify using more energy and computational time than when running regular environment scans (Gherman et al., 2021). A naive solution to address dynamic resource constraints is to store several independent deep models and switch between them as resource availability and task priorities change. The drawback of this approach is both the increased memory consumption to store these independent models which does not scale, and the overhead of switching between models at runtime, e.g., by loading these from flash to RAM and reallocating the necessary tensors.\nSeveral approaches have been proposed to adapt a model to dynamic resource constraints. These can be classified into the methods that implement early exit predictions at a cost of a reduced accuracy, e.g., BudgetRNN (Kannan and Hoffmann, 2021) and ASVN (Bambusi et al., 2021), and those that build a subnetwork structure using weight sharing, with each subnetwork being more efficient yet possibly less accurate, e.g., DRESS (Qu et al., 2022) and NestDNN (Fang et al., 2018). This work falls into the latter category, yet makes use of structured sparsity constructively to allow for additional flexibility and hardware support on typical IoT devices. Deep learning support on IoT devices. Modern IoT hardware often provides support for running deep learning models. This includes support for floating point instructions, dedicated instructions for frequent operations, such as MLA, FMA, and their vector versions. Deep model frameworks for a specific platform make use of the available hardware-specific features to provide maximum speedup. However, the methods that introduce subnetwork structures to a model can not rely on the toolchain support to optimize the execution of each subnetwork. For examples, fine-grained weight pruning may lead to accelerated execution due to an abundance of zeros in the weight matrices and sparse filters, yet unconstrained locations of these zeros present a difficulty to make use of unstructured sparsity. Moreover, there are specific algorithms designed to best handle different sparsity levels (Hoefler et al., 2021), that may or may not enjoy the available hardware support. To overcome the issues, NestDNN (Fang et al., 2018) prunes convolutional filters; these have to be paged in or out when switching from one multi-capacity model to another. DRESS (Qu et al., 2022) relies on a specialized hardware to ensure the applied sparsity pattern translates into computation efficiency. Hardware accelerators may, however, appear too power-hungry and expensive for battery-powered IoT devices. Specialized hardware is also inflexible as the deep learning technology advances fast and hardware modernizations are costly.\nContributions. We present the design of Resource-Efficient Deep Subnetworks (REDS), featuring a nested submodel structure to address dynamic resource constraints on mobile and IoT devices. 
In contrast to DRESS and NestDNN, REDS introduces a novel hardware-aware method to design the nested subnetwork structure, i.e., the structured sparsity patterns, by formulating and solving an iterative knapsack problem, provide theoretical guarantees and empirical evidence of outstanding solution performance. Furthemore, we leverage permutation invariance of neurons (Entezari et al., 2021) to keep the subnetwork weight tensors in contiguous memory regions, i.e., dense layers remain dense in all lower-capacity subnetworks. This makes the description of the subnetwork structure elegant, reduces REDS adaptation time, and allows for further hardware-specific optimizations. REDS code will be made publicly available. 1 Our contributions are:\n• We present a novel hardware-aware method to convert a model into the REDS structure, which can efficiently adapt to dynamically changing resource constraints.\n• We formulate the optimization problem as an iterative knapsack, present theoretical analysis and provide empirical evidence of the solution effectiveness, especially in the low-data regime (Sec. 3 and Sec. 4).\n• REDS make use of the permutation invariance of neurons to enable hardware-specific optimizations (Sec. 3). In particular, for resource-constrained devices that use data caches, REDS provide a compiletime optimization to ensure all subnetwork weights reside in contiguous memory (Sec. 5).\nIn the next section we explain REDS on a simple fully-connected neural network (DNN)." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "A Base Example", "publication_ref": [ "b12", "b27", "b38", "b42", "b16", "b22", "b39", "b47", "b47", "b9", "b47", "b36" ], "table_ref": [], "text": "We first illustrate advantages of REDS on a fully-connected network depicted in Fig. 1. The three dense weight matrices W 3×4 1 , W 4×4 2 and W 4×2 3 connect neurons in consecutive layers that are stored in memory as unfolded one-dimensional arrays. We assume the row-major format is used to store and access model weights in memory, which is the standard choice on most hardware architectures. Given a trained and optimized model, weight matrices are mapped to continuous memory regions, allowing for a straightforward use of vector instructions to speed up on-device inference.\nREDS organizes a model into a set of nested submodels, i.e., the active weights of a child subnetwork are fully contained in its parent subnetwork. To enable this structure, we slice each parent subnetwork into an active and an inactive part, where the active part shapes the child subnetwork. REDS makes use of structured sparsity, i.e., model slicing occurs at the level of individual neurons and convolutional filters. Slicing off two neurons in the first hidden layer in Fig. 1 leads to the removal of two columns and two rows in the weight matrices W 1 and W 2 respectively.\nFirstly, even though any neuron can be removed from a layer, we re-order neurons to have a group of active neurons followed by inactive neurons due to the permutation invariance phenomenon of neural networks (Entezari et al., 2021;Jordan et al., 2022). A permutation does not change the function of the network, but allows optimizing the memory layout to keep subnetwork weights in contiguous memory. The above observations make the subnetwork layout to be stored efficiently on-device. In fact, it is sufficient to store one integer value denoting the slicing point for each layer in each subnetwork, which corresponds to the number of active computational units. 
This is possible since active and inactive units build continuous groups in memory. Activating a particular subnetwork means changing the size of the dimension of the weight tensor in each layer. An additional bit is used to indicate optimization for caches, i.e., that a flipped operation order is applied.\nSecondly, REDS achieves minimal adaptation overhead : only the width of the layers has to be updated to switch to a different model. Switching from one REDS submodel to another requires adjusting only the sizes of the layers. Due to a local scope of a slice, the modification affects only the incoming and outgoing connections; inactive computational units do not participate in inference. Examples of slicing a dense and a convolutional networks are shown in Fig. 1 and Fig. 2. The computational cost of running inference using a subnetwork is not affected by the presence of other REDS networks, in sharp contrast to the approaches that use binary masks to select active neurons or channels (Mishra et al., 2021;Qu et al., 2022).\nFinally, pruning a neuron in one layer may remove more multiply-accumulate operations (MACs) than in another layer. In our example in Fig. 1, removing one neuron in the first hidden layer yields more MAC reduction than removing a neuron in the second hidden layer due to a different number of connections. Moreover, the contribution of these neurons to model accuracy may be different. Related research suggests several importance measures for individual weights, neurons, and convolutional filters (Frantar and Alistarh, 2022;Hoefler et al., 2021;Molchanov et al., 2019;Shen et al., 2022). These can be used to optimize a network for performance while keeping essential elements. Our problem is different: we build a nested structure, and this requires iterating over the subnetworks. DRESS starts from the largest network. NestDNN implements a top-down pruning and a bottom-up freeze-and-regrow algorithm.\nWe formulate the optimization problem for a parent-child subnetwork pair as a variant of the knapsack problem (Shen et al., 2022). To extend the solution to multiple nested subnetworks, we generalize the approach to an iterative knapsack problem (Della Croce et al., 2019). In contrast to Shen et al. (2022), our knapsack formulation models the dependency between layers by iteratively chaining constraints to enforce each subnetwork's weight tensors to be functionally correct (see Fig. 2). Furthermore, REDS knapsack formulation can constraint each subnetwork's peak memory usage known to be the major memory bottleneck for enabling neural networks on edge computing devices (Lin and et al., 2023). Our theoretical findings (Appendix A.2) suggest that the knapsack method applied to the smallest subnetwork first and letting it grow, i.e., the bottom-up approach, is more effective than starting from the largest subnetwork and pruning it down to the smallest one. We empirically show this in Sec. 4.\nIn the following sections we focus on specific contributions and design choices for REDS on the way to a fully-functioning framework. Sec. 3 discusses a pipeline to convert a model to REDS and presents a hardware-aware iterative knapsack method to choose the model slicing points. Sec. 5 revisits the memory layout optimization for cache and provides an empirical evaluation of the achieved gain on embedded hardware." 
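As a rough illustration of how little state such a nested structure needs at runtime, the sketch below stores one shared set of dense weights plus a list of per-layer slicing points and switches submodels by changing layer widths only, using tensor views rather than copies. The class name, the toy layer sizes mirroring the example above, and the ReLU activations are assumptions made for illustration; this is not the REDS implementation.

```python
import torch
import torch.nn.functional as F

class SlicedMLP(torch.nn.Module):
    """Toy nested-subnetwork MLP: shared weights plus per-subnetwork
    slicing points (number of active neurons in each hidden layer)."""

    def __init__(self, dims=(3, 4, 4, 2), slicing_points=((4, 4), (2, 3))):
        super().__init__()
        self.weights = torch.nn.ParameterList(
            [torch.nn.Parameter(0.1 * torch.randn(dims[i], dims[i + 1]))
             for i in range(len(dims) - 1)])
        self.slicing_points = slicing_points  # one tuple of hidden widths per subnetwork
        self.active = 0                       # index of the currently active subnetwork

    def switch(self, subnet_idx):
        # Switching only changes the recorded layer widths; no weights are
        # copied, paged in, or masked.
        self.active = subnet_idx

    def forward(self, x):
        widths = self.slicing_points[self.active]
        h = x
        for layer, w in enumerate(self.weights):
            out_dim = widths[layer] if layer < len(widths) else w.shape[1]
            h = h @ w[: h.shape[-1], :out_dim]   # views into the shared tensors
            if layer < len(self.weights) - 1:
                h = F.relu(h)
        return h

net = SlicedMLP()
x = torch.randn(5, 3)
y_full = net(x)    # subnetwork 0: all 4 + 4 hidden neurons
net.switch(1)      # subnetwork 1: only the first 2 and 3 neurons of each layer
y_small = net(x)
```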
}, { "figure_ref": [], "heading": "Resource-Efficient Deep Subnets", "publication_ref": [], "table_ref": [], "text": "Before describing the pipeline how we build REDS, we first argue why layer slicing by skipping a contiguous sequence of computational units, such as neurons or convolutional filters, doesn't constrain the expressiveness of REDS in any way. These computational units in all but the last layer of the network can be pruned arbitrary, but can be reordered to form a contiguous sequence without changing the network function. We then describe REDS construction pipeline. In the heart of it is the novel method to decide on the choice of the subnetwork structure of REDS based on the knapsack problem. Finally, we discuss how to fine-tune the submodels in REDS to recover lost accuracy due to the applied pruning of computational units." }, { "figure_ref": [ "fig_0" ], "heading": "Permutation invariance", "publication_ref": [ "b1" ], "table_ref": [], "text": "The optimization landscape of neural networks contains an abundance of minima and saddle points as a result of numerous symmetries. One important class are permutation symmetries, i.e., the neurons or channels in the same layer can be randomly permuted and, if the incoming and outgoing connections are adjusted accordingly, form structurally different, yet functionally identical networks. An example is given in Sec. 2. A layer with n neurons has n! permutations. A deep network with l such layers has l n! functionally identical solutions. For example, Ainsworth et al. (2022) calculated that ResNet50 contains 10 55 ′ 109 permutation symmetries.\nWe leverage permutation invariance of neural networks to permute the units of a layer in descending order based on their importance scores. This operation doesn't change the subnetwork function, but ensures that the nested subnetworks keep the most important units of each layer as part of their architecture, thus simplifying each subnetwork architecture search. To preserve the correct feature extraction process, the permutation operation is performed layerwise and the structural elements to permute depend on the layer type. In a standard convolution layer, for example, the weights are stored as a four-dimensional tensor (see Fig. 2). When the convolutional filters are permuted, the kernels in the subsequent layer are similarly reordered to maintain consistent input channel and kernel sequence during the forward pass. This permutation process involves manipulating the original weight tensors, initially unfolding the layer's tensor into a series of three-dimensional tensors-representing the convolutional filters-and subsequently unrolling each filter's tensor into a set of two-dimensional tensors, corresponding to the filter's kernels." }, { "figure_ref": [], "heading": "Importance and cost measures", "publication_ref": [ "b15", "b5", "b19", "b54", "b39", "b40", "b45", "b39", "b47", "b39", "b35" ], "table_ref": [], "text": "How do we identify key neurons or convolutional filters essential for constructing a high-precision subnetwork? Several studies employ performance scores to assess the significance of weights, neurons, and channels within a neural network. NestDNN (Fang et al., 2018) ranks each convolutional filter by calculating the L2 distance between feature maps of the same class and those of different classes. This method uses all the training data, which is often unavailable due to resource constraints when the model is used in production. 
Magnitude-based pruning methods, which use the magnitude of the weights to estimate their importance (Cai et al., 2020;Han et al., 2015;Zhu and Gupta, 2017), provide a computationally efficient way to rank the model's computational units. However, numerous studies (Molchanov et al., 2019;Mozer and Smolensky, 1988) have reported that the magnitude does not necessarily reflect the importance of a unit in terms of its contribution to the loss reduction.\nIn REDS, we adopt an efficient approach to score each model's computational units. This approach can be used on hardware platforms that support on-device training through gradient descent. We compute, for each encoder weight i, the importance I_i = |g_i γ_i|, where g_i is the sum of the accumulated gradients computed from backpropagation (Rumelhart et al., 1986) and γ_i is the scalar value of weight i (Molchanov et al., 2019;Shen et al., 2022). For each computational unit c, such as a convolutional filter or a fully-connected neuron, let W_c be the set of its weights. Then its importance score, denoted as I_c, is computed as the grouped sum of the importance scores over all these weights:\nI_c = Σ_{i∈W_c} I_i = Σ_{i∈W_c} |g_i γ_i|. (1)\nThe unit importance I_c approximates the squared difference of the loss when the unit is removed, and thus quantifies the contribution of each unit to the final prediction loss (Molchanov et al., 2019). Each computational unit is characterized by an importance score, denoted as I_c, and computational costs defined in terms of model latency and peak memory usage. For each unit we compute on-device inference latency and peak memory usage predictors. The former is defined as the number of multiply-accumulate operations (MACs) (Liberis et al., 2021), while the latter is computed as the size in bytes of the unit's activation map." }, { "figure_ref": [], "heading": "Iterative knapsack problem", "publication_ref": [], "table_ref": [], "text": "This section describes the novel method to design the REDS subnetwork structure by formulating and solving a variant of an iterative knapsack problem. The discussion in the main paper is limited to the simplest case, while the generalization to more complex model architectures, the handling of dependencies between layers and the theoretical analysis of the problem are moved to Appendix A.1 and Appendix A.2 for presentation clarity." }, { "figure_ref": [], "heading": "Problem formulation", "publication_ref": [ "b9", "b14", "b3", "b34", "b23" ], "table_ref": [], "text": "Given a pre-trained model, REDS identifies how to best slice the weight tensors by formulating the problem as an iterative knapsack problem with k stages: the items included in a knapsack with capacity c have to be included in all later stages, i.e., knapsacks with larger capacities (Della Croce et al., 2019;Faenza et al., 2023).\nThe items correspond to all the computational units that compose the model encoder architecture. Given a list of MACs and peak memory usage values, REDS solves an iterative knapsack problem with as many stages as the predefined number of subnetworks. We give two heuristic algorithms that solve the corresponding iterative knapsack problem and hence find subnetwork architectures, i.e., slicing points for each layer and each subnetwork, that satisfy these capacity constraints while maximizing the total importance to keep the most important units in each subnetwork.
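To connect the importance definition in Eq. (1) to code before turning to the solvers, the following is a hedged PyTorch sketch that accumulates gradients over a few batches and group-sums |g_i γ_i| per output unit of convolutional and linear layers. The loss_fn and data_loader interfaces, the number of batches, and the restriction to Conv2d/Linear layers are assumptions for illustration.

```python
import torch

def unit_importance_scores(model, loss_fn, data_loader, n_batches=8):
    """Per-unit importance I_c = sum_{i in W_c} |g_i * gamma_i| for the output
    channels / neurons of Conv2d and Linear layers (illustrative sketch)."""
    model.zero_grad()
    for b, (x, y) in enumerate(data_loader):
        if b >= n_batches:
            break
        # Gradients accumulate in .grad across batches, yielding the summed g_i.
        loss_fn(model(x), y).backward()

    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
            if module.weight.grad is None:
                continue
            per_weight = (module.weight.grad * module.weight).abs()  # |g_i * gamma_i|
            # Sum over all dimensions except the output-unit (filter/neuron) axis.
            scores[name] = per_weight.detach().flatten(1).sum(dim=1)
    model.zero_grad()
    return scores  # layer name -> tensor of per-unit importance scores
```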
Both heuristics are based on solving several knapsack problems formulated as integer programs (Perron and Furnon), which we then solve using a mixed integer programming solver (Bixby, 2007). Due to permutation invariance, the items are stored and passed to the solver in descending order of their importance scores. This property makes the solver select items in descending order, thereby creating contiguous active and inactive units for the weight tensors in each subnetwork. Given $C_{MACs}$ as the maximum number of MACs in a subnetwork and $C_{PeakMem}$ as the maximum peak memory usage for the model, i.e., the maximum size of each layer's activation maps (Liberis and Lane, 2023), a single-stage knapsack problem is formulated as follows:\n$$\max \sum_{l=1}^{L} \sum_{i=1}^{u_l} x_{il} \cdot I_{il} \quad \text{s.t.} \quad \sum_{l=1}^{L} \sum_{i=1}^{u_l} x_{il} \cdot MACs_{il} \le C_{MACs}, \quad \sum_{l=1}^{L} \sum_{i=1}^{u_l} x_{il} \cdot PeakMem_{il} \le C_{PeakMem}, \quad x_{il} \in \{0, 1\} \ \forall\, l \in \{1, \ldots, L\},\ i \in \{1, \ldots, u_l\}, \quad I_{i1} \ge I_{i2} \ge \ldots \ge I_{il}, \quad (2)$$\nwhere $x_{il}$ is a binary decision variable taking value 1 if an item i from layer l is selected and 0 otherwise; $I_{il}$ is the importance score of item i from layer l; $MACs_{il}$ is the number of MACs of item i from layer l; $PeakMem_{il}$ is the number of bytes of the activation map of item i from layer l; L is the number of layers in the model and $u_l$ is the number of items in layer l. The above knapsack problem formulation does not account for dependencies between layers. Its generalization and adaptation for depthwise separable convolutions (Howard et al., 2017), for simplicity considering only MACs as a constraint, is moved to Appendix A.1 for clarity of presentation." }, { "figure_ref": [], "heading": "Bottom-up (BU) and top-down (TD) heuristics", "publication_ref": [], "table_ref": [], "text": "REDS examines two heuristics for solving the iterative knapsack problem, named bottom-up (BU) and top-down (TD). The former iteratively calculates the subnetwork architectures by considering the tightest constraints for the smallest subnetwork first. Once a solution is found by the knapsack solver, these units are frozen, i.e., they are now part of all nested subnetworks, and the second smallest subnetwork is then computed by the solver. The latter top-down method determines solutions by considering the weakest constraints first and then iteratively searching the architectures for increasingly smaller subnetworks. Related works, including DRESS and NestDNN, use a variant of the top-down approach. Our theoretical analysis of the iterative knapsack problem (based on the classical 0-1 knapsack problem) in Appendix A.2 shows that the bottom-up approach promises a better worst-case performance. In particular, we prove that the solution found by the two-stage bottom-up iterative knapsack heuristic is not worse than 2/3 · Opt, where Opt is the optimal solution of the knapsack with larger capacity, and that this bound is tight. We then show that the tight bound for the top-down iterative knapsack is 1/2 · Opt, where Opt in this case is the optimal solution of the knapsack with smaller capacity. Since our generalized problem, suited for depthwise separable convolutional architectures, has the classical 0-1 knapsack as its core problem, we believe that a similar result is valid for this case as well."
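As an illustration of how a single knapsack stage and the bottom-up iteration can be solved in practice, the following hypothetical sketch uses Google OR-Tools' CP-SAT solver (the paper reports using OR-Tools with a mixed integer programming solver; the exact API usage, scaling of importance scores to integers, and the simplification of treating peak memory as an additive cost are our own assumptions). The ordering constraint that yields contiguous slices is handled by passing units in descending importance order and is omitted here.

```python
from ortools.sat.python import cp_model

def solve_stage(units, max_macs, max_peak_mem, frozen=frozenset()):
    # One stage of the iterative knapsack: pick units maximizing total importance
    # subject to MACs and (simplified, additive) peak-memory budgets.
    # `units` is a flat list of dicts with integer "macs", "peak_mem" and float "importance".
    m = cp_model.CpModel()
    x = [m.NewBoolVar(f"x_{i}") for i in range(len(units))]
    for i in frozen:                          # units fixed by an earlier, smaller stage
        m.Add(x[i] == 1)
    m.Add(sum(x[i] * u["macs"] for i, u in enumerate(units)) <= max_macs)
    m.Add(sum(x[i] * u["peak_mem"] for i, u in enumerate(units)) <= max_peak_mem)
    m.Maximize(sum(x[i] * int(1000 * u["importance"]) for i, u in enumerate(units)))
    solver = cp_model.CpSolver()
    status = solver.Solve(m)
    assert status in (cp_model.OPTIMAL, cp_model.FEASIBLE)
    return {i for i in range(len(units)) if solver.Value(x[i])}

def bottom_up(units, budgets):
    # budgets: list of (max_macs, max_peak_mem), tightest first; selected units propagate upward.
    selected, plan = set(), []
    for macs, mem in budgets:
        selected = solve_stage(units, macs, mem, frozen=selected)
        plan.append(sorted(selected))
    return plan
```

The top-down variant would simply traverse the budgets from the loosest to the tightest and restrict each stage to the items chosen in the previous one.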
}, { "figure_ref": [ "fig_1" ], "heading": "REDS fine-tuning", "publication_ref": [ "b42", "b42", "b53", "b53" ], "table_ref": [], "text": "Fine-tuning models after pruning is a common practice to recover model accuracy. The REDS subnetworks found by the heuristics algorithm are fine-tuned simultaneously to recover their full accuracy. We first fine-tune the encoder's weights followed by the batch normalization layers (Qu et al., 2022). For each subnetwork we slice each layer to construct a subnetwork architecture by storing only the slicing point corresponding to the number of active computational units (see Fig. 3). During fine-tuning, the weights of each subnetwork i are reused by all lower-capacity subnetworks {i + k} N k=1 by dynamically creating a tensor slice for each layer, i.e., a tensor view object that points to the original weight tensor. We balance the contribution of the loss of each individual model with parameters {π i } N i=1 equal to the percentage of weights used in the encoder part of the model by the subnetwork i (Qu et al., 2022). (Zhang et al., 2017): training a network of each size from scratch (\"Scratch\"), conversion from a pre-trained network using two knapsack versions (\"Knapsack BU\" and \"Knapsack TD\"), and training REDS structure from scratch (\"REDS training\"). Reported results from three independent runs. The accuracy of each 100 % network reported by Zhang et al. (2017) is listed in the header row." }, { "figure_ref": [], "heading": "REDS Performance", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Evaluation setup", "publication_ref": [ "b51", "b52", "b31", "b6", "b35", "b6", "b10", "b23", "b26" ], "table_ref": [], "text": "We evaluate the performance of REDS, which comprises four nested subnetworks obtained by multiplying the full model MACs (100%) by the predefined constraints percentages (25%, 50%, and 75%), on three datasets: Google Speech Commands (Warden, 2018), FMNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky et al., 2009). In addition, two nested subnetworks are evaluated on the Visual Wake Words dataset (Chowdhery et al., 2019) for 50% of MACs and 200 KB of peak memory usage as constraints. We measure the subnetworks' peak memory usage using the open-source tf-lite tools (Liberis et al., 2021), assuming the temporary buffer space is utilized for storing both input and output buffers (Chowdhery et al., 2019). REDS is evaluated on three pre-trained Google Speech Commands architectures (DNN, CNN, and DS-CNN) of two sizes (S and L) each introduced in Zhang et al. ( 2017), and on an ImageNet (Deng et al., 2009) pretrained MobileNetV1 (Howard et al., 2017) architecture fine-tuned on the Visual Wake Words dataset. For CIFAR10 and FMNIST REDS is tested on two pre-trained DS-CNN size S architectures. DNN is a 2-layer fully-connected architecture with layer width 144 (S) and 436 (L). CNN is an architecture with two convolutional layers followed by 3 fully-connected layers. The widths of convolutional layers are 28 and 30 for the S architecture and 60 and 76 for the L architecture, respectively. DS-CNN and MobileNetV1 are composed of a standard convolutional layer followed by several blocks of depth-wise and point-wise convolutional layers (see Fig. 2). The former has 4 blocks for the network of size S, and 5 blocks for the network of size L with a layer width of 64 and 276 respectively. 
The latter includes 13 blocks with an initial layer width of 32 that progressively increases to 1024 with the model's depth. Each convolutional layer in the CNN, DS-CNN and MobileNetV1 architectures is followed by a batch normalization layer (Ioffe and Szegedy, 2015)." }, { "figure_ref": [ "fig_3", "fig_8" ], "heading": "REDS empirical evaluation", "publication_ref": [ "b35", "b53" ], "table_ref": [ "tab_4" ], "text": "The main paper presents the results for DS-CNN architectures (S and L) on Google Speech Commands and MobileNetV1 (0.25x) on Visual Wake Words. The fine-tuning hyperparameters and additional results for other architectures and datasets are available in the extended Appendix C. The REDS structure for Google Speech Commands used in the main paper comprises only four nested subnetworks, however a larger number of subnetworks does not degrade the accuracy of REDS fine-tuned on Google Speech Commands (see Fig. 10). REDS was tested on tasks of varying difficulty (FMNIST and CIFAR10) without accuracy degradation compared to training from scratch. In contrast to the state-of-the-art optimizers for edge devices, such as µNAS (Liberis et al., 2021) performance when the solution is found by the BU and TD heuristics, respectively. Finally, the forth column reports REDS test set accuracy achieved by each REDS subnetwork trained from scratch, yet following the structure obtained by the BU heuristics. We observe that the accuracies of each submodel for the same percentage of MACs are similar, even for small S architectures with only 25 % of MACs. The accuracies of the full models, highlighted in bold in the top row are computed using pre-trained models from Zhang et al. (2017) and closely match our experimental results. The performance difference between the BU and TD heuristics is largely insignificant due to the fine-tuning step with extensive training data. Even though the models are overparameterized and in all cases achieve high performance, low-capacity subnetworks S sliced by the bottom-up approach yield minor performance gains. However in the low-data regime, where only few samples per class are available for fine-tuning, the performance differences between BU and TD heuristics are remarkable. Fig. 4 shows the performance when few-show learning is applied to fine-tune REDS structures. Our empirical findings confirm a consistently superior performance of the BU heuristic compared to the TD alternative. The differences diminish as more samples are used for fine-tuning (Entezari et al., 2023) and there is barely any difference if full-finetuning is applied using all available training data.\nFor the MobileNetV1 architecture, the REDS knapsack BU heuristic surpasses the fully-trained uniform pruning in terms of peak memory usage, while maintaining a negligible drop in accuracy (see Table 2). This validates the efficacy of the knapsack BU approach in constraining the peak memory usage within the subnetworks' architecture. Furthermore, imposing a peak memory usage constraint is essential to facilitate the deployment of the REDS structure within the confines of the available device RAM (see Fig. 7)." }, { "figure_ref": [], "heading": "REDS Optimization for Caches", "publication_ref": [], "table_ref": [], "text": "Embedded ML frameworks like Tensorflow Lite typically store model weight matrices in row-major order. This means, that each row of a weight matrix is stored contiguously in memory. Without the use of sub-networks these weights are also accessed in a contiguous fashion. 
However, when using subnetworks for model inference, some neurons and their corresponding weights are omitted, resulting in non-contiguous memory access. This effect is illustrated in Figure 1. We now show how, by simply adapting the computational graph at compile time, we are able to optimize the computation of REDS subnetworks for devices with a cache memory architecture." }, { "figure_ref": [], "heading": "Row-major and column-major stores", "publication_ref": [], "table_ref": [], "text": "The calculation of a fully-connected neural network layer during the forward pass is a matrix multiplication\n$$H = \sigma(X^T_{[m \times b]} \cdot W_{[m \times n]}),$$\nwhere $X$ is an input matrix of shape m × b (input features × batch size), W the layer weight matrix of shape m × n (input features × number of neurons) and σ the activation function. For simplicity we omit the typically used bias term. During matrix multiplication, as it is, for instance, implemented in TensorFlow Lite, an inner loop multiplies and sums each element of row $x_i$ of $X^T$ with column $w_j$ of $W$. This column-wise access of the row-major ordered weights w ∈ W may lead to consecutive reads from memory addresses that are far apart, depending on the size of the matrix and the use of the REDS subnetworks. However, this can easily be circumvented by rewriting the matrix multiplication in terms of the transposed W, i.e., $\sigma(X^T \cdot W) = \sigma((W^T \cdot X)^T)$, which ensures contiguous memory accesses of the weights. Note that the additional transposition of the resulting matrix introduces no additional computations if the input matrix is a single sample, i.e., X has size [m × 1]. Both $X_{[m \times 1]}$ and $X^T_{[1 \times m]}$ are stored identically in memory and, thus, the transposition is redundant in this case. Let us consider the simple example of $X_{[2 \times 2]}$ and $W_{[2 \times 3]}$, both stored in row-major mode. In the basic matrix multiplication case $X^T \cdot W$, the order of the relative memory access locations of the weights w is 0 → 3 → 1 → 4 → 2 → 5 → 0 → 3 → · · ·. In the optimized case $(W^T \cdot X)^T$, where W is now stored in column-major form (equivalently, $W^T$ in row-major form), the order of weight accesses is 0 → 1 → 0 → 1 → 0 → 1 → 2 → 3 → · · ·.\nDepending on the memory and cache setup of a device, this highlights the potential for exploiting cache systems by simply changing the matrix multiplications to the optimized transposed form. " }, { "figure_ref": [ "fig_7", "fig_7" ], "heading": "Optimizing REDS computational graph", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "The optimization proposed above can be implemented by ensuring that each matrix multiplication within the computational graph of each submodel follows the cache-optimized access pattern and that the corresponding weight matrices are stored in column-major order in memory. We now show the effect of using this optimized computational graph by benchmarking different matrix multiplications. To this end, we use a Raspberry Pi Pico board, which features an RP2040 chip based on a dual-core Arm Cortex-M0+ processor architecture with 264kB RAM and 16MB off-chip flash memory (RP2040, 2023). The chip also features execute-in-place (XIP) support, such that the flash memory can be treated like internal memory. These accesses to flash are cached with a 16kB, two-way set-associative cache with 8-byte-wide cache lines.\nFig. 6 shows the cache effect on the duration of matrix multiplications when we use the optimized computation graph compared to the basic one, i.e., the default way where W is stored in row-major mode and we calculate $X^T \cdot W$. 
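Before turning to the measured numbers, the identity used above is easy to check in a few lines. The following hypothetical numpy sketch (our own illustration, not the paper's benchmark code) verifies that the basic and optimized computation orders give the same layer output for a sliced subnetwork, and lists the flat row-major offsets at which the weights are touched in each case:

```python
import numpy as np

rng = np.random.default_rng(0)
m, b, n = 8, 4, 6                     # input features, batch size, neurons in the full layer
X = rng.normal(size=(m, b))
W = rng.normal(size=(m, n))
relu = lambda z: np.maximum(z, 0.0)

k = 3                                 # active neurons of the sliced subnetwork (first k columns)
basic = relu(X.T @ W[:, :k])                  # X^T * W with row-major W
optimized = relu((W[:, :k].T @ X).T)          # (W^T * X)^T: each neuron's weights are contiguous in W^T
assert np.allclose(basic, optimized)          # same layer output, different memory access pattern

# Flat storage offsets of the weights visited for the k active neurons:
offsets_row_major_W = [i * n + j for j in range(k) for i in range(m)]   # column-wise walk, stride n
offsets_row_major_WT = [j * m + i for j in range(k) for i in range(m)]  # row-wise walk, stride 1
print(offsets_row_major_W[:6], offsets_row_major_WT[:6])
```

With the transposed storage, the weights of each active neuron occupy a contiguous block, which is what makes the cache-friendly access possible even when trailing neurons are skipped.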
We benchmark the matrix multiplication at different splits and use 25%, 50%, 75% and 100% of the neurons in W . In Fig. 6 (left) we show the duration of a matrix multiplication with differently shaped weights W times an input matrix X of shape 256 × 4, where the weights and inputs are 32 Bit wide floating points. Similarly,Fig. 6 (right) shows the duration of a weights matrix multiplication of size 512 × 256 with the same X, but now the weights and inputs are unsigned integers with different widths, i.e., 8, 16 and 32 Bit. We observe in all cases a clear speed-up in matrix multiplication when comparing the optimized and the basic computation approach, which is also evident from Table 3. For all test cases the cache-hit rate is above 97% when using optimized matrix multiplications. Additionally, we see a notable difference between floating point and integer based matrices. This difference is due to the fact that the RP2040 does not feature any floating point arithmetic support and, thus, the cache effect is diminished due to longer duration of basic multiplications. Finally, there is no notable difference in speed-up and cache-hit rate between the different splits.\nWe can therefore conclude that a simple change in the computational graph at compile time, i.e., using column-major weight matrices, leads to increased matrix multiplication speeds on devices featuring a cached memory. Cache optimization is thus essential to speed-up inference of nested submodels part of REDS." }, { "figure_ref": [ "fig_10", "fig_10" ], "heading": "REDS on Mobile and IoT", "publication_ref": [ "b25", "b5", "b35", "b18" ], "table_ref": [], "text": "We evaluate REDS models on two mobile phones, specifically Xiaomi Redmi Note 9 Pro and Google Pixel 6, and on two IoT devices, namely Arduino Nano 33 BLE Sense and Infineon CY8CKIT-062S2. For the mobile phone evaluation, we employed the official Google TFLite benchmarking tool, which measures the model's inference time on Android-based mobile devices. On IoT devices we deployed and evaluated the models using the cloud-based platform Edge Impulse (Hymel et al., 2022). We report the number of parameters, accuracy, and latency on the mobiles and on the IoT devices for the full architectures (100% MACs) and the subnetworks architecture (75%, 50% and 25% MACs) found by the BU knapsack solvers. The obtained results are reported in Fig. 8 for the DNN, CNN and DS-CNN architectures S (top row) and L (bottom row) on Google Speech Commands. All results are averages over three runs. TFLite and TFLMicro lack support for runtime adaptation of model weight tensors. We extended the TFLMicro framework to support REDS out of the box2 and measure on-device inference and submodel switching times. The first left-most column of plots in Fig. 8 shows the relationship between the subnetworks' MACs and the number of model parameters. All curves are close to linear, yet the parameters of these linear relationships are architecture-specific. The DS-CNN models have the lowest number of parameters, thanks to their more sophisticated architecture, which is more efficient on our dataset compared to the standard convolutional and dense networks. DS-CNN models also yield better accuracy, even for only 25% of MACs, while DNN models perform worst, as can be observed in the second column of plots. This result can be attributed to the successful generalization of the iterative knapsack problem to support depth-wise convolutions (cf.f Appendix A.1). We also compare REDS to several baselines. 
Recent works Tan and Le (2019) consider model optimization by pruning an equal share of computational blocks in each layer. We use the block selection strategy based on the weights magnitude (i.e., L1) used in Cai et al. (2020), and a random selection. REDS knapsack solution outperforms both baselines achieving higher accuracy for the same percentage of MACs. In addition, the red star in the first two figures shows the performance of the constrained neural architecture search for microcontrollers µNAS as reported in Liberis et al. (2021). The network architecture uses depth convolutions, yet is more complex than our DS-CNN. µNAS shows 3% performance improvement over DS-CNN at the cost of 32% higher model size. However, µNAS took 39 GPU-days to compute the solution, in contrast to minutes for DS-CNN and hours for MobileNetV1 taken by our knapsack solver.\nThe last two plots on the right show REDS performance on mobile and IoT devices. We observe a difference of three orders of magnitude in the inference times between the two categories. DNN models perform best thanks to the more optimized algorithm and libraries for matrix-matrix multiplication (Goto and Geijn, 2008). All models show a linear relationship between the percentage of MACs and inference time. This empirical evidence validates MACs as a robust predictor of model latency, extending its applicability to mobile devices. We also evaluate REDS against early exit linear classifiers and magnitude-based uniform pruning on the DS-CNN S and L architectures with respect to MACs and test set accuracy, as shown in Fig. 9. The knapsack solution of REDS demonstrates superior performance over both methods, most notably within the domain of lower parameter configurations, specifically the S size.\nOn Arduino Nano 33 BLE Sense, TFLMicro framework extended to support REDS yields 38±1µs model adaptation time for a 2-layer fully-connected network, while the model inference times for the same network with 25% and 50% of MACs are 2'131±27µs and 4'548±13µs respectively. This highlights the efficiency of permutation-based approach adopted by REDS. The energy consumption of REDS inference as measured with the Power Profiler Kit (PPK2) on Nordic nRF52840 for DS-CNN varies between 20mJ and 61mJ, whereas switching takes <0.01mJ. Results for other architectures are in the extended abstract. " }, { "figure_ref": [], "heading": "Conclusion, Discussion, Outlook", "publication_ref": [ "b15" ], "table_ref": [], "text": "This paper presents Resource-Efficient Deep Subnetworks (REDS), a novel approach to adapt deep neural networks to dynamic resource constraints typical for mobile and IoT devices. REDS formulates the subnetworks architectures search as an iterative knapsack problem, taking into account the dependencies between layers, MACs and peak memory usage. Moreover, REDS employs neuron permutation invariance to facilitate adaptation to variable resource availability without compromising the models' accuracy or adding runtime overhead. Notably, REDS ensure the subnetworks' weights are stored in contiguous memory, enhancing cache optimization and compatibility with hardware-specific enhancements like vector instructions. Experimental evaluation on seven benchmark architectures demonstrates the effectiveness of REDS on mobile and IoT devices and superior performance of current pruning and neural architecture search state of the art methods. 
We perform a theoretical analysis of the knapsack solution space and prove the worst-case performance bounds for the two heuristic algorithms. Discussion and Outlook. Our study has some limitations. First, we do not test the efficiency of REDS when combined with quantized models, which is left for future work. Secondly, the support of REDS for specific layers like skip connections is not explored and should be addressed in the future to support further advanced architectures like MobileNet v2. We plan to augment REDS with a task scheduler and deploy REDS in the context of a specific application with dynamically changing resource constraints (Fang et al., 2018). " }, { "figure_ref": [ "fig_0" ], "heading": "A Iterative Knapsack Theory", "publication_ref": [], "table_ref": [], "text": "A.1 Knapsack for depth-wise convolutions\nIn this section we define a generalized knapsack framework for depth-wise convolutions. Given an m × n input matrix A, the first layer $L_0$ is a standard convolutional layer (cf. Fig. 2) applied to A that consists of $N_0$ filters and hence produces $N_0$ output channels. It is followed by depth-wise convolutional blocks $B_1, \ldots, B_d$, where each block i is built from a depth-wise layer $B^D_i$ and a point-wise layer $B^P_i$ for i = 1, ..., d. A depth-wise layer applies one filter on each of the $N_{i-1}$ input channels. The point-wise layer $B^P_i$ then applies $N_i$ filters, where the number of kernels of each filter has to be equal to the number of filters of the previous layer. If we want to reduce the size of the network, we can decide on how many filters we use at $L_0$ and at each $B^P_i$. E.g., if we choose k filters at $L_0$, the layer $B^D_1$ will have k filters and each filter of $B^P_1$ will have k kernels. The resulting optimization method can be used for structural pruning of a neural network, i.e., for choosing an optimal number (≤ $N_0$) of filters of $L_0$ and (at most $N_i$) filters of $B^P_i$ such that we maximize the performance of the network while obeying a constraint on time or space complexity, which in REDS is the number of MACs.\nFormally, we introduce an integer decision variable $x_0$ to decide on the number of filters we use at $L_0$, and $x_i$ on the number of filters to use at each $B^P_i$. Since every computational unit consumes computational resources and contributes to the overall accuracy, we also introduce decision variables that control whether a unit is used by a subnetwork or not. For $L_0$ we introduce $N_0$ binary variables $y_1, \ldots, y_{N_0}$, and for every block $B_i$, i = 1, ..., d, we introduce binary variables $f^i_k$ and $g^i_{kt}$, where k ∈ {1, ..., $N_i$} and t ∈ {1, ..., $N_{i-1}$}: $f^i_k$ indicates whether filter k is used in $B^P_i$, and $g^i_{kt}$ whether filter k with kernel t of $B^P_i$ is used. For $B^D_i$ we introduce the decision variables $d^i_t$ to decide if a depth-wise filter t ∈ {1, ..., $N_{i-1}$} is used. In the model, $P^1_i$ is the importance score of a standard convolution filter i in the first layer and $W_1$ is its number of MACs. For each depth-wise point-wise block i, $P^i_t$ is the importance score of the depth-wise filter t, and $P^i_{kt}$ is the importance score of the corresponding kernel k of the point-wise filter t in the subsequent layer $N_i$. Due to permutation invariance (Sec. 3.1) the importance scores $P^1_i$ and $P^i_t$ are stored in descending order. The problem is formulated as:\n$$\max \sum_{i=1}^{N_0} y_i \cdot P^1_i + \sum_{i=1}^{d} \sum_{t=1}^{N_{i-1}} \Big( d^i_t \cdot P^i_t + \sum_{k=1}^{N_i} g^i_{kt} \cdot P^i_{kt} \Big) \quad (1)$$\n$$\text{s.t.} \quad \sum_{i=1}^{N_0} y_i \cdot W_1 + \sum_{i=1}^{d} \sum_{t=1}^{N_{i-1}} \Big( d^i_t \cdot W_2 + \sum_{k=1}^{N_i} g^i_{kt} \cdot W_3 \Big) \le C \quad (2)$$\n$$\sum_{i=1}^{N_0} y_i = x_0 \quad (3) \qquad \sum_{t=1}^{N_{i-1}} d^i_t = x_{i-1} \quad \forall i \quad (4)$$\n$$\sum_{k=1}^{N_i} f^i_k = x_i \quad \forall i \quad (5) \qquad f^i_k \ge f^i_{k+1} \quad \forall i, k \quad (6)$$\n$$g^i_{kt} \le f^i_k \quad \forall i, k, t \quad (7) \qquad f^i_k \le \sum_{t=1}^{N_{i-1}} g^i_{kt} \quad \forall i, k \quad (8)$$\n$$\sum_{t=1}^{N_{i-1}} g^i_{kt} \le x_{i-1} \quad \forall i, k \quad (9) \qquad \sum_{t=1}^{N_{i-1}} g^i_{kt} \ge x_{i-1} - (1 - f^i_k) \cdot N_{i-1} \quad \forall i, k \quad (10)$$\nThe knapsack for depth-wise convolutions is described as:\n(1) The objective function maximizes the total importance score of the chosen architecture. " }, { "figure_ref": [], "heading": "C.3 REDS on FMNIST and CIFAR10", "publication_ref": [ "b35" ], "table_ref": [ "tab_11", "tab_12" ], "text": "The results in Table 8 and Table 9 show REDS performance using a DS-CNN architecture of size S on FMNIST and CIFAR10. The BU heuristic was used to obtain the results. REDS supports a different data domain without degrading the accuracy of the pre-trained model, reported in the header row. Compared to the state-of-the-art such as µNAS (Liberis et al., 2021), REDS demonstrates a faster architecture search time for both FMNIST and CIFAR10. In the former, REDS takes 19 minutes as opposed to 3 days; in the latter, REDS takes 90 minutes as opposed to 39 days, while requiring less memory for model storage for both datasets. After finding and freezing the 25% MACs subnetwork architecture, the BU heuristic takes only a few seconds to find the other 50% and 75% MACs subnetwork architectures. " }, { "figure_ref": [], "heading": "C.4 REDS energy efficiency", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The authors are grateful to Markus Gallacher for his support with the energy efficiency analysis of REDS. Christopher Hinterer and Julian Rudolf contributed to extending the TFLMicro framework to support REDS on Arduino Nano 33 BLE Sense. This research was funded in part by the Austrian Science Fund (FWF) within the DENISE doctoral school (grant number DFH 5). The results presented in this paper were computed using HLR resources of the Zentralen Informatikdienstes of Graz University of Technology." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "(2) The MACs of the solution must comply with the constraint C.\n(3) The number of filters chosen in the first convolution layer.\n(4) The number of filters chosen in the depth-wise layer i must match the number of filters picked in the previous layer i-1.\n(5) The number of point-wise filters chosen in layer i.\n(6) Point-wise filters are chosen in ascending order to impose a contiguous solution.\n(7) If kernel t of filter k is chosen, then the whole filter k is chosen.\n(8) A point-wise filter k is chosen only if one of its kernels is chosen.\n(9) The number of kernels in filter k of point-wise layer i must be ≤ the number of filters taken in the previous depth-wise layer.\n(10) If filter k in layer i is chosen, then constraints (9) and (10) together ensure that the number of kernels t of filter k in layer i equals the number of point-wise filters in the previous block (i.e., the number of filters in the depth-wise layer i). 
If filter k in layer i is not chosen, constraints ( 9) and ( 10) together imply that all kernels t of filter k at layer i are zero (the right-hand side of ( 9) is ≤ 0, since N i-1 is an upper bound on x i-1 )." }, { "figure_ref": [], "heading": "A.2 Iterative knapsack problem", "publication_ref": [ "b9", "b14", "b30" ], "table_ref": [], "text": "In this section we prove that the order matters if we want to pack a knapsack iteratively. We are looking at the 2 stage iterative knapsack problem, where the items of a solution for capacity c/2 have to be a subset of the items of a solution for capacity c. We give our analysis and theoretical findings under the natural assumption that all items of the knapsack have weight ≤ c/2. We first consider the bottom-up iterative knapsack heuristic and show that the quality of a worst-case solution is bounded by 2 3 • Opt and the bound is tight. We then analyze the top-down iterative knapsack heuristic and show that in this case a worst-case solution has a tight lower worst-case bound of 1 2 • Opt. Approximation results for the related incremental knapsack problem were given in Della Croce et al. 2019 andFaenza et al. 2023. For a set of items I, let P (I) (W (I)) denote the total profit (weight) of all items in I. For a binary knapsack problem (see Kellerer et al. 2004 for a general overview) we say that a split item I s according to some ordering O of the items and capacity c exists, if there is an item I s with the following property: all the items\nBottom-up knapsack. Let I(Opt c ) denote the optimal solution set for a knapsack of capacity c. Consider the following iterative heuristic A c : we first find an optimal solution of the knapsack with capacity c/2, then we fix the selected items I(Opt c/2 ) and solve the knapsack defined on the remaining items and capacity c -W (I(Opt c/2 )). We denote this second set of items as I(A c/2 ). Hence, the overall item set of this heuristic is given by\n). Note that there may be P (I(A c/2 )) > P (I(Opt c/2 )), since the corresponding knapsack capacity in the second step can be larger than c/2.\nTheorem A.1. A c yields a worst case ratio of 2 3 , i.e., P (I(A c )) ≥ 2 3 • P (I(Opt c )), if all items of the knapsack have a weight ≤ c/2. This bound is tight.\nProof. Let us arrange the items of I(Opt c ) in the following order O: we first take all the items from I(Opt c ) ∩ I(Opt c/2 ) in an arbitrary order. Note that these are the items that are in both optimal solutions, i.e., for both capacity c and c/2. Then we take all the items that are not included in I(A c/2 ) followed by the items of I(Opt c ) ∩ I(A c/2 ) (again in arbitrary order). Now we have two cases:\nCase 1: There does not exist a split item in I(Opt c ) with respect to O and capacity c/2. Hence W (I(Opt c/2 )) = c/2. It is easy to see that in this case A c = Opt c .\nCase 2: Let I s be the split item in I(Opt c ) with respect to O and capacity c/2. In this case we get that the weight of all the items I O b before I s as well as the weight of all the items I O f that follow I s is smaller than c/2. It follows that P (I(Opt c/2 )) ≥ P (I O b ) and that P (I(A c/2 )) ≥ P (I O f ). Since all items have a weight ≤ c/2 and by the fact that I s is not contained in I(Opt c/2 ) we know that its profit is less or equal than the minimum of P (I(A c/2 )) and P (I(Opt c/2 )). Therefore, it holds that P (I s ) ≤ 1 2 P (I(A c )). Hence we get:\nIt remains to show the bound is tight. 
We introduce the following knapsack instance with four items and a large positive constant P . item:\n1 2 3 4 weight: c/3 + ϵ c/3 c/3 c/3 profit:\nP + ϵ P P P\nHere I(Opt c/2 ) = {1} which only leaves space for one additional item for the larger capacity. Hence we get that P (I(A c )) = 2P + ϵ, whereas P (I(Opt c )) = 3P .\nTop-down knapsack. We now consider a heuristic D c/2 consisting of an iterative top-down knapsack packing. We first solve the knapsack with capacity c to optimality, and then solve the knapsack problem defined only on the items I(Opt c ) with capacity c/2 to optimality. I(D c/2 ) corresponds to the items in this second and smaller knapsack.\nTheorem A.2. D c/2 yields a worst case ratio of 1 2 , i.e., P (I(D c/2 )) ≥ 1 2 • P (I(Opt c/2 )) if all items of the knapsack have a weight ≤ c/2. This bound is tight.\nProof. Consider a knapsack of size c with optimal solution set I(Opt c ) and the knapsack problem with capacity c/2 defined on the restricted item set I(Opt c ) with solution set I(D c/2 ). We will show that: P (I(D c/2 )) ≥ 1 2 • P (I(Opt c/2 )). We first arrange the items of I(Opt c ) in an ordering O ′ such that they start with those items contained also in I(Opt c/2 ). Then we identify the split item I s according to O ′ for capacity c/2 and partition I(Opt c ) into three parts. D 1 = I O ′ b , D 2 = I s and D 3 contains all the remaining items. If no split item exists, we simply set I s = ∅. We now show that:\nAssuming that this is not the case, we would get that:\n)) 2 This would imply\nHowever, since I(Opt c/2 ) ∩ D 3 = ∅ and W (I(Opt c/2 )) ≤ c/2, I(Opt c/2 ) ∪ D 3 would constitute a feasible solution better than I(Opt c ), which is a contradiction. Thus, we have shown (3).\nFor i = 1, . . . , 3, there is W (D i ) ≤ c/2 and all items in D i are available for D c/2 . Therefore, (3) implies\nIt remains to show the bound is tight. We introduce the following knapsack instance with four items and a large positive constant P . Item:\nHere I(Opt c ) = {1, 2, 3}. D c/2 then selects one of these items and no more items fit into the knapsack. Opt c/2 selects item 4, which shows that the ratio of 1 2 is tight. Note that in case that we have instances, where the weight of certain items is greater that c/2, it is easy to construct instances with arbitrary bad ratios for both cases." }, { "figure_ref": [], "heading": "B Training Details", "publication_ref": [ "b53", "b0" ], "table_ref": [], "text": "Table 4 summarizes the hyper-parameters used to train different networks. We refer to Zhang et al. 2017 regarding the description of the network architectures adopted in this paper (referred to as DNN, CNN and DS-CNN, sizes S and L). We used TensorFlow (Abadi et al., 2016) version 2.11. All models were trained on a workstation with 16 NVIDIA Tesla K80 GPUs and 32 Intel Xeon CPUs. To conduct all our experiments and compute the baselines we trained and optimized over 100 models. This translated into >1000 h of compute. We always use 80:10:10 split for training, validation, and testing. All results constitute averages over three runs. " }, { "figure_ref": [], "heading": "C Further REDS Performance Evaluation", "publication_ref": [], "table_ref": [], "text": "C.1 REDS performance on DNN and CNN architectures\nIn addition to the results on DS-CNN reported in the main paper, we show in Table 5 and Table 6 REDS performance on DNN and CNN architectures (with full fine-tuning) and compare to training model of each capacity from scratch and training REDS from scratch. 
Despite full fine-tuning, the results for S architecture show superior performance of the BU heuristic over TD.\nFig. 11 shows the impact of the architecture on REDS structure found by the knapsack BU solver. We present the results for DNN S, L and CNN S, L from left to right, respectively.\nC.2 REDS performance with 10 nested subnetworks Table 7 and Fig. 12 show the performance of DNN, CNN and DS-CNN on Google Speech Commands, when REDS structure comprises 10 subnetworks, compared to 4 subnetworks in the main paper. A larger number of subnetworks does not degrade model accuracy." } ]
Deep models deployed on edge devices frequently encounter resource variability, which arises from fluctuating energy levels, timing constraints, or prioritization of other critical tasks within the system. State-of-the-art machine learning pipelines generate resource-agnostic models that are incapable of adapting at runtime. In this work we introduce Resource-Efficient Deep Subnetworks (REDS) to tackle model adaptation to variable resources. In contrast to the state-of-the-art, REDS use structured sparsity constructively by exploiting permutation invariance of neurons, which allows for hardware-specific optimizations. Specifically, REDS achieve computational efficiency by (1) skipping sequential computational blocks identified by a novel iterative knapsack optimizer, and (2) leveraging simple math to re-arrange the order of operations in the REDS computational graph to take advantage of the data cache. REDS support conventional deep networks frequently deployed on the edge and provide computational benefits even for small and simple networks. We evaluate REDS on seven benchmark architectures trained on the Visual Wake Words, Google Speech Commands, FMNIST and CIFAR10 datasets, and test on four off-the-shelf mobile and embedded hardware platforms. We provide a theoretical result and empirical evidence for REDS' outstanding performance in terms of submodels' test set accuracy, and demonstrate an adaptation time in response to dynamic resource constraints of under 40µs, utilizing a 2-layer fully-connected network on Arduino Nano 33 BLE.
REDS: Resource-Efficient Deep Subnetworks for Dynamic Resource Constraints
[ { "figure_caption": "Figure 2 :2Figure 2: Filter removal in a DS-CNN model with one standard convolutional layer and one depth-wise and point-wise convolutional block with layers width equal to two. The dimensions of the sliced filters, the sequence of the reorganized kernels, and the weight tensors chaining constraints are highlighted in red.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: REDS fine-tuning. The parameters π i N i=1 ensure the contribution of individual models to the loss aligns with the fraction of the shared weights (Qu et al., 2022).", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: DS-CNN S and L few-shots finetuning on Google Speech Commands (Warden, 2018) with MACs percentages as constraints. The subnetworks obtained from the BU Knapsack formulation exhibit faster recovery in accuracy compared to the TD Knapsack ones.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Analysis of the subnetworks' structured sparsity obtained by the knapsack BU and TD heuristics (Sec. 3.3) with MACs percentages as constraints. Left to right: DS-CNN S and DS-CNN L. The slicing point of each subnetwork is visualized with a different color. The results of the TD heuristic are reported in black.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Duration of the matrix multiplication using differently sized weight matrices of floats (left) and different unsigned integer precisions for a 512 x 256 weight matrix (right) at different weight matrix slicing points. The cache optimization shows a clear computational time reduction for all the different test scenarios.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Peak memory usage analysis of MobileNetV1 processed by the REDS knapsack BU heuristic. Placing constraints on MACs (left), and on both MACs and peak memory usage (right). The peak memory usage constraint enables REDS subnetworks optimized for edge devices.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: REDS size S (top row) and L (bottom row) architecture analysis. The two plots on the left show the number of model parameters and model accuracy as a function of MAC percentage in each REDS submodel. The two plots on the right evaluate model inference time on two classes of devices: more powerful platforms comprising Xiaomi Redmi Note 9 Pro (Qualcomm Snapdragon 720G, ARM Cortex-A76), Pixel 6 (Octa-core 2x2.8 GHz Cortex-X1, 2x2.25 GHz Cortex-A76, 4x1.8 GHz Cortex-A55); and low-power IoT platforms including Arduino Nano 33 BLE Sense (nRF52840, ARM Cortex-M4) and Infineon CY8CKIT-062S2.", "figure_data": "", "figure_id": "fig_10", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :Figure 10 :910Figure 9: DS-CNN S (left) and L (right) BU comparison to early exit linear classifiers and subnetworks uniform magnitude (L1) structural pruning.", "figure_data": "", "figure_id": "fig_11", "figure_label": "910", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Analysis of the subnetwork architecture obtained by the knapsack BU heuristics. 
From left to right: DNN S, DNN L, CNN S and CNN L on Google Speech Commands. The patterns as to which computational units constitute a child subnetwork are architecture-specific.", "figure_data": "", "figure_id": "fig_13", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: REDS size S (top row) and L (bottom row) architectures analysis finetuned on Google Speech Commands Warden (2018) with ten subnetworks. The plots from left to right show the subnetworks size, the subnetworks accuracy and the subnetworks inference time as a function of MAC percentage.", "figure_data": "", "figure_id": "fig_15", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "w 11 w 12 w 13 w 14 w 21 w 22 w 23 w 24 w 31 w 32 w 33 w 34 W 1 W 2 v 11 v 12 v 13 v 14 v 21 v 22 v 23 v 24 v 31 v 32 v 33 v 34 v 41 v 42 v 43 v 44", "figure_data": "w 11w 12w 13w 14w 21w 22w 23w 24w 31w 32w 33w 34v 11v 12v 13v 14v 41v 42v 43v 44", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "• W 2 needs no optimization: len new (W 2 ) := len(W 2 ) / 2", "figure_data": "W 1W 2W 3W 1W 2v 21v 22v 23v 24v 31v 32v 33v 34Weight Tensor SlicingContiguous Memory for Hardware and Cache Optimization", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Test set accuracy [%] of training S and L depth-wise separable convolutional architectures (DS-CNN) from", "figure_data": "MACsSmall (S) -Accuracy 94.96Large (L) -Accuracy 95.1ScratchKnapsack BU Knapsack TD REDS trainingScratchKnapsack BU Knapsack TD REDS training100%93.83±0.2293.38±0.4593.34±0.2193.19 ±0.1894.87 ±0.3394.46±0.0894.35 ±0.2194.05 ±0.1475%93.37±0.3493.18±0.2193.03±0.2692.85 ±0.4794.27 ±0.0894.32±0.1394.18 ±0.1793.76 ±0.0250%93.41±0.6792.12±0.2492.26±0.5091.50 ±0.1094.11 ±0.2694.17±0.0194.00 ±0.2593.08 ±0.0625%91.46±0.8089.64±0.7688.59±1.6985.14 ±1.0793.80 ±0.1493.20±0.1993.17 ±0.7392.11 ±0.42", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": ", REDS demonstrate a faster solution search time, taking only a few minutes for DS-CNN and hours for MobileNetV1 as opposed to days by µNAS.The effectiveness of REDS for MCUs and mobile architectures is shown in Table1and Table2respectively. Training from scratch shows the accuracy achieved by S and L DS-CNN architectures when trained in isolation, i.e., not as part of the REDS nested structure. The next two columns show the resulting REDS", "figure_data": "Peak-Memory-Usage Test AccuracyREDS162 KB74.8%MobileNet v1 (0.25x)223 KB75.71%", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparative analysis of REDS and MobileNet v1 (0.25x) fromChowdhery et al. (2019) on Visual Wake Words. REDS exhibits lower peak memory usage compared to the uniformly pruned fully trained network, with a negligible drop in accuracy.", "figure_data": "", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Speed-up and cache-hit rate for different matrix data types. There is no notable difference across the different splits 25, 50, 75 and 100%.", "figure_data": "float uint8 uint16 uint32Speed-up12%58.6% 54.5% 47.4%Cache-Hit-Rate Baseline99.1% 90.9% 91.7% 90.9%Cache-Hit-Rate Optimized 99.7% 99.4% 98.9% 97.6%", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": ". , d, we introduce binary variables f i k and g i kt , where k ∈ {1, . . . , N i } and t ∈ {1, . . . 
, N i-1 }, f i k to indicate whether a filter k is used in B P i , and g i kt if filter k with kernel t of B P i is used. For B D i we introduce the decision variables d i t to decide if a depth-wise filter t ∈ {1, . . . , N i-1 } is used. In the model P 1", "figure_data": "", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Test set accuracy[%] of training S and L fully-connected (DNN) architectures taken fromZhang et al. (2017): training a network of each size from scratch (\"Scratch\"), conversion from a pre-trained network using two knapsack versions (\"Knapsack BU\" and \"Knapsack TD\"), and training REDS structure from scratch (\"REDS training\"). Reported results from three independent runs. The accuracy of each 100 % network reported inZhang et al. (2017) is listed in the header row.", "figure_data": "MACsSmall (S) -Accuracy 92.24Large (L) -Accuracy 93.24ScratchKnapsack BU Knapsack TD REDS trainingScratchKnapsack BU Knapsack TD REDS training100%91.10±0.2391.60±0.3991.20 ±0.3588.89 ±0.2692.94±0.2092.83±0.2692.97±0.1590.97 ±0.2175%90.40±0.2790.63±0.1990.20 ±0.1387.64 ±0.4692.74±0.1292.74±0.2392.54 ±0.0790.67 ±0.0950%89.07±0.2488.39±0.3188.39 ±0.5285.99 ±0.1992.44±0.3091.95±0.2291.93 ±0.2890.36 ±0.3025%82.57±0.4179.52±0.6779.28 ±0.5479.25 ±0.4090.98±0.6090.24±0.3590.31 ±0.0388.25 ±0.66", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "The same as in Table5for S and L convolutional architectures (CNN) fromZhang et al. (2017).", "figure_data": "", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "±0.35 86.19 ±0.26 91.16 ±0.45 93.1 ±0.1 93.5 ±0.15 94.34 ±0.07 90% 82.93 ±0.4 86.17 ±0.36 90.4 ±0.47 92.84 ±0.27 93.33 ±0.11 94.32 ±0.1 80% 82.67 ±0.53 86.1 ±0.08 90.1 ±0.07 92.77 ±0.06 92.84 ±0.15 94.31 ±0.1 70% 81.67 ±0.53 85.78 ±0.11 89.43 ±0.41 92.25 ±0.47 92.64 ±0.24 94.21 ±0.14 60% 80.37 ±0.57 85.66 ±0.25 88.84 ±0.25 92.28 ±0.11 92.27 ±0.31 94.08 ±0.03 50% 78.26 ±0.53 85.33 ±0.09 88.4 ±0.2 92.03 ±0.23 91.04 ±0.51 93.97 ±0.04 40% 75.37 ±1.26 84.8 ±0.45 85.76 ±0.18 91.78 ±0.04 89.42 ±0.66 93.83 ±0.22 30% 67.76 ±1.59 82.66 ±0.18 81.92 ±0.49 90.52 ±0.54 87.63 ±1.03 93.59 ±0.14 20% 52.36 ±6.99 80.19 ±0.39 73.74 ±0.04 88.61 ±0.51 84.14 ±2.57 93.35 ±0.15 10% 23.93 ±5.88 50.7 ±7.5 58.87 ±5.27 81.15 ±1.39 58.46 ±3.35 90.38 ±0.17 Test set accuracy [%] from Small (S) and Large (L) pretrained fully-connected (DNN), convolutional (CNN), and depth-wise separable convolutional (DS-CNN) networks taken fromZhang et al. (2017). For each pre-trained architecture, REDS can support ten subnetworks obtained from the Knapsack BU formulation. The accuracies of the DS-CNN and CNN subnetworks do not degrade drastically until the lowest percentage of MACs considered. In contrast, the accuracies in the DNN subnetworks show a more pronounced drop from 30% MACs.", "figure_data": "DNNCNNDS-CNNSmallLargeSmallLargeSmallLarge100%83.07", "figure_id": "tab_10", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Acc (%) -Pre-trained 90.59 Model Size (Kb) Time Taken (m) Analysis of the BU knapsack subnetworks obtained from a depth-wise separable convolutional (DS-CNN) S network, pre-trained onFMNIST Xiao et al. (2017). 
REDS supports a different data domain without degrading the accuracy of the pre-trained model, reported in the header row.", "figure_data": "100%91.6 ±0.2128.54-75%91.51 ±0.28107.731.5850%90.75 ±0.3287.45.83 [s]25%89.22 ±0.4566.6319 [m]MACs Acc (%) -Pre-trained 79.36 Model Size (Kb)Time Taken100%81.07 ±0.71128.54-75%80.17 ±0.69109.412.89 [s]50%76.72 ±1.3788.0110.59 [s]25%68.66 ±1.6569.6390 [m]", "figure_id": "tab_11", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "The same evaluation as in Table8for CIFAR10Krizhevsky et al. (2009).", "figure_data": "", "figure_id": "tab_12", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Table 10 reports the energy consumption of inference time by different network architectures measured by Power Profiler Kit (PPK2) on Nordic nRF52840 (Arduino Nano 33 BLE Sense). In comparison, the model adaptation time takes less than 0.01 mJ. Energy consumption of REDS size S architectures measured by the Power Profiler Kit (PPK2) on Nordic nRF52840. The results are obtained by performing an inference pass for each subnetwork model and recording the inference current.", "figure_data": "MACs DNN CNN DS-CNN[mJ][mJ][mJ]100%0.1890.616175%0.1586.0144.0350%0.0750.333.8125%0.0544.8120.44", "figure_id": "tab_13", "figure_label": "10", "figure_type": "table" } ]
Francesco Corti; Balz Maag; Joachim Schauer; Ulrich Pferschy; Olga Saukh
[ { "authors": "M Abadi; P Barham; J Chen; Z Chen; A Davis; J Dean; M Devin; S Ghemawat; G Irving; M Isard; M Kudlur; J Levenberg; R Monga; S Moore; D G Murray; B Steiner; P Tucker; V Vasudevan; P Warden; M Wicke; Y Yu; X Zheng", "journal": "", "ref_id": "b0", "title": "Tensorflow: a system for large-scale machine learning", "year": "2016" }, { "authors": "S K Ainsworth; J Hayase; S Srinivasa", "journal": "", "ref_id": "b1", "title": "Git Re-Basin: Merging models modulo permutation symmetries", "year": "2022" }, { "authors": "F Bambusi; F Cerizzi; Y Lee; L Mottola", "journal": "", "ref_id": "b2", "title": "The case for approximate intermittent computing", "year": "2021" }, { "authors": "B Bixby", "journal": "Transportation Research Part B", "ref_id": "b3", "title": "The gurobi optimizer", "year": "2007" }, { "authors": "M Boehm; B Reinwald; D Hutchison; A V Evfimievski; P Sen", "journal": "", "ref_id": "b4", "title": "On optimizing operator fusion plans for large-scale machine learning in systemml", "year": "2018" }, { "authors": "H Cai; C Gan; T Wang; Z Zhang; S Han", "journal": "", "ref_id": "b5", "title": "Once-for-all: Train one network and specialize it for efficient deployment", "year": "2020" }, { "authors": "A Chowdhery; P Warden; J Shlens; A Howard; R Rhodes", "journal": "", "ref_id": "b6", "title": "Visual wake words dataset", "year": "2019" }, { "authors": "B Dai; C Zhu; D Wipf", "journal": "", "ref_id": "b7", "title": "Compressing neural networks using the variational information bottleneck", "year": "2018" }, { "authors": "R David; J Duke; A Jain; V J Reddi; N Jeffries; J Li; N Kreeger; I Nappier; M Natraj; S Regev; R Rhodes; T Wang; P Warden", "journal": "Proceedings of Machine Learning and Systems", "ref_id": "b8", "title": "TensorFlow Lite micro: Embedded machine learning for TinyML systems", "year": "2021" }, { "authors": "F Della Croce; U Pferschy; R Scatamacchia", "journal": "Discrete Applied Mathematics", "ref_id": "b9", "title": "On approximating the incremental knapsack problem", "year": "2019" }, { "authors": "J Deng; W Dong; R Socher; L.-J Li; K Li; L Fei-Fei", "journal": "", "ref_id": "b10", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "R Entezari; O Saukh", "journal": "", "ref_id": "b11", "title": "Class-dependent compression of deep neural networks", "year": "2019" }, { "authors": "R Entezari; H Sedghi; O Saukh; B Neyshabur", "journal": "", "ref_id": "b12", "title": "The role of permutation invariance in linear mode connectivity of neural networks", "year": "2021" }, { "authors": "R Entezari; M Wortsman; O Saukh; M M Shariatnia; H Sedghi; L Schmidt", "journal": "", "ref_id": "b13", "title": "The role of pre-training data in transfer URL", "year": "" }, { "authors": "Y Faenza; D Segev; L Zhang", "journal": "Mathematical Programming", "ref_id": "b14", "title": "Approximation algorithms for the generalized incremental knapsack problem", "year": "2023" }, { "authors": "B Fang; X Zeng; M Zhang", "journal": "", "ref_id": "b15", "title": "NestDNN-device deep learning for continuous mobile vision", "year": "2018" }, { "authors": "E Frantar; D Alistarh", "journal": "", "ref_id": "b16", "title": "SPDY: accurate pruning with speedup guarantees", "year": "2022" }, { "authors": "M.-P Gherman; Y Cheng; A Gomez; O Saukh", "journal": "", "ref_id": "b17", "title": "Towards On-demand Gas Sensing", "year": "2021" }, { "authors": "K Goto; R A V D Geijn", "journal": "ACM Transactions on Mathematical Software (TOMS)", "ref_id": "b18", 
"title": "Anatomy of high-performance matrix multiplication", "year": "2008" }, { "authors": "S Han; J Pool; J Tran; W Dally", "journal": "", "ref_id": "b19", "title": "Learning both weights and connections for efficient neural network", "year": "2015" }, { "authors": "S Han; H Mao; W J Dally", "journal": "", "ref_id": "b20", "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "year": "2016" }, { "authors": "B Hassibi; D G Stork", "journal": "", "ref_id": "b21", "title": "Second order derivatives for network pruning: Optimal brain surgeon", "year": "1993" }, { "authors": "T Hoefler; D Alistarh", "journal": "", "ref_id": "b22", "title": "Sparsity in deep learning: Pruning and growth for efficient inference and training in neural networks", "year": "2021" }, { "authors": "A G Howard; M Zhu; B Chen; D Kalenichenko; W Wang; T Weyand; M Andreetto; H Adam", "journal": "", "ref_id": "b23", "title": "Mobilenets: Efficient cnns for mobile vision applications", "year": "2017" }, { "authors": "H Hu; R Peng; Y.-W Tai; C.-K Tang", "journal": "", "ref_id": "b24", "title": "Network trimming: A data-driven neuron pruning approach towards efficient deep architectures", "year": "2016" }, { "authors": "S Hymel; C Banbury; D Situnayake; A Elium; C Ward; M Kelcey; M Baaijens; M Majchrzycki; J Plunkett; D Tischler", "journal": "", "ref_id": "b25", "title": "Edge impulse: An mlops platform for tiny machine learning", "year": "2022" }, { "authors": "S Ioffe; C Szegedy", "journal": "pmlr", "ref_id": "b26", "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "year": "2015" }, { "authors": "K Jordan; H Sedghi; O Saukh; R Entezari; B Neyshabur", "journal": "", "ref_id": "b27", "title": "Repair: Renormalizing permuted activations for interpolation repair", "year": "2022" }, { "authors": "N P Jouppi; C Young", "journal": "", "ref_id": "b28", "title": "In-datacenter performance analysis of a tensor processing unit", "year": "2017" }, { "authors": "T Kannan; H Hoffmann", "journal": "IEEE", "ref_id": "b29", "title": "Budget RNNs: Multi-capacity nns to improve in-sensor inference under energy budgets", "year": "2021" }, { "authors": "H Kellerer; U Pferschy; D Pisinger", "journal": "Springer", "ref_id": "b30", "title": "Knapsack Problems", "year": "2004" }, { "authors": "A Krizhevsky; V Nair; G Hinton", "journal": "", "ref_id": "b31", "title": "Cifar-100 and cifar-10 (canadian institute for advanced research)", "year": "2009" }, { "authors": "Y Lecun", "journal": "", "ref_id": "b32", "title": "Optimal brain damage", "year": "1990" }, { "authors": "H Li; A Kadav; I Durdanovic; H Samet; H P Graf", "journal": "", "ref_id": "b33", "title": "Pruning filters for efficient convnets", "year": "2017" }, { "authors": "E Liberis; N D Lane", "journal": "ACM IMWUT", "ref_id": "b34", "title": "Differentiable nn pruning to enable smart applications on microcontrollers", "year": "2023" }, { "authors": "E Liberis; Ł Dudziak; N D Lane", "journal": "", "ref_id": "b35", "title": "µNAS: Constrained neural architecture search for microcontrollers", "year": "2021" }, { "authors": "J Lin", "journal": "IEEE Circuits and Systems Magazine", "ref_id": "b36", "title": "Tiny machine learning: Progress and futures [feature", "year": "2023" }, { "authors": "B Maag; Z Zhou; O Saukh; L Thiele", "journal": "", "ref_id": "b37", "title": "BARTON: low power tongue movement sensing with in-ear barometers", "year": "2017" }, { "authors": "A 
Mishra; J A Latorre; J Pool; D Stosic; D Stosic; G Venkatesh; C Yu; P Micikevicius", "journal": "", "ref_id": "b38", "title": "Accelerating sparse deep neural networks", "year": "2021" }, { "authors": "P Molchanov; A Mallya; S Tyree; I Frosio; J Kautz", "journal": "Computer Vision and Pattern Recognition", "ref_id": "b39", "title": "Importance estimation for neural network pruning", "year": "2019" }, { "authors": "M C Mozer; P Smolensky", "journal": "NeurIPS", "ref_id": "b40", "title": "Skeletonization: A technique for trimming the fat from a network via relevance assessment", "year": "1988" }, { "authors": "L Perron; V Furnon", "journal": "", "ref_id": "b41", "title": "Or-tools", "year": "" }, { "authors": "Z Qu; S S Sarwar; X Dong; Y Li; E Sumbul; B De Salvo", "journal": "", "ref_id": "b42", "title": "DRESS: Dynamic real-time sparse subnets", "year": "2022" }, { "authors": "D Ravi; C Wong; B Lo; G.-Z Yang", "journal": "BSN", "ref_id": "b43", "title": "Deep learning for human activity recognition", "year": "2016" }, { "authors": " Rp", "journal": "", "ref_id": "b44", "title": "RP2040 Datasheet -A microcontroller by Raspberry Pi", "year": "2023" }, { "authors": "D E Rumelhart; G E Hinton; R J Williams", "journal": "Nature", "ref_id": "b45", "title": "Learning representations by back-propagating errors", "year": "1986" }, { "authors": "F Sabry; T Eltaras", "journal": "Journal of Healthcare Engineering", "ref_id": "b46", "title": "Machine learning for healthcare wearable devices: the big picture", "year": "2022" }, { "authors": "M Shen; H Yin; P Molchanov; L Mao; J Liu; J M Alvarez", "journal": "", "ref_id": "b47", "title": "Structural pruning via latency-saliency knapsack", "year": "2022" }, { "authors": "M Tan; Q Le", "journal": "PMLR", "ref_id": "b48", "title": "Efficientnet: Rethinking model scaling for convolutional neural networks", "year": "2019" }, { "authors": "L Timpl; R Entezari; H Sedghi; B Neyshabur; O Saukh", "journal": "", "ref_id": "b49", "title": "Understanding the effect of sparsity on neural networks robustness", "year": "2022" }, { "authors": "H Vanholder", "journal": "", "ref_id": "b50", "title": "Efficient inference with tensorrt", "year": "2016" }, { "authors": "P Warden", "journal": "CoRR", "ref_id": "b51", "title": "Speech commands: A dataset for limited-vocabulary speech recognition", "year": "2018" }, { "authors": "H Xiao; K Rasul; R Vollgraf", "journal": "", "ref_id": "b52", "title": "Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms", "year": "2017" }, { "authors": "Y Zhang; N Suda; L Lai; V Chandra", "journal": "", "ref_id": "b53", "title": "Hello edge: Keyword spotting on microcontrollers", "year": "2017" }, { "authors": "M Zhu; S Gupta", "journal": "", "ref_id": "b54", "title": "To prune, or not to prune: exploring the efficacy of pruning for model compression", "year": "2017" } ]
[ { "formula_coordinates": [ 6, 250.73, 120.22, 110.54, 20.06 ], "formula_id": "formula_0", "formula_text": "I c = i∈Wc I i = i∈Wc |g i γ i |." }, { "formula_coordinates": [ 6, 184.4, 530.88, 356.77, 130.87 ], "formula_id": "formula_1", "formula_text": "max L l=1 u l i=1 x il • I il s.t. L l=1 u l i=1 x il • M ACs il ≤ C M ACs , L l=1 u l i=1 x il • P eakM em il ≤ C P eakM em , x il ∈ {0, 1}, ∀ l ∈ {1, . . . , L}, i ∈ {1, . . . , u l }, I i1 ≥ I i2 ≥ . . . I il ,(2)" }, { "formula_coordinates": [ 7, 175.64, 87.95, 204.72, 78.88 ], "formula_id": "formula_2", "formula_text": "Weights 𝜽 Slice s 1 Slice s 2 Slice s N Input i 1 Input i 2 Input i N * 𝜋 1 ᐧ ℒ 1 𝜋 2 ᐧ ℒ 2 𝜋 N ᐧ ℒ N" }, { "formula_coordinates": [ 10, 72, 225.36, 102.45, 12.94 ], "formula_id": "formula_3", "formula_text": "H = σ(X T [m×b] • W [m×n]" }, { "formula_coordinates": [ 10, 72, 371.19, 468, 21.92 ], "formula_id": "formula_4", "formula_text": "→ 3 → 1 → 4 → 2 → 5 → 0 → 3 → • • •. In the optimized cases (W T • X)" }, { "formula_coordinates": [ 10, 90.49, 395.77, 177.19, 8.74 ], "formula_id": "formula_5", "formula_text": "→ 1 → 0 → 1 → 0 → 1 → 2 → 3 → • • •." }, { "formula_coordinates": [ 18, 141.48, 401.07, 329.03, 246.11 ], "formula_id": "formula_6", "formula_text": "max N0 i=1 y i • P 1 i + d i=1 Ni-1 t=1 d i t • P i t + Ni k=1 g i kt • P i kt (1) s.t. N0 i=1 y i • W 1 + d i=1 Ni-1 t=1 d i t • W 2 + Ni k=1 g i kt • W 3 ≤ C (2) N0 i=1 y i = x 0 (3) and Ni-1 t=1 d i t = x i-1 ∀i (4) Ni k=1 f i k = x i ∀i (5) and f i k ≥ f i k+1 ∀i (6) g i kt ≤ f i k ∀i, k, t (7) and f i k ≤ Ni-1 t=1 g i kt ∀i, k (8) Ni-1 t=1 g i kt ≤ x i-1 ∀i, k (9) Ni-1 t=1 g i kt ≥ x i-1 -(1 -f i k ) • N i-1 ∀i, k(10)" } ]
2024-01-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b11", "b6", "b22", "b3", "b36", "b1", "b10", "b29", "b27", "b19", "b4" ], "table_ref": [], "text": "Large Language Models (LLMs) have revolutionized the field of artificial intelligence. These models are trained with an internet-scale text corpus, enabling them to exhibit remarkable capabilities such as natural language generation, question answering, and translation [Brown et al., 2020;Du et al., 2022;Chiang et al., 2023]. Previous work suggests that * Equal contribution † Corresponding author these models contain vast general knowledge about the world and are capable of solving complex reasoning problems [Radford et al., 2019;Brown et al., 2020;Wei et al., 2022]. Recently, several works have attempted to use LLMs to generate action plans in an embodied environment [Ahn et al., 2022;Wang et al., 2023a;Driess et al., 2023;Song et al., 2023;Sha et al., 2023;Mao et al., 2023]. However, LLMs face challenges in generating effective end-to-end instructions for specific embodied tasks, especially in real-world dynamic scenarios. This limitation arises from two key factors. Firstly, LLMs do not possess the appropriate task incentives during the training process. Secondly, these models lack the capability to actively interact with the environment and gather realtime data [Carta et al., 2023]. Furthermore, the utilization of LLMs often requires substantial computational resources, e.g., memory and power. These requirements render their deployment impractical and expensive, especially when considering their use on lightweight edge devices. These challenges motivate us to address the following question:\nHow do we develop a lightweight, specialized agent that can quickly acquire the capabilities of LLMs for a specific sequential decision-making task?\nA commonly used solution is to train a specialized reinforcement learning (RL) based agent that starts learning from scratch. However, this approach often incurs a significant exploration cost, especially in high-dimensional and complex embodied environments with sparse reward signals, due to the low sampling efficiency of RL methods.\nIn this paper, we propose a novel approach called LLM for policy teaching (LLM4Teach), which utilizes a pre-trained LLM to expedite the training process of a small-scale RLbased student agent specialized for a target task. Specifically, in the early stage of training, the student agent queries the LLM-based teacher agent for action instructions and learns to mimic the behavior of its teacher through minimizing a distillation loss. As the learning process proceeds, the student gradually shifts from learning from its teacher to learning from the environment by upweighting a conventional RL loss. In another word, the objective function used for policy training is defined as a weighted average of the distillation loss and the RL loss. Since it allows the student agent to not only incorporate guidance from its LLM teacher but also learn from online interactions with the environment, LLM4Teach enables the student agent to identify and correct any mistakes made by its teacher, leading to improved performance on the target task compared to its teacher. Note that only the student agent is deployed and it shall not interact with the LLM in the test phase. That means the model finally deployed is very lightweight compared to an LLM. 
To summarize, our main contributions are:\n• We propose LLM4Teach, a policy distillation approach to address the limitations of LLM and RL-based agents for making sequential decisions in embodied settings.\n• We demonstrate the performance of our approach empirically by extensive experiments conducted on challenging embodied environments. In contrast to LLM-based agents, our approach shows improved accuracy and decreased computational workload. In comparison to RLbased agents, it has much greater sample efficiency.\n• As a byproduct, we demonstrate that relying solely on LLM can result in various types of incorrect decisions in embodied settings, while LLM4Teach offers an effective approach to mitigate or avoid the influence caused by such incorrect decisions. We also verify that offering uncertainty-aware guidance rather than deterministic guidance through LLM can improve the sample efficiency for the student agent." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "In this paper, we consider an algorithmic agent operating in a dynamic environment. The agent is required to make a series of decisions and take actions based on the current state of the environment, employing a specific policy to successfully complete a designated task. Here we provide a brief overview of relevant research in the literature." }, { "figure_ref": [], "heading": "LLM-based Agents", "publication_ref": [ "b38", "b39", "b2", "b42", "b15", "b28", "b40", "b4", "b14", "b21" ], "table_ref": [], "text": "LLMs have exhibited impressive reasoning abilities, motivating researchers to employ them as fundamental components for constructing LLM-based agents in diverse decisionmaking scenarios [Xi et al., 2023;Yang et al., 2023;Wang et al., 2023b;Biggie et al., 2023;Zhen et al., 2023]. Recent research has demonstrated that LLMs can generate high-level plans in response to natural language descriptions of a given situation [Huang et al., 2022;Shinn et al., 2023;Yao et al., 2022] . However, these plans may propose actions that are not compatible with the acting agent or the environment due to a lack of grounding in the specific problem domain. In addressing this issue, Ahn et al. [2022] proposed grounding LLMs through an affordance function of pre-trained skills, which assists LLMs in formulating feasible plans for execution by the agents. Additionally, Carta et al. [2023] proposed an approach in which the agent interacts with the environment and subsequently fine-tunes the LLMs using online collected data, thereby enhancing adaptation to the target task. However, frequent interaction with an LLM can be costly. Therefore, Hu et al. [2023] suggested an intelligent interaction approach that employs RL to determine when it is necessary to query the LLM, thus avoiding unnecessary interactions. Furthermore, Nottingham et al. [2023] optimized the selection of information presented to LLMs, thereby reducing the length of input contexts. While these methods reduce the cost of utilizing LLMs for decision-making tasks, they all necessitate online access to a pre-trained LLM when the agent is deployed online during the testing phase.\nIn contrast to the previously mentioned methods, our approach involves utilizing the LLM solely during the training phase to distill task-specific knowledge from the LLM into a RL-based agent. Subsequently, during the testing phase, only the lightweight student agent operates independently without dependence on the LLM." 
}, { "figure_ref": [], "heading": "LLM Assisted RL", "publication_ref": [ "b18", "b41", "b16", "b18", "b41", "b16", "b7" ], "table_ref": [], "text": "Several studies have investigated the potential of utilizing LLMs to support the standard RL process by tapping into the general knowledge embedded in LLMs. For example, Kwon et al. [2023]; Yu et al. [2023] and Klissarov et al. [2023] address complex scenarios where defining desired behaviors using a simple reward function is challenging. These studies employ LLMs to assist in assigning rewards. Kwon et al. [2023] use LLMs as proxy reward functions to automatically label trajectory data with rewards, while Yu et al. [2023] utilize LLMs to flexibly define reward parameters for optimizing and completing various robot tasks. In a different approach, Klissarov et al. [2023] leverage an offline dataset of behaviors and use LLMs' preferences over pairs of randomly sampled trajectories to construct a reward model. Furthermore, Du et al. [2023] and Colas et al. [2023] focus on learning diverse behaviors without relying on reward supervision, employing LLMs to generate novel goals during exploration in the environment.\nIn contrast to these previous works, our approach focuses on leveraging prior knowledge about the target task to enhance the initial exploration stage of an RL agent. This allows us to train the policy model with significantly less data, thereby improving the efficiency of the learning process." }, { "figure_ref": [], "heading": "Learning from Teacher Agents", "publication_ref": [ "b9", "b0", "b24", "b17", "b8", "b32", "b25", "b20" ], "table_ref": [], "text": "Prior research has sought to improve the inefficiencies of tabula rasa RL by utilizing existing teacher agents to guide the learning process of a specialized student agent for the specific problem at hand [Da Silva et al., 2020;Agarwal et al., 2022]. These instructions can manifest as demonstrations [Schaal, 1996], scalar feedback [Knox and Stone, 2009], or action advice [Da Silva et al., 2017]. Jump-start RL involves the use of a teacher agent to assist in gathering high-quality data during the initial exploration phase of RL [Uchendu et al., 2023]. Kickstarting RL combines on-policy distillation with RL, prompting the student agent to emulate the teacher's behavior while optimizing for accumulated returns [Schmitt et al., 2018]. Matthews et al. [2022] further extends this approach to hierarchical policies, transferring pre-trained lowlevel skill policies as teachers and training the student agent alongside a policy-over-teachers from scratch, which weighs the advice from each teacher agent at every time step.\nIn contrast, our approach does not rely on specialized teacher agents for the target problem. Instead, we harness the extensive general knowledge embedded in LLMs and pretrained fundamental skills to construct an LLM-based teacher agent and expedite the learning process of the student agent through on-policy distillation." }, { "figure_ref": [], "heading": "LLM4Teach", "publication_ref": [], "table_ref": [], "text": "In this section, we present our methodology LLM4Teach. To begin with, we fix the notations as follows." }, { "figure_ref": [], "heading": "Notations", "publication_ref": [], "table_ref": [], "text": "We consider a sequential decision-making problem formalized as a Markov Decision Process (MDP), denoted by ⟨S, A, T , R, γ⟩, where S and A denote the state and action spaces, respectively. 
The transition probability function is denoted as T : S × A → P(S), and the reward function is denoted as R : S × A × S → R. Additionally, γ represents the discount factor. The primary objective is to learn an optimal policy π : S → P(A), which maximizes the expected cumulative return over time:\nmax_π E[Σ_t γ^t r_t].\nThe parameter of the policy π is denoted as θ. A standard gradient-based RL algorithm minimizes a surrogate loss, L RL (θ), using gradient descent with respect to θ. This loss is estimated using sampled trajectories, where each trajectory consists of a sequence of tuples of state, action, and reward." }, { "figure_ref": [ "fig_0" ], "heading": "The LLM4Teach Framework", "publication_ref": [], "table_ref": [], "text": "The core principle of LLM4Teach involves the utilization of a pre-trained LLM as a teacher agent to guide a lightweight student RL agent in swiftly acquiring a policy for real-time decision-making to accomplish a specific embodied task. The student agent is allowed to interact with the environment and receive feedback from these interactions to rectify any errors provided by the teacher agent. Following the training phase, only the lightweight student agent is utilized during the testing phase, yet it attains superior capability in accomplishing the target task compared to its teacher. The conceptual framework of this approach is depicted in Figure 1. In the subsequent sections, we present the process of training the student agent in detail." }, { "figure_ref": [], "heading": "On the LLM-based Teacher Agent", "publication_ref": [], "table_ref": [], "text": "In accordance with Ahn et al. [2022], we first notify the LLM of a set of K option policies Π = {π k : S → P(A)} related to the current task using appropriate prompts, where k ∈ {1, 2, ..., K} denotes the option index. When presented with a state s, the student agent requests guidance from the teacher agent for the next-step action. The teacher agent initially selects a high-level option π k from the set Π, prompted by a textual description c(s) of the state s. Subsequently, an action suggestion a ∼ π k (s) is generated based on the chosen option, serving as an instruction provided by the teacher." }, { "figure_ref": [], "heading": "Generating Uncertainty-aware Instructions Using LLM", "publication_ref": [ "b13", "b33", "b4", "b1" ], "table_ref": [], "text": "The process of the student agent learning policies from the teacher agent can be seen as distilling important task-related knowledge from the LLM agent. As demonstrated in Hinton et al. [2015], incorporating uncertainty into knowledge distillation can improve sample efficiency and prevent model over-fitting. Consequently, we propose having the LLM offer uncertainty-aware soft instructions to the student agent. 
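Before formalizing these soft instructions, the option interface assumed by the teacher can be made concrete with a minimal sketch. The dataclass, field names, and sampling helper below are illustrative assumptions rather than the exact implementation; they only spell out what the teacher hands back to the student, namely an action suggestion a ∼ π k (s) drawn from the option selected by the LLM.

```python
# Illustrative option interface for the LLM-based teacher (names are assumptions).
from dataclasses import dataclass
from typing import Any, Callable
import numpy as np

@dataclass
class Option:
    name: str                               # e.g., "pickup the blue key"
    policy: Callable[[Any], np.ndarray]     # s -> action probabilities pi_k(.|s)
    is_terminated: Callable[[Any], bool]    # termination condition of the option

def teacher_action_suggestion(state: Any, chosen: Option, rng=None) -> int:
    """Sample an action suggestion a ~ pi_k(s) from the option picked by the LLM."""
    rng = rng if rng is not None else np.random.default_rng()
    probs = np.asarray(chosen.policy(state), dtype=float)
    probs = probs / probs.sum()             # renormalize to guard against drift
    return int(rng.choice(len(probs), p=probs))
```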
When the student agent sends a text description c(s) to the teacher agent, the teacher agent responds by providing a soft decision π T (•|s), i.e., a distribution over available policies, in the following way:\n\nπ T (•|s) = Σ_k Pr_LLM (k|c(s)) π k (•|s),    (1)\n\nwhere Pr_LLM (k|c(s)) represents the probability of the LLM teacher selecting the kth option given the textual description c(s) of the current state s, and π k (•|s) denotes the policy associated with the kth option. To estimate the uncertainties Pr_LLM (k|c(s)) in our experiments, we query the LLM multiple times with the same prompt to estimate the probability of each decision, similar to Wang et al. [2022]. An alternative approach is to access the logits of tokens relevant to option plans and convert them into probabilities [Carta et al., 2023;Ahn et al., 2022]. We conduct an ablation study on these two approaches in subsection 4.1.\n\nAlgorithm 1 The student agent's policy learning algorithm\nRequire: an LLM agent, pre-trained option policies {π k }, initial policy parameter value θ, maximum allowable number of iterations T\n1: for i = 1, 2, ..., T do\n2: Collect rollouts following the student agent's initial policy and store the data in a buffer D\n3: for each transition (s, a, r) ∈ D do\n4: Generate a prompt with a textual description c(s) of the state s for the LLM-based teacher agent\n5: Get the soft decision of the LLM-based teacher agent according to Equation (1)\n6: end for\n7: for each gradient descent step do\n8: θ ← θ − α∇ θ (L RL (θ) + λ i E s H (π T (•|s)||π θ (•|s)))\n9: end for\n10: end for" }, { "figure_ref": [], "heading": "On the Learning Process of the Student Agent", "publication_ref": [], "table_ref": [], "text": "The policy of the student agent, denoted as π θ (•|s), is learned by minimizing the following loss function:\n\nL(θ) = L RL (θ) + λ E s∼π θ H (π T (•|s)||π θ (•|s)),    (2)\n\nwhere L RL (θ) denotes the traditional loss used in RL algorithms to encode the feedback from the environment. This loss is typically designed to maximize the expected return or rewards obtained by the agent. We incorporate the teacher agent's guidance into the student agent's learning process by introducing a regularization term H (π T (•|s)||π θ (•|s)) that describes the difference between the teacher and student policies. This term captures the Kullback-Leibler (KL) divergence or Wasserstein distance between the policy of the student agent and the policy π T (•|s) of the teacher agent. To control the extent to which the student agent relies on the teacher agent, we introduce an annealing parameter λ. When λ is set to zero, the learning process reduces to a standard RL process without any influence from the teacher agent.\n\nWe initialize the annealing parameter λ with larger values during the initial stages of training. This setup ensures that the student agent pays more attention to the guidance provided by the LLM-based teacher agent, aiming to align its policy with that of the teacher.\n\nAs the training progresses, we gradually decay λ, allowing the student agent to shift its focus towards maximizing its expected return. By reducing the influence of the teacher's guidance, the student agent becomes more independent in its decision-making process and emphasizes its own learned policy. 
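As a concrete illustration of this training objective, the sketch below implements Eq. (2) with a KL-divergence regularizer and an annealed weight λ_i in the spirit of the schedule given next in Eq. (3). The tensor shapes, helper names, and default constants (chosen to mirror the stepwise setting studied in Appendix A.4) are illustrative assumptions rather than a definitive implementation; in particular, the RL surrogate loss is treated as a black box.

```python
# Minimal sketch of the distillation-regularized objective (Eq. (2)) for a
# discrete action space, plus one possible annealing schedule for lambda_i.
import torch
import torch.nn.functional as F

def distillation_regularized_loss(student_logits: torch.Tensor,  # [B, A]
                                  teacher_probs: torch.Tensor,   # [B, A], pi_T(.|s)
                                  rl_loss: torch.Tensor,         # scalar L_RL(theta)
                                  lam: float) -> torch.Tensor:
    """L(theta) = L_RL(theta) + lam * E_s[ KL(pi_T(.|s) || pi_theta(.|s)) ]."""
    log_student = F.log_softmax(student_logits, dim=-1)
    kl = (teacher_probs * (torch.log(teacher_probs + 1e-8) - log_student)).sum(dim=-1)
    return rl_loss + lam * kl.mean()

def lam_schedule(i: int, lam0: float = 10.0, lam_c: float = 0.1,
                 i1: int = 1000, i2: int = 2000) -> float:
    """Stepwise annealing: linear decay from lam0 to lam_c over i1 iterations,
    hold at lam_c until i2, then drop to 0 (one reading of Eq. (3))."""
    k = (lam0 - lam_c) / i1                 # decay rate
    if i < i1:
        return lam0 - k * i
    if i < i2:
        return lam_c
    return 0.0
```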
Specifically, the annealing schedule used is designed as follows:\n\nλ i = { λ 0 − k·i, if i < i 1 ; λ c , if i 1 < i < i 2 ; 0, otherwise },    (3)\n\nwhere i represents the index of the training iteration, k represents the decay rate, λ 0 is the initial value of λ, λ c is a constant value smaller than λ 0 , which is maintained from the i 1 th iteration to the i 2 th iteration, and i 2 indicates the point at which the connection to the LLM-based teacher agent is closed. For more details on the annealing schedule used in our experiments, see Appendix A.4. This linear reduction of λ enables a smooth transition for the student agent from heavily relying on the teacher's guidance to prioritizing the RL objective. It provides a balance between learning from the teacher and acquiring autonomy in decision-making, ultimately leading to improved performance on the target task. When λ eventually reaches 0, we effectively remove the influence of the teacher's instructions on the student agent's policy. At this stage, the student agent no longer requires the teacher's guidance and learns solely from the environment feedback.\n\nThe full learning process is summarized in Algorithm 1." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We validated the performance of our method, LLM4Teach, through extensive experiments. The aim of the experiments is to demonstrate the specific advantages of LLM4Teach compared to RL baseline methods and approaches that solely rely on LLM for decision-making, and to test its potential in handling real-world sequential decision-making problems." }, { "figure_ref": [], "heading": "Simulation Platforms", "publication_ref": [ "b5", "b31" ], "table_ref": [], "text": "MiniGrid offers a customizable grid world environment with various sizes, object types, and objectives, making it a simple representation of grid-based tasks [Chevalier-Boisvert et al., 2023]. These tasks pose a challenge for RL methods because of their sparse rewards.\n\nHabitat is a simulation platform specifically created to support the development of embodied AI systems [Szot et al., 2021]. It provides a comprehensive framework for defining and executing various embodied AI tasks, such as navigation, object rearrangement, and question-answering. Additionally, Habitat enables detailed configuration of embodied agents, including their physical attributes and sensor specifications." }, { "figure_ref": [], "heading": "Baseline Methods", "publication_ref": [ "b20", "b26" ], "table_ref": [], "text": "In the experiments, we include three baseline approaches to assess the performance of LLM4Teach.\n\nLLM soly First, we examine a scenario where only an LLM-based agent is utilized to make real-time decisions, without the involvement of the student agent. In this configuration, all decisions are solely made by the LLM-based agent. This approach allows us to investigate the potential of our proposed LLM4Teach framework in enabling the student agent to outperform its teacher in achieving the desired task.\n\nHierarchical RL Next, in light of the hierarchical nature of the tasks, we explore a hierarchical RL baseline approach that involves training the student agent with pre-trained option policies [Matthews et al., 2022]. 
By incorporating this approach into the experiments, we can assess the benefits of knowledge distillation using a pre-trained LLM that captures world knowledge.\nBaseline RL Finally, we include a Tabula rasa RL that is trained from scratch using the proximal policy optimization (PPO) algorithm [Schulman et al., 2017]. The policy model structure and the training loss function are set the same as our student agent in LLM4Teach." }, { "figure_ref": [ "fig_1" ], "heading": "Experiments on MiniGrid Experimental Setting", "publication_ref": [ "b5", "b11", "b36" ], "table_ref": [], "text": "We created four procedurally generated tasks in the MiniGrid environment [Chevalier-Boisvert et al., 2023]: {SimpleDoorKey, ColoredDoorKey, LavaDoorKey and Di-vergedDoorKey}. In each task, the agents are situated in rooms with varying layouts and their goal is to unlock the exit door using the correct key. In SimpleDoorKey, the agent must explore the room, find a key, and use it to unlock the exit door. In ColoredDoorKey, the exit door can only be unlocked with a key that matches its color, adding complexity for the agent to understand task-specific rules. LavaDoorKey introduces hazard grids (Lava) to the room, requiring the agent to quickly adapt to new elements. DivergedDoorKey presents two exit doors instead of one, allowing the agent to choose either door to escape, emphasizing the importance of using uncertainty-aware instructions to improve overall sample efficiency.\nFor every task, we incorporate 5 specialized options, which are: {explore, go to, pickup, drop, open}. All options, with the exception of explore, are dependent on specific conditions, such as interacting with an object, for example, pickup the red key. These expert policies are compiled under the fundamental task of SimpleDoorKey. Each option policy produces a Dirac delta distribution over actions based on the state. Additional information about the environments and options can be found in Appendix A.1.\nWe use ChatGLM-turbo [Du et al., 2022] as the LLM to construct our teacher agent. This powerful model enables our teacher agent to possess complex reasoning capabilities. To leverage these capabilities, we employ Chain-of-thought (CoT) [Wei et al., 2022] style prompts. The CoT prompts consist of multiple stages that guide the LLM's decisionmaking process. Firstly, the LLM is prompted to summarize the scene, providing a condensed description of the environment. Secondly, it is instructed to reason about the appropriate course of action based on the given context. Finally, the LLM outputs its decision for the given task. To aid the LLM in understanding the reasoning process and ensuring correct output formatting, an arbitrary example is included in the prompt. This example serves as a reference point and helps the LLM grasp the desired output structure. Figure 2 illustrates an example of the dialogues generated by the LLM using this prompt setup in the ColoredDoorKey task. " }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Results on MiniGrid", "publication_ref": [], "table_ref": [], "text": "The main results in Figure 3 show that the baseline RL struggles to complete tasks, even the simplest one (Simple-DoorKey), due to highly sparse rewards. In contrast, hierarchical RL eventually succeeds in the tasks but requires over 10,000 training iterations across all tasks. 
However, LLM4Teach, guided by the LLM-based teacher, effectively leverages the world knowledge embedded in the LLM, leading to significantly higher sample efficiency compared to prior art RL baselines with sparse rewards.\n\nResults also show that LLM4Teach outperforms LLM soly in terms of accumulated returns for all tasks, except for SimpleDoorKey. SimpleDoorKey is the simplest one, with low reasoning difficulty for the LLM. Moreover, all option policies are designed based on this environment, so there is no issue of option policy transfer. Therefore, LLM soly can achieve a success rate of nearly 100% for the task.\n\nFor the other tasks which are more complex than SimpleDoorKey, LLM soly performs unsatisfactorily due to the lack of enough task-grounding knowledge. In comparison, LLM4Teach allows the student agent to learn task-grounding knowledge from the environmental feedback, and thus performs much better than LLM soly. For example, in ColoredDoorKey, given the observation "Agent sees a red key, a blue key, a blue door.", an LLM can suggest "pickup the red key", while the right option is "pickup the blue key", since only the key with the same color as the door can be used to unlock the door. As a result, LLM soly only achieves an average return of 0.52. In contrast, utilizing the student agent within LLM4Teach leads to a significantly higher average return of 0.77, as illustrated in Figure 3. This is due to the student agent's ability to rectify its teacher's errors and adjust its behavior according to environmental feedback.\n\nWe have identified three major categories for the error policies generated by the LLM:\n\n• Incorrect policies: These policies are executable but result in task failure. For example, an incorrect policy could involve moving into the lava, leading to the failure of task completion.\n\n• Inefficient policies: These policies are executable but not necessary for task completion. They can increase the number of steps required to accomplish the task, potentially resulting in time-out errors. For instance, an inefficient policy could involve continuously exploring even after finding the correct key and door, instead of directly proceeding to the door.\n\n• Inconsistent policies: These policies are not executable due to non-compliance with behavioral logic or contextual constraints, e.g., attempting to pick up a new key without first dropping the key that the agent is currently holding." }, { "figure_ref": [ "fig_4" ], "heading": "Ablation Study on Uncertainty-aware Instructions", "publication_ref": [ "b33", "b4", "b1" ], "table_ref": [], "text": "As presented in subsection 3.3, the teacher agent in LLM4Teach offers uncertainty-aware instructions to the student agent, which is a distinguishing feature compared to previous LLM-based agents (e.g., in Ahn et al. [2022]), where deterministic feedback is provided upon receiving a query. 
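To make this uncertainty-aware guidance concrete, the sampling-based variant described in Section 3.3 can be sketched as follows: the same prompt is issued several times, each reply is matched against the known option names, and the empirical frequencies serve as Pr_LLM (k|c(s)) in Eq. (1). The function names, matching rule, and query budget below are illustrative assumptions rather than the exact implementation.

```python
# Sketch: estimate Pr_LLM(k|c(s)) by repeated queries and form pi_T as in Eq. (1).
from collections import Counter
from typing import Callable, Dict, List
import numpy as np

def estimate_option_distribution(prompt: str,
                                 option_names: List[str],
                                 llm_complete: Callable[[str], str],
                                 n_queries: int = 10) -> Dict[str, float]:
    """Empirical distribution over options from n_queries identical LLM queries."""
    votes = Counter()
    for _ in range(n_queries):
        reply = llm_complete(prompt).lower()
        matched = [name for name in option_names if name in reply]
        if matched:
            votes[matched[0]] += 1
    total = sum(votes.values()) or 1
    return {name: votes[name] / total for name in option_names}

def mixed_teacher_policy(option_probs: Dict[str, float],
                         option_action_probs: Dict[str, np.ndarray]) -> np.ndarray:
    """pi_T(.|s) = sum_k Pr_LLM(k|c(s)) * pi_k(.|s)."""
    return sum(p * option_action_probs[name] for name, p in option_probs.items())
```

The alternative, logit-based variant discussed next would replace the voting loop with a single forward pass that reads off the token probabilities of the candidate option names.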
We conducted ablation studies to investigate the benefits of using uncertainty-aware instructions instead of deterministic ones in the DivergedDoorKey task. We considered two approaches for the LLM to provide uncertainty-aware soft instructions. The first one is to query the LLM multiple times with the same prompt to statistically estimate the probability of each decision, similar to Wang et al. [2022]. The other approach is to access the logits of tokens relevant to option plans and convert them into probabilities [Carta et al., 2023;Ahn et al., 2022]. We compare these two approaches with a hard instruction baseline, where the LLM's responses are directly used as deterministic instructions.\nThe result of the ablation study is shown in Figure 4. It can be observed that utilizing uncertainty-aware instructions improves the overall sample efficiency compared to using de-terministic ones. Moreover, there is no significant disparity in performance between the two approaches for generating uncertainty-aware instructions. The first approach is simpler to implement in practical scenarios but consumes more computational resources due to multiple queries to LLMs, particularly when the observation space is large. On the other hand, the second approach necessitates access to logits, making it applicable only to open-source LLMs." }, { "figure_ref": [], "heading": "Experiments on Habitat", "publication_ref": [ "b31" ], "table_ref": [], "text": "To evaluate the potential applicability of our method in realworld scenarios, we conducted additional experiments using Habitat [Szot et al., 2021]." }, { "figure_ref": [ "fig_5" ], "heading": "Experimental Setting", "publication_ref": [ "b20", "b6", "b23" ], "table_ref": [], "text": "In our experiments, we focus on a manipulation task called Nav & Pick. The objective of the robotic agent is to navigate to the table without any collisions and subsequently perform a precise object pickup. Refer to Figure 5 for a visual representation.\nWe conduct separate pre-training for two high-level options, namely Navigate and Pick. These options are utilized by both LLM4Teach and the hierarchical RL baseline [Matthews et al., 2022]. To ensure the effectiveness of option training, we employ ten distinct training environment specifications, each with varying object and target locations. Furthermore, the agent's initial positions are randomly generated upon environment reset, ensuring diverse training scenarios. For each option, we utilize a ResNet18 backbone in conjunction with a 2-layer MLP architecture to train the corresponding models. For more detailed information about the environments and training parameters, refer to Appendix A.2.\nWe select the Vicuna-7b model [Chiang et al., 2023] as the LLM used in LLM4Teach, following a similar prompt design as in previous experiments on Minigrid. Moreover, we utilize visual observations captured by the on-board camera as input queries for the LLM. To enable the LLM-based teacher agent to comprehend these visual inputs, we utilize a preconfigured translator that generates natural language descriptions listing the objects identified in the visual inputs. Alternatively, pretrained visual-language models such as CLIP [Radford et al., 2021] can also be utilized for this purpose." 
}, { "figure_ref": [ "fig_6" ], "heading": "Results on Habitat", "publication_ref": [], "table_ref": [], "text": "Due to the task being limited to home scenarios, the LLM effectively covers the common-sense reasoning abilities required to successfully complete the task. This results in few erroneous decision-making during option selection. Consequently, the task completion rate and average returns for LLM soly, as depicted in Figure 6, are relatively high. In contrast, the RL baselines struggle to complete the task due to the scarcity of rewards. Our approach, LLM4Teach, consistently outperforms all RL-based baselines in terms of both sample efficiency and asymptotic performance. This highlights the effective utilization of the LLM-based teacher's knowledge by the student agent in LLM4Teach, facilitating the learning of appropriate policies. Given enough training iterations, our approach exhibits a higher success rate compared to LLM soly. The primary advantage of LLM4Teach is that it is an extremely lightweight RL-based student agent specifically de- signed for utilization in the final online testing phase, instead of relying on the heavier LLM." }, { "figure_ref": [], "heading": "Concluding Remarks", "publication_ref": [], "table_ref": [], "text": "Both RL and LLMs have limitations in handling complex sequential decision-making problems. RL often lacks sample efficiency and incurs high exploration costs, while LLMs are prone to decision errors and have high deployment costs.\nCombining LLMs with RL to overcome these limitations is a natural idea, but creating an effective interface between them poses challenges. LLMs utilize texts as input and output, making them suitable for providing high-level instructions, whereas RL operates at a lower level and uses numerical vectors instead of texts.\nHere we present LLM4Teach, a novel framework that combines LLMs and RL for embodied sequential decisionmaking tasks. Our approach leverages the reasoning capabilities of LLMs to develop a highly capable RL-based student agent. In particular, we use the LLM to provide high-level suggestions on available options for policy training of the student agent. Extensive experiments demonstrate that our student agent outperforms all RL baselines in sample efficiency. Meanwhile, it achieves superior performance to LLM soly in terms of task completion success rate with much fewer computational resources during online testing. For instance, in MiniGrid experiments, the student agent's model size is 24K compared to LLM's 130B. Similarly, in Habitat experiments, the student agent's model size is 10M while LLM's is 7B." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Minigrid Experiments Option Framework", "publication_ref": [ "b30" ], "table_ref": [], "text": "We address the hierarchical structure of target tasks by employing an option framework [Sutton et al., 1999], which defines an option as a sub-policy that specifies a behavior extended over time. Each option ω is defined by the triplet (I, π, β), representing the set of initiation states, the acting policy, and the termination condition, respectively.\nFor the MiniGrid environments, we utilize a set of options consisting of: {explore, go to, pickup, drop, open}. Each option can be initiated from any state, i.e., I ω = S for all options. 
Here is a breakdown of each option and its associated termination conditions:\n\n• {explore}: During exploration, the agent systematically scans the unexplored grid row-by-row following a predetermined strategy. This option terminates when the agent observes walls forming a closed area.\n\n• {go to}: The agent plans a path to the target object using the A* algorithm and terminates the option upon reaching the target object.\n\n• {pickup}: The agent attempts to pick up the target object if it is not already holding another object. Otherwise, it first drops the current object at the nearest available position before picking up the new one.\n\n• {drop}: The agent drops the object it is currently holding at the nearest available position.\n\n• {open}: This is a one-step action that attempts to interact with the object in front of the agent." }, { "figure_ref": [], "heading": "Hyperparameters", "publication_ref": [ "b26" ], "table_ref": [], "text": "We choose Proximal Policy Optimization (PPO) [Schulman et al., 2017] as the base RL algorithm for training the student agent. " }, { "figure_ref": [], "heading": "A.2 Habitat Experiments", "publication_ref": [ "b31" ], "table_ref": [], "text": "Task Details In our Habitat experiments, the robot agent is equipped with a wheeled base, a 7-degree-of-freedom (DoF) arm manipulator, and a parallel-jaw gripper. Additionally, it is equipped with a camera mounted on its \"head\" that provides a 90° field of view and captures visual data at a resolution of 256 × 256 pixels. Therefore, the observation space of the environment consists of a visual observation denoted as o v ∈ R 256×256×1 from the depth camera. It also includes a sensor observation o s ∈ R 24 obtained from various sensors such as joint sensors, gripping sensors, the end effector of the arm, object, and target GPS sensors, among others. The action space in our setup is 11-dimensional, comprising 2 actions for controlling the robot positions, 7 actions for controlling the robot arm, 1 action indicating whether the robot is holding an object, and 1 action indicating termination. This action space enables the agent to execute precise movements and manipulations required to accomplish the target task. For detailed information on the training of option policies for the LLM agent, refer to the Habitat documentation [Szot et al., 2021]. These option policies for the teacher model are kept fixed during the knowledge distillation process to ensure consistency and stability during execution.\n\nThe agent is trained using a reward function whose terms are defined as follows: I pickup is an indicator function that is 1 if the agent has picked up the object, I holding is an indicator function that is 1 if the robot is holding an object, ∆ o arm represents the change in Euclidean distance between the end-effector and the object, and I force is an indicator function that is 1 if the force on the robot due to collision exceeds a specified limit. Additionally, a slack reward of -0.005 is given to incentivize the agent to complete the task as quickly as possible." }, { "figure_ref": [], "heading": "Hyperparameters", "publication_ref": [ "b37" ], "table_ref": [], "text": "We train the policies using Decentralized Distributed Proximal Policy Optimization (DD-PPO) [Wijmans et al., 2019] with Wasserstein distance regularization terms. 
The hyperparameters and their values used in the experiments are listed in " }, { "figure_ref": [], "heading": "A.3 Detailed Results", "publication_ref": [], "table_ref": [], "text": "We provide the detailed asymptotic performances for all tasks in " }, { "figure_ref": [], "heading": "A.4 Additional study on the annealing schedule", "publication_ref": [], "table_ref": [], "text": "As described in Section 3.4 (see Equation 3), the value of λ depends on a set of hyper-parameters, including λ 0 , λ c , i 1 , i 2 , and k. In our experiments, we set i 2 = 2000 for all MiniGrid tasks, meaning that the value of λ reduces to 0 after 2000 iterations. In this section, we present an ablation study that compares different annealing schedules for λ. The schedules considered are as follows:\n1. Constant value: λ takes a constant value of 0.1 then decreases to 0 at the i 2 th iteration (λ 0 = 0.1, λ c = 0.1, i 1 = 2000);\n2. Linearly decaying value: the value of λ linearly decays from 10 to 0 over the first 1000 iterations (λ 0 = 10, λ c = 0, i 1 = 1000) or 2000 iterations (λ 0 = 10, λ c = 0, i 1 = 2000);\n3. Stepwise value: The value of λ linearly decreases from 10 to 0.1 over 1000 iterations, and then remains constant at 0.1 for some iterations before eventually reducing to 0 (λ 0 = 10, λ c = 0.1, i 1 = 1000).\nAs depicted in Figure 7, we observed that only the last annealing schedule yielded successful results. We argue that this result is attributed to the fact that when the LLM-based teacher is removed, the student agent shall experience a period of policy oscillation in the short term. By allowing the student agent to adapt to the subtle influence of the LLM-based teacher initially and gradually removing it, the agent can navigate through this learning period more smoothly. Empirically we suggest setting i 1 as the iteration number when the regularization term H (π T (•|s)||π θ (•|s)) converges, and setting i 2 as two times i 1 . These findings highlight the importance of carefully choosing an appropriate annealing schedule for λ to ensure effective knowledge transfer and smooth adaptation of the student agent during the training process. " } ]
Recent studies have uncovered the potential of Large Language Models (LLMs) in addressing complex sequential decision-making tasks through the provision of high-level instructions. However, LLM-based agents lack specialization in tackling specific target problems, particularly in real-time dynamic environments. Additionally, deploying an LLM-based agent in practical scenarios can be both costly and time-consuming. On the other hand, reinforcement learning (RL) approaches train agents that specialize in the target task but often suffer from low sampling efficiency and high exploration costs. In this paper, we introduce a novel framework that addresses these challenges by training a smaller, specialized student RL agent using instructions from an LLM-based teacher agent. By incorporating the guidance from the teacher agent, the student agent can distill the prior knowledge of the LLM into its own model. Consequently, the student agent can be trained with significantly less data. Moreover, through further training with environment feedback, the student agent surpasses the capabilities of its teacher for completing the target task. We conducted experiments on challenging MiniGrid and Habitat environments, specifically designed for embodied AI research, to evaluate the effectiveness of our framework. The results clearly demonstrate that our approach achieves superior performance compared to strong baseline methods.
Large Language Model as a Policy Teacher for Training Reinforcement Learning Agents
[ { "figure_caption": "Figure 1 :1Figure1: An illustration of our LLM4Teach framework using the MiniGrid environment as an exemplar. The LLM-based teacher agent responds to observations of the state provided by the environment by offering soft instructions. These instructions take the form of a distribution over a set of suggested actions. The student agent is trained to optimize two objectives simultaneously. The first one is to maximize the expected return, the same as in traditional RL algorithms. The other one is to encourage the student agent to follow the guidance provided by the teacher. As the student agent's expertise increases during the training process, the weight assigned to the second objective gradually decreases over time, reducing its reliance on the teacher.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An example of a prefix prompt and an interaction between the student agent and the LLM-based teacher agent for the task Col-oredDoorKey. The Prefix prompt consists of two blocks: the instruction block briefly introduces the target problem and the CoT reasoning process; and the example block provides one arbitrary example of the expected format of the response from the LLM.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: The tested average returns (top row) and task completion success rates (bottom row) vs. the training iteration index of the compared methods across four environments. The dotted vertical line indicates the point at which the teacher's guidance is diminished, i.e., when λi = 0. LLM soly does not involve any learning, hence we report its average performance over 500 testing seeds, represented by a dashed horizontal line. For other approaches, we evaluate their policies every 10 iterations with 10 randomly generated testing seeds and report the averaged testing performance here. With our approach, the student agent effectively leverages the knowledge of the LLM-based teacher to bootstrap the early learning stage. Except for the SimpleDoorKey task, the student agent in LLM4Teach ultimately outperforms the LLMbased agent by learning from environment feedback through minimizing a traditional RL loss.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: study on uncertainty-aware instructions. It shows that the utilization of two type of uncertainty-aware instructions by the teacher agent results in improved sample efficiency for the student agent.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Habitat environment. Left: The visual observation from the onboard camera. Right: A view of the acting robot and its workspace from a third-party camera. Note that the third-party camera mentioned is purely for illustrative purposes and is not utilized during either the training or testing phases.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: The tested average returns (left) and task completion success rates (right) vs. the training iteration index of the compared methods on the Nav&Pick task. 
For explanations of the lines and curves in the figure, see the caption of Figure 3.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" } ]
Zihao Zhou; Bin Hu; Chenyang Zhao; Pu Zhang; Bin Liu
[ { "authors": "Rishabh Agarwal; Max Schwarzer; Pablo Samuel Castro; Aaron C Courville; Marc Bellemare", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b0", "title": "Reincarnating reinforcement learning: Reusing prior computation to accelerate progress", "year": "2022" }, { "authors": "Anthony Michael Ahn; Noah Brohan; Yevgen Brown; Omar Chebotar; Byron Cortes; Chelsea David; Chuyuan Finn; Keerthana Fu; Karol Gopalakrishnan; Hausman", "journal": "", "ref_id": "b1", "title": "Do as I can, not as I say: Grounding language in robotic affordances", "year": "2022" }, { "authors": "Ajay Narasimha Harel Biggie; Dusty Mopidevi; Christoffer Woods; Heckman", "journal": "", "ref_id": "b2", "title": "Tell me where to go: A composable framework for context-aware embodied robot navigation", "year": "2023" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Thomas Carta; Clément Romac; Thomas Wolf; Sylvain Lamprier; Olivier Sigaud; Pierre-Yves Oudeyer", "journal": "", "ref_id": "b4", "title": "Grounding large language models in interactive environments with online reinforcement learning", "year": "2023" }, { "authors": "Maxime Chevalier-Boisvert; Bolun Dai; Mark Towers; Rodrigo De Lazcano; Lucas Willems; Salem Lahlou; Suman Pal; Pablo Samuel Castro; Jordan Terry", "journal": "", "ref_id": "b5", "title": "Minigrid & miniworld: Modular & customizable reinforcement learning environments for goal-oriented tasks", "year": "2023" }, { "authors": "Wei-Lin Chiang; Zhuohan Li; Zi Lin; Ying Sheng; Zhanghao Wu; Hao Zhang; Lianmin Zheng; Siyuan Zhuang; Yonghao Zhuang; Joseph E Gonzalez; Ion Stoica; Eric P Xing", "journal": "", "ref_id": "b6", "title": "Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality", "year": "2023-03" }, { "authors": "Cédric Colas; Laetitia Teodorescu; Pierre-Yves Oudeyer; Xingdi Yuan; Marc-Alexandre Côté", "journal": "", "ref_id": "b7", "title": "Augmenting autotelic agents with large language models", "year": "2023" }, { "authors": "Felipe Leno; Da Silva; Ruben Glatt; Anna Helena; Reali Costa", "journal": "", "ref_id": "b8", "title": "Simultaneously learning and advising in multiagent reinforcement learning", "year": "2017" }, { "authors": "Felipe Leno; Da Silva; Garrett Warnell; Anna Helena Reali; Peter Costa; Stone", "journal": "Autonomous Agents and Multi-Agent Systems", "ref_id": "b9", "title": "Agents teaching agents: a survey on inter-agent transfer learning", "year": "2020" }, { "authors": "Danny Driess; Fei Xia; S M Mehdi; Corey Sajjadi; Aakanksha Lynch; Brian Chowdhery; Ayzaan Ichter; Jonathan Wahid; Quan Tompson; Tianhe Vuong; Yu", "journal": "", "ref_id": "b10", "title": "Palm-e: An embodied multimodal language model", "year": "2023" }, { "authors": "Zhengxiao Du; Yujie Qian; Xiao Liu; Ming Ding; Jiezhong Qiu; Zhilin Yang; Jie Tang", "journal": "", "ref_id": "b11", "title": "Glm: General language model pretraining with autoregressive blank infilling", "year": "2022" }, { "authors": "Yuqing Du; Olivia Watkins; Zihan Wang; Cédric Colas; Trevor Darrell; Pieter Abbeel; Abhishek Gupta; Jacob Andreas", "journal": "", "ref_id": "b12", "title": "Guiding pretraining in reinforcement learning with large language models", "year": "2023" }, { "authors": "Geoffrey 
Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b13", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Bin Hu; Chenyang Zhao; Pu Zhang; Zihao Zhou; Yuanhang Yang; Zenglin Xu; Bin Liu", "journal": "", "ref_id": "b14", "title": "Enabling efficient interaction between an agent and an llm: A reinforcement learning approach", "year": "2023" }, { "authors": "Wenlong Huang; Pieter Abbeel; Deepak Pathak; Igor Mordatch", "journal": "PMLR", "ref_id": "b15", "title": "Language models as zero-shot planners: Extracting actionable knowledge for embodied agents", "year": "2022" }, { "authors": "Martin Klissarov; D' Pierluca; Shagun Oro; Roberta Sodhani; Pierre-Luc Raileanu; Pascal Bacon; Amy Vincent; Mikael Zhang; Henaff", "journal": "", "ref_id": "b16", "title": "Motif: Intrinsic motivation from artificial intelligence feedback", "year": "2023" }, { "authors": "Knox Bradley; Peter Stone", "journal": "", "ref_id": "b17", "title": "Interactively shaping agents via human reinforcement: The tamer framework", "year": "2009" }, { "authors": "Minae Kwon; Sang Michael Xie; Kalesha Bullard; Dorsa Sadigh", "journal": "", "ref_id": "b18", "title": "Reward design with language models", "year": "2023" }, { "authors": "Jiageng Mao; Yuxi Qian; Hang Zhao; Yue Wang", "journal": "", "ref_id": "b19", "title": "Gptdriver: Learning to drive with gpt", "year": "2023" }, { "authors": "Michael Matthews; Mikayel Samvelyan; Jack Parker-Holder; Edward Grefenstette; Tim Rocktäschel", "journal": "", "ref_id": "b20", "title": "Hierarchical kickstarting for skill transfer in reinforcement learning", "year": "2022" }, { "authors": "Kolby Nottingham; Yasaman Razeghi; Kyungmin Kim; Pierre Lanier; Roy Baldi; Sameer Fox; Singh", "journal": "", "ref_id": "b21", "title": "Selective perception: Optimizing state descriptions with reinforcement learning for language model actors", "year": "2023" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b22", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b23", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Stefan Schaal", "journal": "Advances in neural information processing systems", "ref_id": "b24", "title": "Learning from demonstration", "year": "1996" }, { "authors": "Simon Schmitt; Jonathan J Hudson; Augustin Zidek; Simon Osindero; Carl Doersch; Wojciech M Czarnecki; Joel Z Leibo; Heinrich Kuttler; Andrew Zisserman; Karen Simonyan", "journal": "", "ref_id": "b25", "title": "Kickstarting deep reinforcement learning", "year": "2018" }, { "authors": "John Schulman; Filip Wolski; Prafulla Dhariwal; Alec Radford; Oleg Klimov", "journal": "", "ref_id": "b26", "title": "Proximal policy optimization algorithms", "year": "2017" }, { "authors": "Sha Hao; Yao Mu; Yuxuan Jiang; Li Chen; Chenfeng Xu; Ping Luo; Eben Shengbo; Masayoshi Li; Wei Tomizuka; Mingyu Zhan; Ding", "journal": "", "ref_id": "b27", "title": "Languagempc: Large language models as decision makers for autonomous driving", "year": "2023" }, { "authors": "Noah Shinn; Federico Cassano; Ashwin Gopinath; Shunyu Karthik R Narasimhan; Yao", "journal": "", "ref_id": "b28", "title": "Reflexion: Language agents with verbal 
reinforcement learning", "year": "2023" }, { "authors": "Hee Chan; Jiaman Song; Clayton Wu; Brian M Washington; Wei-Lun Sadler; Yu Chao; Su", "journal": "", "ref_id": "b29", "title": "Llm-planner: Few-shot grounded planning for embodied agents with large language models", "year": "2023" }, { "authors": "Doina Richard S Sutton; Satinder Precup; Singh", "journal": "Artificial intelligence", "ref_id": "b30", "title": "Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning", "year": "1999" }, { "authors": "Andrew Szot; Alexander Clegg; Eric Undersander; Erik Wijmans; Yili Zhao; John Turner; Noah Maestre; Mustafa Mukadam; Devendra Singh Chaplot; Oleksandr Maksymets", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b31", "title": "Habitat 2.0: Training home assistants to rearrange their habitat", "year": "2021" }, { "authors": "Ikechukwu Uchendu; Ted Xiao; Yao Lu; Banghua Zhu; Mengyuan Yan; Joséphine Simon; Matthew Bennice; Chuyuan Fu; Cong Ma; Jiantao Jiao", "journal": "PMLR", "ref_id": "b32", "title": "Jump-start reinforcement learning", "year": "2023" }, { "authors": "Xuezhi Wang; Jason Wei; Dale Schuurmans; Quoc Le; Ed Chi; Sharan Narang; Aakanksha Chowdhery; Denny Zhou", "journal": "", "ref_id": "b33", "title": "Self-consistency improves chain of thought reasoning in language models", "year": "2022" }, { "authors": "Guanzhi Wang; Yuqi Xie; Yunfan Jiang; Ajay Mandlekar; Chaowei Xiao; Yuke Zhu; Linxi Fan; Anima Anandkumar", "journal": "", "ref_id": "b34", "title": "Voyager: An open-ended embodied agent with large language models", "year": "2023" }, { "authors": "Lei Wang; Chen Ma; Xueyang Feng; Zeyu Zhang; Hao Yang; Jingsen Zhang; Zhiyuan Chen; Jiakai Tang; Xu Chen; Yankai Lin", "journal": "", "ref_id": "b35", "title": "A survey on large language model based autonomous agents", "year": "2023" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Ed Chi; Quoc Le; Denny Zhou", "journal": "", "ref_id": "b36", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Erik Wijmans; Abhishek Kadian; Ari Morcos; Stefan Lee; Irfan Essa; Devi Parikh; Manolis Savva; Dhruv Batra", "journal": "", "ref_id": "b37", "title": "Dd-ppo: Learning near-perfect pointgoal navigators from 2.5 billion frames", "year": "2019" }, { "authors": "Zhiheng Xi; Wenxiang Chen; Xin Guo; Wei He; Yiwen Ding; Boyang Hong; Ming Zhang; Junzhe Wang; Senjie Jin; Enyu Zhou", "journal": "", "ref_id": "b38", "title": "The rise and potential of large language model based agents: A survey", "year": "2023" }, { "authors": "Sherry Yang; Ofir Nachum; Yilun Du; Jason Wei; Pieter Abbeel; Dale Schuurmans", "journal": "", "ref_id": "b39", "title": "Foundation models for decision making: Problems, methods, and opportunities", "year": "2023" }, { "authors": "Shunyu Yao; Jeffrey Zhao; Dian Yu; Izhak Shafran; Yuan Karthik R Narasimhan; Cao", "journal": "", "ref_id": "b40", "title": "React: Synergizing reasoning and acting in language models", "year": "2022" }, { "authors": "Wenhao Yu; Nimrod Gileadi; Chuyuan Fu; Sean Kirmani; Kuang-Huei Lee; Montse Gonzalez Arenas; Lewis Hao-Tien; Tom Chiang; Leonard Erez; Jan Hasenclever; Humplik", "journal": "", "ref_id": "b41", "title": "Language to rewards for robotic skill synthesis", "year": "2023" }, { "authors": "Sheng Yue Zhen; Lu Bi; Pan Xing-Tong; Shi Wei-Qin; Chen Haipeng; Fang Zi-Rui; Yi-Shu", "journal": "", "ref_id": "b42", "title": "Robot task planning based on 
large language model representing knowledge with directed graph structures", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 150.05, 537.41, 73.18, 12.72 ], "formula_id": "formula_0", "formula_text": "max_π E[Σ_t γ^t r_t]." }, { "formula_coordinates": [ 4, 58.98, 215.32, 238.02, 19.7 ], "formula_id": "formula_1", "formula_text": "θ ← θ − α∇ θ (L RL (θ) + λ i E s H (π T (•|s)||π θ (•|s)))" }, { "formula_coordinates": [ 4, 100.11, 285.24, 196.89, 20.14 ], "formula_id": "formula_2", "formula_text": "π T (•|s) = Σ_k Pr_LLM (k|c(s)) π k (•|s), (1)" }, { "formula_coordinates": [ 4, 79.71, 481.67, 217.3, 10.32 ], "formula_id": "formula_3", "formula_text": "L(θ) = L RL (θ) + λ E s∼π θ H (π T (•|s)||π θ (•|s)), (2)" }, { "formula_coordinates": [ 4, 361.89, 131.98, 196.11, 30.87 ], "formula_id": "formula_4", "formula_text": "λ i = { λ 0 − k·i, if i < i 1 ; λ c , if i 1 < i < i 2 ; 0, otherwise }, (3)" } ]
2023-11-22
[ { "figure_ref": [], "heading": "Introduction and Motivation", "publication_ref": [ "b13", "b6", "b0", "b10", "b16", "b1" ], "table_ref": [], "text": "In high-stakes scenarios, such as industrial or medical applications, ensuring the reliability of machine learning model predictions is paramount. These domains often present dynamic and uncertain environments, necessitating adaptive machine learning solutions with minimal operational overhead. A prevalent issue impacting prediction reliability is concept drift, where a data distribution changes over time [14]. It can be denoted as P train (X, Y ) ̸ = P online, t (X, Y ), representing disparities in data distributions during initial training and online operation. This is common in various domains, where alterations in conditions lead to non-stationary data streams. If overlooked, concept drift can degrade model performance across applications. Therefore, adaptive strategies like periodic model updates or retrainings, especially upon drift detection, can be applied to maintain model reliability in evolving operational landscapes. However, most conventional drift detection algorithms, e.g. [7,1], rely on error rates that demand access to scarce and costly true labels. An alternative is given by a class of drift detectors that work in an unsupervised way, utilizing a model's prediction confidence / uncertainty as a proxy for the error rate, such as Confidence Distribution Batch Detection (CDBD) [11] and Margin Density Drift Detection (MD3) [17]. More recently, Uncertainty Drift Detection (UDD) was proposed by Baier et al. [2], which utilizes neural network uncertainty estimates Workshop on Distribution Shifts, 37th Conference on Neural Information Processing Systems (NeurIPS 2023)." }, { "figure_ref": [ "fig_0" ], "heading": "Methodology and Experiments", "publication_ref": [], "table_ref": [], "text": "To compare the uncertainty estimation methods introduced in the following, we conduct two experiments for each method and dataset. Both start by training the method with the initial five percent of the whole data stream. The first experiment serves as a baseline and thus, the remaining data is tested without analyzing uncertainty estimates or triggering retrainings. In the main experiment however, batches of the stream are evaluated and uncertainty estimates are used as a proxy for the error rate of the ADWIN detector. Once a drift is detected, a retraining is triggered with the initial five percent plus the most recent samples equivalent to one percent of the stream size. Thereby, models may adapt to new concepts while retaining sufficient generalization. Every experiment is repeated five times with different random seeds and results are averaged to allow for a fair comparison. Figure 1 illustrates the process of the main experiment. " }, { "figure_ref": [], "heading": "Uncertainty Estimation Methods", "publication_ref": [ "b13", "b11", "b1", "b5", "b9", "b11", "b4" ], "table_ref": [], "text": "To quantify model uncertainty, Bayesian neural networks, which learn a posterior distribution over model parameters, can be employed [14]. This distribution enables the application of Bayesian model averaging (BMA) during inference. Therefore, multiple weights w i are drawn to gather a distribution of predictions p i (y|w i , x), given input features x and target labels y. The final prediction p(y|x) is then given as the average\np(y|x) = 1 P P i=1 p i (y|w i , x).(1)\nFor regression tasks, the uncertainty is the standard deviation of said distribution. 
While there are several uncertainty-related metrics for classification tasks, only Shannon's entropy H does not require ground-truth labels. Given the final prediction p(y|x) with K classes, it is computed as\nH[p(y|x)] = - K k=1 p(y = k|x) • log 2 p(y = k|x).(2)\nAlthough bayesian methods were previously considered state-of-the-art, they are computationally intractable for modern neural networks with millions of parameters [12]. Therefore, alternatives have been developed, of which we analyzed the following in our experiments. To get an uncertainty estimate, Shannon's entropy H is applied to the final prediction of each method.\nBasic Neural Network. Given the focus on classification tasks, a distribution of predictions is not necessarily required. Hence, the simplest method is to use a single prediction from an unmodified neural network. The motivation for this is to have a baseline for the more sophisticated methods.\nMonte Carlo Dropout (MCD). Rather than drawing multiple weights from a posterior distribution as in BMA, a random dropout filter is applied to the neurons for several forward passes. These estimates are then averaged to get a final prediction. This allows for estimating the uncertainty in the model parameters based on the variability of the predictions across different dropout masks [2,6].\nEnsemble. A distribution of predictions can also be won by training multiple neural networks. Different seeds of members introduce randomness due to their influence on the initial weights as well as the shuffling of data during training. As Lakshminarayanan et al. [10] have shown, few members, i.e. 5, can be sufficient for good uncertainty estimates.\nStochastic Weight Averaging Gaussian (SWAG). Based on Stochastic Weight Averaging (SWA), a generalization technique in deep networks, Maddox et al. [12] propose a method to approximate a posterior distribution over neural network weights. Therefore, a Gaussian is fit utilizing the SWA solution as the first moment and a low rank plus diagonal covariance also inferred from stochastic gradient descent iterates. Given this posterior distribution, BMA is applied to get a final prediction.\nActivation Shaping (ASH). The ASH method can be considered a more advanced version of the basic neural network, as it also works on single predictions. Djurisic et al. [5] introduced it as an out-of-distribution (OOD) detection method that reaches state-of-the-art performance. Assuming over-parameterized feature representations in modern neural networks, the hypothesis is that pruning a larger percentage of activations in a late layer helps with tasks such as OOD detection.\nThe hyperparameters of the introduced methods as well as the model architectures can be found in Appendix A.1. Furtheremore, we include details of the tuning process in Appendix A.3." }, { "figure_ref": [], "heading": "Drift Detector", "publication_ref": [ "b6", "b14", "b2", "b7", "b15", "b18", "b2" ], "table_ref": [], "text": "Concept drift detectors, such as Drift Detection Method [7], Page Hinkley Test [15], and ADWIN [3], are typically error rate-based, necessitating access to costly true labels [8]. In contrast, data distribution-based detectors exclusively analyze input features, often using distance metrics like the Kolmogorov-Smirnov test [16] to identify changes in feature distribution. 
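Before turning to the detector itself, a minimal sketch of the entropy-based proxy from Eq. (2), assuming the class probabilities (e.g. averaged over several stochastic forward passes) are already available; the function names are illustrative and not taken from the paper:

```python
import numpy as np

def predictive_entropy(probs: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Shannon entropy H[p(y|x)] in bits for each row of class probabilities.

    probs: array of shape (N, K) holding the final prediction p(y|x),
           e.g. the mean over Monte Carlo Dropout passes or ensemble members.
    """
    probs = np.clip(probs, eps, 1.0)              # avoid log2(0)
    return -(probs * np.log2(probs)).sum(axis=1)  # Eq. (2)

def averaged_entropy(per_pass_probs: np.ndarray) -> np.ndarray:
    """per_pass_probs has shape (T, N, K): average the passes first, then take the entropy."""
    return predictive_entropy(per_pass_probs.mean(axis=0))
```

These per-sample entropy values are what is fed to the drift detector described below in place of an error rate.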
Regardless of the detection method employed, distinguishing between noise and genuine concept drift poses a significant challenge [19], requiring a balance between swift adaptation to changes and resilience to noise. ADWIN offers performance guarantees for false positives and false negatives, making it an attractive choice. Furthermore, it is able to work with any real-valued input instead of beeing limited to an error rate between 0-1. As introduced by Bifet et al. [3], ADWIN utilizes sliding windows of variable size. While no drift is present, new samples are added to a window W. After each sample, the algorithm attempts to find two sub-windows W 0 and W 1 that contain distinct averages. Once this happens a drift is assumed and the older sub-window is discarded. The variability of heterogeneous real-world data streams can be addressed by the sensitivity parameter δ ∈ (0, 1). The configuration for our experiments can be found in Appendix A.1." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b17" ], "table_ref": [], "text": "For our studies, we use seven real-world classification datasets from the USP Data Stream Repository [18]. They encompass abrupt, incremental and reocurring drifts, along with combinations thereof. In the Gas sensor dataset chemical sensor data is analyzed to identify one of six gases. The Electricity dataset focuses on predicting market price changes driven by supply and demand. For the Rialto dataset, segments of images from a timelapse with changing weather conditions shall be classified. Lastly, optical sensors are used to analyze moving patterns of flying insect species while drift is artificially introduced to generate the InsAbr, InsInc, InsIncAbr and InsIncReo datasets." }, { "figure_ref": [ "fig_1" ], "heading": "Metrics and Results", "publication_ref": [ "b12", "b3" ], "table_ref": [ "tab_0", "tab_0" ], "text": "For evaluation, we focus on the following two metrics to capture the quality of the uncertainty estimates as well as the drift detection performance: Expected Calibration Error (ECE) ↓ [13] measures the average deviation between prediction confidence and accuracy. As the name suggests, it quantifies how well a model is calibrated. We expect that calibration correlates positively with drift detection capability. Matthew's Correlation Coefficient (MCC) ↑ is able to handle class imbalances which generally makes it a good metric for classification tasks [4]. We employ the MCC to measure the overall prediction performance of the models, averaged over the complete experiment runs. We expect that poor drift detection performance will lead to unsuitable retraining points, in turn producing low MCC scores and vice versa.\nThe results of our experiments can be found in Table 1. Analyzing the MCC values shows that the SWAG method offers the most balanced performance across all datasets. However, the gap in performance to the other methods is minimal. In fact, all methods perform fairly similarly. Surprisingly, even the basic method without any modifications keeps up with the others. Greater differences can be identified when analyzing the ECE as depicted in Figure 2. Here, the SWAG method offers significantly better calibrated predictions in nearly all datasets. The only exception is the InsIncAbr dataset, where all methods achieve a proficient calibration. All other methods appear to be similarly worse calibrated compared to SWAG for the remaining datasets. 
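For reference, the ECE reported above can be computed from prediction confidences and correctness indicators as follows; this is a standard equal-width-binning sketch, and the bin count of 10 is an assumption rather than a detail taken from the paper:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
    """ECE: average |accuracy - confidence| over equal-width confidence bins,
    weighted by the fraction of samples falling into each bin.

    confidences: (N,) maximum softmax probability of each prediction
    correct:     (N,) 1 if the prediction was correct, else 0
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(confidences)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()
            conf = confidences[in_bin].mean()
            ece += (in_bin.sum() / n) * abs(acc - conf)
    return ece
```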
Despite that, this does not directly translate to a better drift detection performance, as shown by the MCC values. Meanwhile, the total execution time fluctuates notably depending on the method selected, as presented in the last row of Table 1. As the basic and ASH method are based on a single sample, they serve as a lower bound in this regard. While MCD and the SWAG method both increase the inference runtime due to the sampling process, adaptations in the training process of the SWAG method incurr additional overhead. Although the execution time of the ensemble could be reduced by parallelizing the training and inference process of individual ensemble members, this would require additional computational resources. Hence, we choose not to, resulting in the highest execution time by far. Appendix A.2 includes further details of the main experiment as well as an additional experiment to validate the retraining positions found by the uncertainty-based detector. Furthermore, it contains the standard deviations of our experiments. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we implemented five uncertainty estimation methods for classification tasks and evaluated them in experiments including seven real-world datasets. Our goal was to compare the utility of their uncertainty estimates for unsupervised concept drift detection by using them as a proxy for the error-rate in combination with the ADWIN detector. Thereby, drift points in data streams shall be identified to trigger retrainings at the appropriate time and ultimately prevent model decay. Interestingly, even our baseline method, relying solely on the entropy calculated from the softmax scores, performed competitively with more sophisticated state-of-the-art methods. Moreover, all methods performed fairly similar in terms of overall classification performance as measured by the MCC metric. While the SWAG method achieved the most balanced MCC values, differences were only marginal. However, this was not the case when analyzing the ECE. Here the SWAG method offers significantly better calibrated predictions than all other methods. Regardless, these did not translate to better results for the drift detection. Thus, the assumption can be made, that the choice of method does not have a noteworthy influence on the performance of uncertainty-based concept drift detection for real-world applications.\nTo confirm the previous assumption, future work may include testing further real-world datasets, including regression problems. For those, the basic neural network and the ASH method are no longer applicable. Instead, the effect of the ASH method in combination with the remaining approaches could be studied." }, { "figure_ref": [], "heading": "A Appendix A.1 Reproducibility", "publication_ref": [], "table_ref": [ "tab_1", "tab_2" ], "text": "To make our experiments reproducible, Table 2 gives an overview of the neural network architecture used for each dataset. Hidden layers use Rectified Linear Unit activations, while softmax is applied in the final layer. The ADAM optimizer is used with binary or categorical cross-entropy loss, depending on the number of classes. For MCD, 100 forward passes are carried out. The Ensemble consists of three members. Bayesian model averaging is conducted with 100 samples from the posterior approximation of the SWAG method. Details on these choices are discussed in A.3. 
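As an illustration of the setups in Table 2, a hedged PyTorch sketch of the smallest architecture (e.g. the Electricity network: 32-16-8 hidden units, ReLU, dropout 0.1, softmax output) with dropout kept active at prediction time for the MCD estimates; the framework and class names are assumptions, since the paper does not specify its implementation:

```python
import torch
import torch.nn as nn

class DropoutMLP(nn.Module):
    """ReLU MLP with dropout after each hidden layer and a softmax head."""
    def __init__(self, n_in: int, n_classes: int, hidden=(32, 16, 8), p_drop: float = 0.1):
        super().__init__()
        layers, prev = [], n_in
        for h in hidden:
            layers += [nn.Linear(prev, h), nn.ReLU(), nn.Dropout(p_drop)]
            prev = h
        layers.append(nn.Linear(prev, n_classes))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return torch.softmax(self.net(x), dim=-1)

def mc_dropout_predict(model: DropoutMLP, x: torch.Tensor, passes: int = 100):
    """Average `passes` stochastic forward passes; dropout stays enabled."""
    model.train()  # keeps dropout active (no batch-norm layers are used here)
    with torch.no_grad():
        probs = torch.stack([model(x) for _ in range(passes)])
    return probs.mean(dim=0)
```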
Furthermore, the estimated covariance matrix utilized in the approach has a rank of 25 and is updated each epoch, starting at the first iteration. For the ASH method, the version termed ASH-p was chosen, where unpruned activations are not modified at all. Pruning is applied in the penultimate hidden layer (i.e. third last overall layer) with a pruning percentage of 60%. Lastly, Table 3 indicates the sensitivity values δ for the ADWIN detector. " }, { "figure_ref": [ "fig_1", "fig_2", "fig_3", "fig_4", "fig_1" ], "heading": "A.2 Additional Results and Experiments", "publication_ref": [ "b8" ], "table_ref": [ "tab_0", "tab_4" ], "text": "We generated reliability diagrams [9] in addition to Table 1 and Figure 2 of the main experiment. These diagrams illustrate the quantification of the ECE. Hence, buckets of confidence values are compared to their average accuracy. Furthermore, the gaps to a perfect calibration are visualized. Plots can be found in Figures 345. Consistent to Figure 2, they show that the SWAG method offers the best calibration.\nTo validate the retraining positions found by the uncertainty based drift detection, we also conducted an experiment with equally and randomly distributed retraining positions. We compared these against the SWAG-based drift detection. Hence, the same amount of retrainings was triggered as found by the SWAG approach for each dataset (see Table 4). While the retraining positions found by SWAGs uncertainty values yield significantly better predictions for Gas, Electricity, and InsAbr, the opposite is the case for InsIncAbr and InsIncReo. Here, the equally distributed approach for retrainings offers noticeably better results. For Rialto and InsInc there are only slight differences between all three methods. Nevertheless, the detection based approach still offers the best overall performance.\nAs we repeated all of these experiments five times with different random seeds, we also include the standard deviations in Tables 5 -7. " }, { "figure_ref": [], "heading": "A.3 Hyperparameter Tuning", "publication_ref": [ "b4" ], "table_ref": [ "tab_8", "tab_9", "tab_10", "tab_11", "tab_12", "tab_13" ], "text": "To tune the hyperparameters, the main experiment was run for several configurations with the same seed. Tables in the following show the MCC based on all predictions and the number of retrainings in parentheses.\nFor MCD the only additional hyperparameter is the number of stochastic forward passes T . As Table 8 reveals, we tested T = 25, 50, 75 and 100. Although T = 25 had the best performance in the majority of our experiments, this is not really representative for the overall performance. In fact, discrepancies are rather slight in datasets where T = 25 yields the best performance, while it is significantly outperformed in other datasets. We found that T = 100 offers the most balanced performance across all datasets. The additional computational cost is also negligible as T = 100 triggers the least amout of retrainings and thus incurrs the lowest execution time for all experiments combined. Similar to MCD there is only one hyperparameter for the Ensemble. Namely, the number of members M which was set to three, five, and seven during our tests, as shown in Table 9. Here we found very slight differences overall. Thus, we choose the version with the least computational cost, which is M = 3. Other than the previous methods, SWAG comes with several hyperparameters. 
First, the influence of the number of weight samples S drawn from the approximated distribution was tested. Therefore, the rank K was set to 25, and weights were updated every epoch starting at the first iteration. As shown by Table 10, S = 100 offers the most balanced performance. The higher execution time compared to S = 50 and S = 75 is the result of more retrainings in datasets such as InsAbr and InsIncReo. Consequently, there is a noticeable performance gap in said datasets which mitigates the slower execution. Onwards, the effect of the rank K was studied with a fixed S. As seen in Table 11, the initial rank of K = 25 slightly outperformed the other settings. Starting at later epochs and reducing the update frequency for the SWAG method showed no mentionable improvements neither in performance, nor in execution time. Thus the final configuration was S = 100 and K = 25 with updates in every epoch beginning at the start of training. All three ASH versions introduced by Djurisic et al. [5] were tested with pruning percentages between 60% and 90% in the penultimate layer. As Table 12 reveals, the best results performance was achieved by the ASH-p version with a rather low pruning percentage of 60%. This is surprising, as ASH-p was the worst method in tests from Djurisic et al. where it served as a baseline. Furthermore, experiments have shown that higher pruning percentages were hurting performance. Lastly, the placement of the pruning layer was tested for the previous best configuration. While differences were slight, the best performance was reached when pruning in the penultimate hidden layer (i.e. third last overall layer) as seen in Table 13. " } ]
In safety-critical domains such as autonomous driving and medical diagnosis, the reliability of machine learning models is crucial. One significant challenge to reliability is concept drift, which can cause model deterioration over time. Traditionally, drift detectors rely on true labels, which are often scarce and costly. This study conducts a comprehensive empirical evaluation of using uncertainty values as substitutes for error rates in detecting drifts, aiming to alleviate the reliance on labeled post-deployment data. We examine five uncertainty estimation methods in conjunction with the ADWIN detector across seven real-world datasets. Our results reveal that while the SWAG method exhibits superior calibration, the overall accuracy in detecting drifts is not notably impacted by the choice of uncertainty estimation method, with even the most basic method demonstrating competitive performance. These findings offer valuable insights into the practical applicability of uncertainty-based drift detection in real-world, safety-critical applications.
An Empirical Study of Uncertainty Estimation Techniques for Detecting Drift in Data Streams
[ { "figure_caption": "Figure 1 :1Figure 1: Approach to uncertainty drift detection.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Calibration of the employed uncertainty estimation methods measured by ECE (↓) across the seven datasets.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Reliability diagrams of main experiment (1/3)", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Reliability diagrams of main experiment (2/3)", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Reliability diagrams of main experiment (3/3)", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "BasicMCDEnsembleSWAGASHGas0.273 (0) 0.455 (36)0.256 (0) 0.46 (55)0.245 (0) 0.492 (50)0.299 (0) 0.46 (52)0.275 (0) 0.459 (35)Electricity0.178 (0) 0.424 (11) 0.421 (10) 0.405 (10) 0.198 (0) 0.183 (0)0.191 (0) 0.419 (7)0.175 (0) 0.438 (10)Rialto0.532 (0) 0.537 (43) 0.553 (48) 0.527 (45) 0.534 (0) 0.505 (0)0.52 (0) 0.54 (52)0.525 (0) 0.539 (43)InsAbr0.471 (0) 0.519 (9)0.472 (0) 0.509 (8)0.461 (0) 0.503 (8)0.48 (0) 0.514 (6)0.474 (0) 0.508 (7)InsInc0.087 (0) 0.241 (3)0.1 (0) 0.238 (3)0.081 (0) 0.241 (3)0.1 (0) 0.301 (4)0.085 (0) 0.231 (3)InsIncAbr0.304 (0) 0.53 (24)0.307 (0) 0.525 (26) 0.518 (23) 0.445 (25) 0.531 (25) 0.308 (0) 0.299 (0) 0.316 (0)InsIncReo0.141 (0) 0.253 (18) 0.247 (20) 0.236 (18) 0.302 (21) 0.243 (20) 0.133 (0) 0.172 (0) 0.16 (0) 0.133 (0)Total exec. time6821s7339s15653s9036s6890s", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Overview of model architectures.", "figure_data": "NameNo. Layers Neurons per layer Dropout rate EpochsGas5128, 64, 32, 16, 80.2100Electricity332, 16, 80.1400Rialto4512, 512, 256, 320.2200InsAbr5128, 64, 32, 16, 80.1200InsInc5128, 64, 32, 16, 80.1100InsIncAbr332, 16, 80.150InsIncReo3128, 64, 320.1400", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Sensitivity values for ADWIN detector.", "figure_data": "Gas Electricity Rialto InsAbr InsInc InsIncAbr InsIncReo0.11e-151e-200.0020.0020.10.1", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Retraining position validation.", "figure_data": "SWAGEqual dist. 
Random dist.Gas0.46 (52)0.387 (52)0.38 (52)Electricity 0.419 (7)0.346 (7)0.351 (7)Rialto0.54 (52)0.555 (52)0.557 (52)InsAbr0.514 (6)0.459 (6)0.484 (6)InsInc0.301 (4)0.301 (4)0.293 (4)InsIncAbr 0.445 (25) 0.483 (25)443 (25)InsIncReo 0.302 (21) 0.344 (21)0.32 (21)", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Standard deviations of baseline experiment without retrainings.", "figure_data": "BasicMCD Ensemble SWAGASHGas0.0249 0.04370.01190.0274 0.0404Electricity 0.016 0.00930.01250.0235 0.014Rialto0.0102 0.01180.00540.0035 0.0014InsAbr0.0066 0.00170.00170.0014 0.0022InsInc0.0099 0.01010.01010.0096 0.0075InsIncAbr 0.0067 0.00350.00420.0086 0.005InsIncReo 0.0073 0.01090.0040.0033 0.0059", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Standard deviations of main experiment with ADWIN detection.", "figure_data": "BasicMCD Ensemble SWAGASHGas0.0331 0.04380.03660.0203 0.0347Electricity 0.0199 0.03680.03560.0187 0.0136Rialto0.0034 0.00280.00300.0036 0.0026InsAbr0.0060 0.00820.01070.0251 0.0101InsInc0.0156 0.01250.01380.0162 0.0158InsIncAbr 0.0021 0.00500.00960.0203 0.0078InsIncReo 0.0088 0.00720.01110.0113 0.0095", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Standard deviations of retraining position validation experiment.", "figure_data": "SWAG Equal dist. Random dist.Gas0.02030.00940.0305Electricity 0.01870.02080.0398Rialto0.00360.00130.0047InsAbr0.02510.00390.0247InsInc0.01720.00210.0316InsIncAbr 0.02030.00810.0469InsIncReo 0.01130.00360.0156", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "MCC values of MCD with 25, 50, 75, and 100 forwards passes.", "figure_data": "T = 25T = 50T = 75T = 100Gas0.418 (49) 0.443 (48) 0.41 (44) 0.451 (46)Electricity0.365 (8) 0.386 (10) 0.363 (11) 0.415 (8)Rialto0.554 (59) 0.56 (61) 0.554 (59) 0.553 (59)InsAbr0.509 (9)0.491 (6) 0.521 (10) 0.481 (5)InsInc0.218 (2)0.216 (1)0.217 (1)0.216 (1)InsIncAbr0.54 (25) 0.538 (23) 0.538 (23) 0.538 (22)InsIncReo0.249 (21) 0.24 (19)0.24 (20) 0.235 (18)Total exec. time4712s4712s4875s4638s", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "MCC values of an ensemble of 3, 5, and 7 members.", "figure_data": "M = 3M = 5M = 7Gas0.479 (48) 0.494 (51) 0.479 (52)Electricity0.422 (10) 0.407 (9) 0.426 (10)Rialto0.529 (46) 0.529 (48) 0.526 (51)InsAbr0.474 (4)0.505 (8)0.494 (8)InsInc0.259 (3)0.194 (1)0.255 (2)InsIncAbr0.53 (24) 0.514 (26) 0.508 (22)InsIncReo0.231 (21) 0.255 (22) 0.25 (18)Total exec. time12912s19511s31092s", "figure_id": "tab_9", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "MCC values of SWAG with 25, 50, 75, and 100 weight samples.", "figure_data": "S = 25S = 50S = 75S = 100Gas0.436 (49) 0.433 (57) 0.456 (55) 0.455 (53)Electricity0.414 (10) 0.435 (11) 0.343 (13) 0.396 (10)Rialto0.544 (53) 0.546 (54) 0.543 (49) 0.541 (53)InsAbr0.543 (9)0.503 (6)0.517 (7)0.542 (8)InsInc0.283 (3)0.29 (4)0.304 (4)0.296 (3)InsIncAbr0.51 (25) 0.487 (23) 0.528 (23) 0.504 (22)InsIncReo0.332 (31) 0.282 (16) 0.311 (20) 0.335 (28)Total exec. 
time5694s5018s4913s5514s", "figure_id": "tab_10", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "MCC values of SWAG with S = 100 and K = 10, 25, and 40.", "figure_data": "K = 10K = 25K = 40Gas0.443 (54) 0.455 (53) 0.435 (55)Electricity0.412 (10) 0.396 (10) 0.392 (13)Rialto0.543 (54) 0.541 (53) 0.548 (50)InsAbr0.521 (7)0.542 (8)0.54 (8)InsInc0.255 (3)0.296 (3)0.318 (4)InsIncAbr0.487 (26) 0.504 (22) 0.514 (21)InsIncReo0.316 (28) 0.335 (28) 0.289 (22)Total exec. time5603s5514s5472s", "figure_id": "tab_11", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "MCC values of ASH-p (top), ASH-b (middle) and, ASH-s (bottom) for pruning percentages 60%, 70%, 80% and 90%.", "figure_data": "60%70%80%90%0.443 (42) 0.443 (42) 0.398 (47) 0.293 (60)Gas0.407 (38) 0.407 (38) 0.357 (39) 0.388 (48)0.397 (28) 0.397 (28) 0.396 (26) 0.347 (25)0.475 (12) 0.395 (10) 0.408 (6) 0.422 (13)Electricity0.347 (8) 0.464 (10) 0.444 (10) 0.412 (12)0.318 (4)0.339 (3)0.338 (3)0.398 (4)0.539 (42) 0.537 (41) 0.526 (45) 0.447 (43)Rialto0.551 (38) 0.56 (36) 0.564 (39) 0.442 (41)0.569 (35) 0.561 (35) 0.575 (35) 0.45 (39)0.496 (7) 0.474 (10) 0.435 (7)0.322 (8)InsAbr0.471 (4)0.386 (9)0.426 (5)0.301 (5)0.491 (6)0.405 (7)0.435 (5)0.342 (9)0.217 (1)0.252 (5)0.179 (2)0.172 (3)InsInc0.237 (2)0.155 (3)0.162 (3)0.153 (2)0.23 (2)0.196 (3)0.196 (5)0.223 (3)0.502 (24) 0.502 (24) 0.477 (24) 0.336 (22)InsIncAbr0.473 (25) 0.473 (25) 0.435 (19) 0.424 (20)0.526 (17) 0.526 (17) 0.453 (21) 0.419 (24)0.232 (17) 0.254 (17) 0.208 (17) 0.171 (21)InsIncReo0.202 (12) 0.196 (13) 0.142 (8) 0.168 (21)0.214 (9)0.171 (6) 0.223 (13) 0.116 (17)3570s3551s3718s3657sTotal exec. time3034s3122s3026s3483s2804s2715s3240s3297s", "figure_id": "tab_12", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "MCC values of ASH-p with a pruning percentage of 60% at outputlayer -1, -2, and -3.", "figure_data": "L -1L -2L -3Gas0.443 (42) 0.441 (36) 0.399 (39)Electricity0.475 (12) 0.423 (13) 0.453 (13)Rialto0.539 (42) 0.545 (43) 0.545 (43)InsAbr0.496 (7)0.494 (6)0.496 (8)InsInc0.217 (1)0.237 (3)0.234 (3)InsIncAbr0.502 (24) 0.526 (21) 0.524 (24)InsIncReo0.232 (17) 0.259 (21) 0.245 (20)Total exec. time3570s3611s3646s", "figure_id": "tab_13", "figure_label": "13", "figure_type": "table" } ]
Anton Winter; Nicolas Jourdan; Tristan Wirth; Volker Knauthe; Arjan Kuijper
[ { "authors": "Manuel Baena-Garcıa; José Del Campo-Ávila; Raul Fidalgo; Albert Bifet; Ricard Gavalda; Rafael Morales-Bueno", "journal": "Citeseer", "ref_id": "b0", "title": "Early drift detection method", "year": "2006" }, { "authors": "Lucas Baier; Tim Schlör; Jakob Schöffer; Niklas Kühl", "journal": "", "ref_id": "b1", "title": "Detecting concept drift with neural network model uncertainty", "year": "2021" }, { "authors": "Albert Bifet; Ricard Gavalda", "journal": "SIAM", "ref_id": "b2", "title": "Learning from time-changing data with adaptive windowing", "year": "2007" }, { "authors": "Davide Chicco; Giuseppe Jurman", "journal": "BMC genomics", "ref_id": "b3", "title": "The advantages of the matthews correlation coefficient (mcc) over f1 score and accuracy in binary classification evaluation", "year": "2020" }, { "authors": "Andrija Djurisic; Nebojsa Bozanic; Arjun Ashok; Rosanne Liu", "journal": "", "ref_id": "b4", "title": "Extremely simple activation shaping for out-of-distribution detection", "year": "2022" }, { "authors": "Yarin Gal; Zoubin Ghahramani", "journal": "PMLR", "ref_id": "b5", "title": "Dropout as a bayesian approximation: Representing model uncertainty in deep learning", "year": "2016" }, { "authors": "Joao Gama; Pedro Medas; Gladys Castillo; Pedro Rodrigues", "journal": "Springer", "ref_id": "b6", "title": "Learning with drift detection", "year": "2004" }, { "authors": "Paulo M Gonçalves Jr; G T Silas; Roberto De Carvalho Santos; Davi Cl Sm Barros; Vieira", "journal": "Expert Systems with Applications", "ref_id": "b7", "title": "A comparative study on concept drift detectors", "year": "2014" }, { "authors": "Chuan Guo; Geoff Pleiss; Yu Sun; Kilian Q Weinberger", "journal": "", "ref_id": "b8", "title": "On calibration of modern neural networks", "year": "2017" }, { "authors": "Alexander Balaji Lakshminarayanan; Charles Pritzel; Blundell", "journal": "Advances in neural information processing systems", "ref_id": "b9", "title": "Simple and scalable predictive uncertainty estimation using deep ensembles", "year": "2017" }, { "authors": "Patrick Lindstrom; Brian Mac Namee; Sarah Jane Delany", "journal": "Evolving Systems", "ref_id": "b10", "title": "Drift detection using uncertainty distribution divergence", "year": "2013" }, { "authors": "J Wesley; Pavel Maddox; Timur Izmailov; Garipov; P Dmitry; Andrew Vetrov; Wilson Gordon", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b11", "title": "A simple baseline for bayesian uncertainty in deep learning", "year": "2019" }, { "authors": "Gregory F Mahdi Pakdaman Naeini; Milos Cooper; Hauskrecht", "journal": "AAAI Press", "ref_id": "b12", "title": "Obtaining well calibrated probabilities using bayesian binning", "year": "2015" }, { "authors": "Yaniv Ovadia; Emily Fertig; Jie Ren; Zachary Nado; David Sculley; Sebastian Nowozin; Joshua Dillon; Balaji Lakshminarayanan; Jasper Snoek", "journal": "Advances in neural information processing systems", "ref_id": "b13", "title": "Can you trust your model's uncertainty? 
evaluating predictive uncertainty under dataset shift", "year": "2019" }, { "authors": "Ewan S Page", "journal": "Biometrika", "ref_id": "b14", "title": "Continuous inspection schemes", "year": "1954" }, { "authors": "Christoph Raab; Moritz Heusinger; Frank-Michael Schleif", "journal": "Neurocomputing", "ref_id": "b15", "title": "Reactive soft prototype computing for concept drift streams", "year": "2020" }, { "authors": "Tegjyot Singh; Sethi ; Mehmed Kantardzic", "journal": "Procedia Computer Science", "ref_id": "b16", "title": "Don't pay for validation: Detecting drifts from unlabeled data using margin density", "year": "2015" }, { "authors": "M A Vinicius; Denis M Dos Souza; Andre G Reis; Gustavo Eapa Maletzke; Batista", "journal": "Data Mining and Knowledge Discovery", "ref_id": "b17", "title": "Challenges in benchmarking stream learning algorithms with real-world data", "year": "2020" }, { "authors": "Alexey Tsymbal", "journal": "Computer Science Department, Trinity College Dublin", "ref_id": "b18", "title": "The problem of concept drift: definitions and related work", "year": "2004" } ]
[ { "formula_coordinates": [ 2, 249.32, 573.96, 255.35, 30.32 ], "formula_id": "formula_0", "formula_text": "p(y|x) = 1 P P i=1 p i (y|w i , x).(1)" }, { "formula_coordinates": [ 2, 207.2, 661.57, 297.46, 30.55 ], "formula_id": "formula_1", "formula_text": "H[p(y|x)] = - K k=1 p(y = k|x) • log 2 p(y = k|x).(2)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b2", "b5", "b3" ], "table_ref": [], "text": "Providing explanations for decisions made by machine learning models is gaining relevance as the impact of AI on daily life increases (Barredo Arrieta et al. 2020). In this work we provide explainability and trust by incrementally transforming the learned, complicated model to an interpretable model, which provides an explanation for the entire model, not only around certain instances. Furthermore, this derived model is sufficiently accurate and can be used directly instead of the original model. This allows us to first learn an as accurate possible model and later incrementally simplify this model, allowing the user to decide where the optimal tradeoff is between accuracy and comprehensibility and possibly even altering the interpretable model using domain expertise.\nThe explainable model in this paper is derived from a Probabilistic Circuit (PC), which models uncertainty using expressive deep generative models and offers tractable inference (Choi, Vergari, and den Broeck 2020). We use generative learning algorithms because they allow us to relax the requirement that both positive and negative examples are available, which is for example not the case when dealing with the Positive and Unlabeled (PU) learning setting (Bekker and Davis 2020). A generative probabilistic model expresses a probability distribution over the instance space and can generate a probability for any given input. The highdensity regions of a distribution represent those regions in the input space that have a high probability of being consistent with the given data set, or put differently, which relations between features are more likely.\nTo provide an explanation for a PC, we derive a logical theory from the PC that functions as the interpretable model by applying pruning methods on the circuit. This logical theory covers the high-density regions generated by the PC and can be used as a discriminative classifier that predicts when a new instance would have a high likelihood given the PC, thus predicting whether a new instance is similar to the instances used to train the PC.\nIn this paper, the derived logical theory describing the training data is used as a database query in a real world case for a music streaming company which is used to find similar data in a music database. Learning comprehensible queries based on a set of input songs improves the workflow of the music experts, as it removes the time consuming task of manually identifying and constructing these queries. In this setting it is important that the deployed model (thus database query) can be inspected. For example, just one inappropriate song in a playlist can ruin the atmosphere in a wellness centre or a funeral home.\nThe contributions of this work are fourfold: it (1) proposes a new metric to measure comprehensibility of logical theories; (2) introduces the new problem setting of generating a comprehensive logical theory describing the high density regions of a PC; (3) presents a method based on pruning to solve this problem (PUTPUT); (4) showcases its relevance by applying the method on a real world use case handling generation of database queries for playlist generation." }, { "figure_ref": [], "heading": "Theoretical background", "publication_ref": [], "table_ref": [], "text": "In this section, theoretical background and terminology is given on databases, probabilistic circuits and binary classification." 
}, { "figure_ref": [], "heading": "arXiv:2311.13379v1 [cs.AI] 22 Nov 2023", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Databases", "publication_ref": [], "table_ref": [], "text": "In this work, a database is a set of unique examples, where an example is a tuple of variable-value pairs over variable set A. Each variable a ∈ A has a set of different possible values V (a). An example e can then be defined as an assignment of a value for each variable: ∀ a∈A : ∃!v ∈ V (a) : e(a) = v Multi-valued variables can be binarized into mutually exclusive binary variables using one-hot encoding, e.g. variable x with V (x) = {1, 2, 3} is binarized into 3 mutually exclusive boolean variables x 1 , x 2 , x 3 with for each of the new variables:\nx = i ⇔ x i = ⊤ j̸ =i x j = ⊥." }, { "figure_ref": [ "fig_0" ], "heading": "Circuits", "publication_ref": [ "b7" ], "table_ref": [], "text": "Probabilistic Circuit A probabilistic circuit P := (G, θ) represents a joint probability distribution p(X) over random variables X through a directed acyclic graph (DAG) G parametrized by θ. Each node in the DAG defines a computational unit, which is one of three types of nodes -input, sum, and product. Every leaf node in G is an input node; every inner node n (i.e., sum or product) receives inputs from its children out(n), and computes its output, which encodes a probability distribution p n defined recursively as follows:\npn(x)=          f n (x) if n is an input node Π d∈out(n) p d (x) if n is a product node d∈out(n) θ d|n • p d (x) if n is a sum node\nwhere f n (x) is a univariate input distribution of a literal and θ d|n denotes the weight that corresponds to the edge (d, n) in the DAG. The probability distribution of a probabilistic circuit is defined as the distribution represented by its root unit p P (x). The scope of a node is the set of input variables it depends on. A sum unit is smooth if the scopes of all the children are identical. A product unit is decomposable if the scopes of all the children are disjoint (Dang, Liu, and Van den Broeck 2022). In this work, we limit the input distribution f n (x) of a literal to be a numerical boolean function:\nf n (x) = {0, 1} with f n (x) + f n (-x) = 1.\nLogical Circuit A logical circuit L represents a logical theory over random variables X as a DAG. Each node in the DAG defines a computational unit. The DAG consists of three types of units -input, AND and OR. Every leaf unit in the DAG is an input unit, every inner unit is either an AND or an OR unit and receives input from its children. The logical output of a unit is recursively defined as:\non(x)=          f n (x) if n is an input unit d∈out(n) o d (x) if n is an AND unit d∈out(n) o d (x)\nif n is an OR unit Link between the two circuits Each probabilistic circuit P with boolean input distributions can be converted to a logical circuit L by substituting the sum-nodes with OR units and removing the weights, substituting the multiply-nodes with AND units and converting the input nodes to boolean nodes where p P (l) = 1 ⇔ p L (l) = ⊤ and p P (l) = 0 ⇔ p L (l) = ⊥. An example of a PC with its corresponding logical circuit is shown in Figure 1. \n-B) ∨ (A ∧ (-B ∨ B))) ∧ (C ∨ -C)) ∨ (-A ∧ ((-B ∧ C) ∨ (-B ∧ -C)))\nLemma: All models of a logical circuit constructed from a probabilistic circuit P with nonzero weights are the examples that have a positive probability generated by P For proof: see Appendix A." 
}, { "figure_ref": [], "heading": "Binary classification", "publication_ref": [], "table_ref": [], "text": "Binary classification is the task of classifying an input example in one of two classes. To evaluate a binary classifier, functions such as f1-score, precision and recall can be used. Precision is given by " }, { "figure_ref": [], "heading": "Comprehensibility of logical theories", "publication_ref": [ "b16", "b12" ], "table_ref": [], "text": "Description lengths are used to measure the size of data in information theory or to measure compression when making a logical theory smaller (Muggleton 1987;Jain et al. 2021). Comprehensibility of a theory mirrors how complicated the theory is to process by a human. The higher the comprehensibility of a theory, the easier it is to process. Description length does not necessarily imply comprehensibility, as a long theory can be easy to understand and a short theory can still be complicated. Furthermore, in our setting we want to support multi-valued variables (e.g., music style can be one of many values). This is the reason why we propose a new measure for logical theories with support for function symbols that represent multi-valued variables (thus assuming closed world). In this measure, we expect the theory to be in conjunction normal form (CNF).\nComprehensibility in this metric is linked to two properties that make it more easy for a human to read the model: (1) A variable that is used in only a few clauses requires the reader the keep in mind only a small set (or no) clauses to assess the interactions; (2) A multi-valued variable that allows for only a few or very many values that can be assigned is easy to remember (e.g., music style is only metal, or music style is everything except metal). Note that both properties relate to how much information the user needs to keep in mind. Therefore, we will base our metric on information theory and quantify this as the number of bits needed to represent a clause and its directly linked clauses as a proxy for how much a user needs to memorise when reading the model. The theory is more comprehensible when the measure is lower, which is why the measure itself is called incomprehensibility.\nFor clause c and multi-valued variable X with |X| possible values, c(X) ≤ |X| are the values that included in clause c. The entropy of a variable in this clause is given as\nE var (c, X) = -c(X)\n|X| log 2 ( c(X) |X| ). Since each variable in a clause needs to be remembered, the incomprehensibility of a clause c in a theory with variables V is the sum of their entropies:\nΥ(c, V ) = X∈V E var (c, X).\nWhen many clauses in a theory include the same variable, then the theory is more difficult to understand as the interactions need to be considered. This can be expressed by constructing a graph with each clause as a node. An edge is constructed between two nodes when they include the same variable, e.g. there is an edge e(c i , c l ) between clauses c i : x = 1 ∨ z = 3 and c l : w = 7 ∨ x = 2, as they both include variable x. To read one clause, the expected effort is the incomprehensibility of that clause and all of the clauses it is linked with. For the entire theory, we take the sum of this value for all clauses. Incomprehensibility of CNF theory C = i=1..n c i with variables V (C) is then calculated as:\nI(C) = i=1..n     Υ(c i , V (C)) + k=1..n ∧∃e(ci,c k )) Υ(c k , V (C))    \nIn the rest of this paper, increasing comprehensibility is used to describe the minimisation of incomprehensibility." 
}, { "figure_ref": [], "heading": "Problem statement", "publication_ref": [], "table_ref": [], "text": "Given a probabilistic circuit P with boolean input distributions, find the logical theory L that models the examples in database D for which P generates a high probability, which is as comprehensible as possible. As the actual meaning of a high probability can differ in different applications, we assume that there is a probability threshold t given that defines the separation between high and low probability.\nGiven A probabilistic circuit P with boolean input distributions, a database D and a probability threshold t Objective Target T is a subset of the database that describes the examples for which P generated a high probability in relation to t: T = {e|e ∈ D ∧ p P (e) ≥ t}. The goal is to find logical theory L with model set M (L) = {x|x ∈ D, x |= L} such that 1. The theory describes the items in T as good as possible arg max\nL f 1 (M (L), T ) with f 1(x, y) = 2×|x∩y| 2×|x∩y|+|x-y|+|y-x|\n2. The theory is as comprehensible as possible:\narg min L I(L)" }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "As stated in the lemma in Section 2.2, the logical circuit constructed from a PC covers the examples for which the PC generates a nonzero probability. A learned PC with nonzero parameters generates nonzero probability for all examples, which gives that the learned theory corresponds to L = ⊤.\nAs we are only interested in the high-density regions, represented as target T , the PC can be pruned such that only the regions of the instance space that receive a high probability are represented by the PC. The resulting logical circuit can then be used as a classifier where if the circuit returns true, the probability will be above the threshold and if it results false the probability will be below the threshold. In addition the resulting circuit will become more comprehensible as pruning is applied.\nWe propose a method called PUTPUT (Probabilistic circuit Understanding Through Pruning Underlying logical Theories) that consists of two steps. The first step prunes the sum-nodes of the circuit using a pruning function with iteratively changing parameters to end up with a circuit only covering the high-density regions. A second step prunes away input nodes to lower the size of the circuit, whilst keeping the f 1-score resulting from the first step as a lower bound, with the goal of increasing comprehensibility." }, { "figure_ref": [], "heading": "Pruning functions", "publication_ref": [ "b7" ], "table_ref": [], "text": "There are several ways of pruning a probabilistic circuit, as mentioned in (Dang, Liu, and Van den Broeck 2022). One can prune random edges or use the parameter values to direct the pruning. It is made clear however that pruning by generative significance, i.e. which edges are essential in the generation of probabilities, outperforms the other two approaches when making the trade-off between circuit size and accuracy as it is able to prune 80% of the edges without losing a lot of generated probability ." 
}, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "PUTPUT", "publication_ref": [], "table_ref": [], "text": "Our approach to bridge the gap between the generative PC and the comprehensible logical theory consists of 2 steps: iteratively pruning sum nodes using a pruning method as described above and pruning input nodes to improve comprehensibility.\nIteratively pruning sum nodes Given a pruning method f p (P, args) that prunes edges in the PC P the PC based on parameters args. The first step of PUTPUT uses a search algorithm f s to find the parameters of f p that optimise the f1-score in relation to the target of the examples covered by the logical circuit related to the pruned PC .\nInput: Probabilistic circuit P, database D with a subset of target examples T ∈ D, and pruning method f p (P, args).\nOutput:\nPruned PC P p = f s(P, T , D, f p) = f p (P, arg max args f1({x | x ∈ D ∧ p fp(P,args) (x) > 0}, T }).\nThe left circuit in Figure 2 shows a circuit that is pruned based on a threshold. Pruning input nodes To further increase the comprehensibility of the logical circuit, the second step of PUTPUT prunes all input nodes one by one, as an input node that is a child of a product node cannot be pruned by the pruning methods mentioned above. If pruning the input node does not negatively impact the f1-score related to the target, it can be pruned, as we assume this increases comprehensibility. As the ordering of these nodes can impact the result, this is done iteratively until no more nodes can be pruned. An example of a pruned circuit with pruned input nodes is shown in the right circuit in Figure 2. The exact algorithm for this step is shown in Appendix B." }, { "figure_ref": [], "heading": "Use case: Generation of concept queries", "publication_ref": [], "table_ref": [], "text": "A product concept is a set of items in a collection that share certain characteristics. Most online consumer businesses use product concepts to structure their inventory and improve the usability of their website. In a bookshop, example product concepts could be Fantasy, Italian novels written before 1997 or Books written by authors that use a pseudonym. The way these product concepts are implemented into the back-end system can differ, but it is fair to assume that product concepts can be formalised by a form of database query, generally called product query. These queries can filter the database on certain features with given constraints. It is clear that a link exists between the idea of a product concept, which can be implemented as a database query, and finding a logical theory describing data for which a PC generates high probabilities.\nThe use case that is handled in this paper is based on a problem occurring in the workflow of music streaming provider Company 1 . They have a database of annotated music where each song is represented by a fixed set of discrete valued features. These can be objective (BPM, Year, Lyricist,...) or subjective (Mood, Feel, ...). As one of their services, they provide a predefined selection of playlists, where each playlist is represented as a query on their database. A playlist is safe when there are no outliers from the product concept, for example a black metal song could ruin the general feel of a playlist consisting of happy songs for children. It is easy to see that the query representing a playlist from Company is a product query, with the playlist as the product concept it is covering. Generating such queries can be time 1 Real name is anonymized for blind review. 
consuming if done manually, which is the driving factor to improve the automatic generation of these queries.\nAs Company assures their customers that all playlists are safe, it is important that automatically generated queries can be easily checked for errors and if necessary be corrected by a music expert. Increasing comprehensibility of the query makes this task easier, which leads to the automation problem that occurs in the workflow of Company: given some input songs, generate: (1) a playlist that contains similar songs to the input, thus representing the product concept that covers the input; (2) a comprehensive database query." }, { "figure_ref": [], "heading": "Formalizing the problem of the use case", "publication_ref": [], "table_ref": [], "text": "Given a database and a subset of the database, find the shortest concept query that represents the product concept that covers the given subset, assuming that one exists. This problem is a PU-learning setting, as the input consists of positive and unlabeled data.\nInput A database D and a subset S of the database (S ∈ D) being part of a hidden product concept C with F all the songs covered by C Output/objective Find concept query Q C that represents product concept C such that 1. The theory describes the items covered by C as good as possible arg max\nQ C f 1 (Q C (D), F) with Q C (D) the songs in D that are covered by Q C 2.\nThe theory is as comprehensive as possible:\narg min\nQ C I(Q C )" }, { "figure_ref": [], "heading": "Solving the problem with PCs", "publication_ref": [ "b6" ], "table_ref": [], "text": "As the link between the problem setting and the use case is easy to see, we can apply PUTPUT to generate product queries.\nGiven a database D and input songs S ∈ D, the flow is described as: 1. Learn a PC P with S as training data using the Hidden Chow Liu Tree (Liu and Van den Broeck 2021) method available in the JUICE package (Dang et al. 2021). 2. Find the probability threshold t that identifies the highdensity region, which is represented as the set of target examples T . This is done by using the elbow method to find threshold t such that with f (x) = |{s|s ∈ D ∧ p P (s) > x}| and ϵ = 10 -5 :\nf (t+ϵ)-f (t) f (t)-f (t-ϵ) < 1 4 and f (t+2 * ϵ)-f (t+ϵ) f (t+ϵ)-f (t) > 1 4\n3. Apply PUTPUT to P with T as the target to get the resulting logical theory." }, { "figure_ref": [], "heading": "evaluation", "publication_ref": [], "table_ref": [], "text": "The method is empirically evaluated to answer the following research questions: (1) Which pruning method optimises the f1-score in relation to the target the best? (2) Does pruning the input nodes increase comprehensibility? (3) How does PUTPUT perform on the use case compared to the state of the art? The goal of these itemsets is to verify whether, given a small subset of the itemset as input, the itemset can be recovered automatically." }, { "figure_ref": [], "heading": "Data", "publication_ref": [], "table_ref": [], "text": "Setup For each item of these 4 datasets, 10 subsets with a size of 10% of the target size were randomly selected as training data, with the full itemset as test data." 
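A sketch of the elbow-style threshold selection in step 2 above, assuming the probabilities that P assigns to all database songs are available as an array; the scan over candidate thresholds and the helper names are our own:

```python
import numpy as np

def count_above(probs: np.ndarray, x: float) -> int:
    """f(x) = number of database examples with probability greater than x."""
    return int((probs > x).sum())

def elbow_threshold(probs: np.ndarray, eps: float = 1e-5):
    """Return the first candidate t where the drop in f flattens out sharply:
    (f(t+eps) - f(t)) / (f(t) - f(t-eps)) < 1/4  and
    (f(t+2*eps) - f(t+eps)) / (f(t+eps) - f(t)) > 1/4.
    """
    for t in np.arange(eps, probs.max(), eps):   # scan granularity is an assumption
        d_prev = count_above(probs, t) - count_above(probs, t - eps)
        d_next = count_above(probs, t + eps) - count_above(probs, t)
        d_next2 = count_above(probs, t + 2 * eps) - count_above(probs, t + eps)
        if d_prev != 0 and d_next != 0 and d_next / d_prev < 0.25 and d_next2 / d_next > 0.25:
            return t
    return None
```

All examples with a probability of at least the returned threshold form the target set T passed to PUTPUT.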
}, { "figure_ref": [], "heading": "Experiment 1: Comparing pruning methods", "publication_ref": [ "b7", "b6" ], "table_ref": [], "text": "To answer research question 1, we evaluate the first step of PUTPUT on the MNIST dataset using the newly proposed threshold based pruning method and two methods mentioned in (Dang, Liu, and Van den Broeck 2022): top down probability pruning and pruning based on circuit flows. A PC is learned on each subset of the data using the Hidden Chow Liu Tree (Liu and Van den Broeck 2021) method available in the JUICE package (Dang et al. 2021). The f1-score used in this evaluation is in relation to the target examples.\nThe results in Table 1 show that pruning by circuit flows optimises the f1-score in relation to the target. The inclusion of the target data in the pruning process increases the recall more then it loses precision compared to pruning by top down probability. In the last column of the results, it is clear that pruning by circuit flows also decreases the circuit size the most, which we assume will also increase comprehensibility. Compared to the baseline, pruning by circuit flows almost prunes half the circuit. In the following experiments, pruning by circuit flows is used as pruning method in the first step of PUTPUT." }, { "figure_ref": [], "heading": "Experiment 2: Effect of pruning input nodes", "publication_ref": [], "table_ref": [], "text": "The input nodes of the circuit are pruned in the second step of PUTPUT to lower incomprehensibility. The evaluation of this step is performed on the MNIST data. Table 2 shows that pruning the input nodes increases comprehensibility significantly, whilst trading some recall for precision in the process. The assumption that decreasing the circuit size increases comprehensibility as well is confirmed.\nExample on MNIST By applying PUTPUT on a PC learned on a subset of the MNIST data containing 25 examples representing 0's. The resulting logical theory is p 12,22 = White ∧ p 14,15 = Black ∧ p 14,16 = Black ∧ (p 8,15 = White ∨ p 8,17 = White)∧ (p 15,9 = White ∨ p 13,12 = Black )\nThis theory only looks at seven of the 784 pixels to explain that the PC it is derived from will give an example a high probability to be classified as a zero. As this theory does not have perfect precision, false positives are described as well. Figure 3 shows a correctly covered zero and a falsely covered four, together with a visualisation of the theory." }, { "figure_ref": [], "heading": "Experiment 3: Comparing PUTPUT with a state of the art approach on the use case", "publication_ref": [ "b11" ], "table_ref": [], "text": "The evaluation on the use case is performed on the private dataset provided by Company. As a baseline, we will compare with the concept learning algorithm that was recently presented by Goyal et al. (2022) for a setting similar to the one in this work, where they used PU-learning in the form of the Rocchio or likelihood approach to find reliable negatives, learned a decision tree on these negatives combined with the input examples and converted the decision tree into a logical theory through dt-queries or item-queries. To simplify the evaluation, we call this method PU+DT." }, { "figure_ref": [], "heading": "Multiple theories", "publication_ref": [ "b17", "b25", "b20", "b18", "b15", "b9", "b22", "b12", "b1", "b7", "b24", "b21", "b19", "b0", "b10", "b4", "b23" ], "table_ref": [], "text": "The dt-queries used in PU+DT generate multiple conjunctions, whilst the item-queries generate multiple CNF theories. 
As they have to be combined in disjunction, the dt-queries result in a DNF theory and the itemqueries in a disjunction of CNFs. In this last case, we simplify the theory by merging all variables of the same class in each clause into one variable, which transforms the disjunction of CNFs into a disjunction of conjunctions (DNF). Both types of query now result in a DNF. We can negate this DNF to get a CNF for which we can compute the incomprehensibility as described in Section 3.\nResults Table 3 shows the results of the comparison between PUTPUT and PU+DT, with the f1-scores in relation to the examples covered by the given concept query.\nPUTPUT performs similar or better when comparing the f1-scores. When comparing the comprehensibility on the disjunctive dataset, it is clear that PUTPUT is significantly worse. The reason for this is the simplification made in the metric when having multiple queries. As PUTPUT always generates a single query, arguments can be made to prefer that query over a combination of 10 or more queries.\nCompany prefers high precision over high recall, as it makes it easier to fine-tune the query into a safe query. This makes Rocchio + dt-query not ideal for the use case, as it results in the worst precision. The tradeoff between precision and recall is dependent on the situation, which means that no method clearly outperforms another. If near perfect precision is not expected, PUTPUT gives a good combination of precision and recall as a single, comprehensive theory.\nExplainability can be tackled in different ways: (1) Limit the model to an interpretable model, possibly sacrificing accuracy (Murdoch et al. 2019) ; (2) Reduce the learned model to an interpretable model. For example by combining multiple decision trees into a single, interpretable and explainable tree (Yan et al. 2022) or compiling a bayesian network classifier into a logical decision function (Shih, Choi, and Darwiche 2018); (3) Generate interpretable models for a local part of the model. This is the approach followed by LIME and SHAP where linear models are generated around a point of interest (Ribeiro, Singh, and Guestrin 2016;Lundberg and Lee 2017); (4) Ask questions to the model to identify edge cases or adversarial examples. Such an approach allows a full investigation of the original model, but requires the user to formulate the relevant questions (Devos, Meert, and Davis 2020). In this work we opted for the second strategy as the domain expert is expected to sign of on the entire model before it can be deployed.\nFinding a description of a given set of examples is a problem setting that occurs in the field of data mining. KRIMP (Vreeken, Leeuwen, and Siebes 2011) is based on pattern mining and the minimum description length (MDL) principle to find a code table that compresses the data, which is not applicable in our case, as converting this code table into a comprehensive logical theory is not a trivial task. Another approach based on itemset mining is Mistle (Jain et al. 2021), that learns and compresses a logical theory based on positive and negative input data. This is unlike the PUlearning setting used in this work. A last example of the use of pattern mining to find description is constraint-based querying to explore Bayesian networks (Babaki et al. 2015). 
In this work, pattern mining is used to answer explorative queries, which are used to explain what the Bayesian network is representing.\nApplying pruning functions on PCs to prune unimportant subcircuits and growing the leftover circuit can increase the capacity of the PC that is meaningfully used when generating probabilities (Dang, Liu, and Van den Broeck 2022). An earlier application of PCs in the field of explainable AI is their use to find explanations that have a high probability to be correct (Wang, Khosravi, and Van den Broeck 2021). Metrics to measure the uncertainty in applications on out-of-distribution data (Ventola et al. 2023) or the fairness of the circuit to avoid discrimination (Selvam, Van den Broeck, and Choi 2023) are two other ways that provide the user with more information about the concepts the circuit has learned.\nAutomating the generation of playlists as is done in the use case is a recurrent research topic. Recent methods use information registered by wearable physiological sensors to measure the mood of the person (Ayata, Yaslan, and Kamasak 2018), use acoustic features of music extracted by digital signal processing (Elbir et al. 2018) or by examining the MIDI files (Chen and Chen 2005), or use information on the order in which songs are played by the user (Wang et al. 2020)." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "This work addresses the challenge of explaining probabilistic circuits by deriving a comprehensible logical theory through pruning based on generative significance. The work's contributions are fourfold: it (1) proposes a new metric to measure comprehensibility; (2) introduces a new problem setting; (3) presents a method to tackle this challenge (PUTPUT); and (4) showcases its relevance by applying the method to a real-world use case. The method's evaluation demonstrates its efficacy in generating a comprehensible logical theory that covers the high-density region of a probability distribution modelled by a PC, which provides an explanation for the model. Applied in a real-world context within the music industry, PUTPUT outperforms state-of-the-art methods when exploring the performance-comprehensibility trade-off." }, { "figure_ref": [], "heading": "A Proofs", "publication_ref": [], "table_ref": [], "text": "Lemma: All models of a logical circuit constructed from a probabilistic circuit with nonzero weights are the samples that have a positive probability in the PSDD. We can prove this using the definitions of the logical circuit and PC.\nGiven probabilistic circuit P, logical circuit L = LC(P) and a random sample x.\nAs the definitions for probability and logical output of an input node are similar, it is clear that p n (x) = 1 ⇔ o n (x) = ⊤ if n is an input node.\nIf n is a sum node, the probability is given by p n (x) = Σ c∈C(n) θ c|n • p c (x).\nn 1 is an inner node with children of depth 0, which means that all children are literals. As it holds for all children that p 0 (x) = 1 ⇔ o 0 (x) = ⊤, it is trivial that ϕ 1 = ψ 1 and that the lemma holds for all nodes n 1 . n 2 is an inner node with children of depth 1 and 0. As the lemma is proven for all children, it is trivial that ϕ 2 = ψ 2 and that the lemma holds for all nodes n 2 .\nAs all children of node n x have depth i < x, it is trivial to prove that ϕ x = ψ x ."
}, { "figure_ref": [], "heading": "B Algorithms", "publication_ref": [], "table_ref": [], "text": "Algorithm 1: Pruning input nodes to increase comprehensibility Require:\n- " }, { "figure_ref": [], "heading": "C Hardware", "publication_ref": [], "table_ref": [], "text": "All experiments were run on a machine with a NVIDIA GeForce RTX 3050 Laptop GPU, an AMD Ryzen 7 4800H CPU and 16GB RAM." }, { "figure_ref": [], "heading": "", "publication_ref": [ "b11" ], "table_ref": [], "text": "Table 3: Results of experiment 3 on the private dataset provided by Company where PUTPUT is compared to the PU-learning + decision tree approach proposed by (Goyal et al. 2022)." } ]
The field of Explainable AI (XAI) is seeking to shed light on the inner workings of complex AI models and uncover the rationale behind their decisions. One of the models gaining attention is probabilistic circuits (PCs), which are a general and unified framework for tractable probabilistic models that support efficient computation of various probabilistic queries. Probabilistic circuits guarantee inference that is polynomial in the size of the circuit. In this paper, we improve the explainability of probabilistic circuits by computing a comprehensible, readable logical theory that covers the high-density regions generated by a PC. To achieve this, pruning approaches based on generative significance are used in a new method called PUTPUT (Probabilistic circuit Understanding Through Pruning Underlying logical Theories). The method is applied to a real-world use case where music playlists are automatically generated and expressed as readable (database) queries. Evaluation shows that this approach can effectively produce a comprehensible logical theory that describes the high-density regions of a PC and outperforms state-of-the-art methods when exploring the performance-comprehensibility trade-off.
Deriving Comprehensible Theories from Probabilistic Circuits
[ { "figure_caption": "Figure 1 :1Figure 1: A PC with its corresponding logical circuit with logical theory (((-A ∧ -B) ∨ (A ∧ (-B ∨ B))) ∧ (C ∨ -C)) ∨ (-A ∧ ((-B ∧ C) ∨ (-B ∧ -C)))", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Left: a PC after threshold based pruning with α = 0.1 . Right: further pruning the input nodes made the logical theory smaller, without changing the covered examples. The resulting theory is ((-A∧-B)∧(C ∨-C))∨(-A∧(-B ∨ -B)).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "⇔c∈ϕn θ c|n • p c (x) + c∈C(n)\\ϕn θ c|n • p c (x) with ϕ n = {u|u ∈ C(n) ∧ p u (x) > 0} ⇔ c∈ϕn θ c|n • p c (x) + 0We can do the same with the output of the OR-node in the logical circuit:with ψ n = {u|u ∈ C(n) ∧ o u (x) = ⊤} ⇔ c∈ψn o c (x) ∨ ⊥If ϕ n = ψ n (proven later), we get the following cases:ϕ n = ψ n = ∅ ⇒ p n (x) = 0 and o n (x) = ⊥ ϕ n = ψ n ̸ = ∅ ⇒ p n (x) = c∈ϕn θ c|n • p c (x) > 0 and o n (x) = c∈ψn o c (x) = ⊤If n is a product node, the probability is given by:p n (x) = Π c∈C(n) p c (x) ⇔ Π c∈ϕn p c (x) • Π c∈C(n)\\ϕn p c (x) with ϕ n = {u|u ∈ C(n) ∧ p u (x) > 0}We can do the same with the output of the AND-node in the logical circuit:o n (x) = c∈C(n) o c (x) ⇔ c∈ψn o c (x) ∧ c∈C(n)\\ψn o c (x) with ψ n = {u|u ∈ C(n) ∧ o u (x) = ⊤} ⇔ ⊤ ∧ c∈C(n)\\ψn o c (x)If ϕ n = ψ n (proven later), we get the following cases:ϕ n = ψ n = ∅ ⇒ p n (x) = Π c∈C(n)\\ϕn p c (x) > 0 and o n (x) = c∈C(n)\\ψn o c (x) = ⊤ ϕ n = ψ n ̸ = ∅ ⇒ p n (x) = 0 and o n (x) = ⊥ The only thing needed is to prove that ϕ n = ψ n : If n is a literal node: p n (x) = 1 ⇔ o n (x) = ⊤ and p n (x) = 0 ⇔ o n (x) = ⊥,which proves the lemma. For this proof, literals are noted as n 0 , which indicates it is a leaf node. The index indicates how many inner nodes are maximally needed to reach a literal node.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "The evaluation of PUTPUT is performed on two datasets: a version of the MNIST (Deng 2012) dataset with binary pixels to allow for reproducability and a private dataset covering a real world problem provided by Company. The multivalued private dataset is binarized as described in Section 2.1.", "figure_data": "MNIST A subset of binarized MNIST, with 250 examplesfor each digit, thus consisting of 2500 examples. To find thetarget examples used in PUTPUT, the elbow method as de-scribed in Section 6.2 was used.Music data by Company Private data provided by Com-pany consisting of 360.000 annotated songs and a setof product queries. From this data, 3 datasets whereconstructed. Single product concept: 5 known productconcepts with their respective songs, e.g. Rock. Dis-junctive product concepts: 10 combinations of 2 knownconcept queries in a disjunctive form, e.g. Rock orEasy Lounge. Exclusive or product concepts: 5 con-cept queries with an xor-structure, e.g. substyle=metalXOR feel=aggressive.", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Pruned probabilistic circuit P p -Target T -Database D Ensure: Probabilistic circuit P f with pruned input nodes Method lower bound lb=f 1({x | x ∈ D ∧ p Pp (x) > 0}, T } previous size= 0, current size= |P p | P i = |P p | while current size ! 
=previous size do previous size=current size for input node n in P i do for parent z of n do P z,n = prune(P i , z, n) if f 1({x | x ∈ D ∧ p Pz,n (x) > 0}, T } > lb then P i = P z,n end if end for end for P f = P i current size= |P i | end while", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" } ]
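The equivalence stated in the lemma of Appendix A (a sample has positive probability under the PC exactly when it satisfies the induced logical circuit, provided all sum weights are nonzero) can be checked mechanically on toy circuits. The snippet below is a minimal, self-contained illustration with a hand-rolled node class; it is not the circuit representation used by the paper or by JUICE.

```python
import math

# Toy check of the lemma: p_n(x) > 0 exactly when the induced logical circuit outputs true,
# assuming all sum-node weights are nonzero. Hand-rolled classes, purely for illustration.

class Node:
    def __init__(self, kind, children=None, weights=None, var=None, value=None):
        self.kind = kind                      # "leaf", "sum" (-> OR) or "product" (-> AND)
        self.children = children or []
        self.weights = weights or []          # sum-node weights, assumed nonzero
        self.var, self.value = var, value

    def prob(self, x):
        if self.kind == "leaf":
            return 1.0 if x[self.var] == self.value else 0.0
        if self.kind == "sum":
            return sum(w * c.prob(x) for w, c in zip(self.weights, self.children))
        return math.prod(c.prob(x) for c in self.children)

    def logical(self, x):
        if self.kind == "leaf":
            return x[self.var] == self.value
        if self.kind == "sum":
            return any(c.logical(x) for c in self.children)
        return all(c.logical(x) for c in self.children)

# 0.3 * (A ∧ B) + 0.7 * (¬A ∧ B)
a, not_a = Node("leaf", var="A", value=True), Node("leaf", var="A", value=False)
b = Node("leaf", var="B", value=True)
root = Node("sum",
            children=[Node("product", children=[a, b]),
                      Node("product", children=[not_a, b])],
            weights=[0.3, 0.7])

for x in ({"A": True, "B": True}, {"A": True, "B": False}, {"A": False, "B": True}):
    assert (root.prob(x) > 0) == root.logical(x)
```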
Sieben Bocklandt; Wannes Meert; Koen Vanderstraeten; Wouter Pijpops; Kurt Jaspers
[ { "authors": "D Ayata; Y Yaslan; M Kamasak", "journal": "IEEE Transactions on Consumer Electronics", "ref_id": "b0", "title": "Emotion Based Music Recommendation System Using Wearable Physiological Sensors", "year": "2018" }, { "authors": "B Babaki; T Guns; S Nijjsen; L De Raedt", "journal": "", "ref_id": "b1", "title": "Constraint-Based Querying for Bayesian Network Exploration", "year": "2015" }, { "authors": "A Barredo Arrieta; N Díaz-Rodríguez; J Del Ser; A Bennetot; S Tabik; A Barbado; S Garcia; S Gil-Lopez; D Molina; R Benjamins; R Chatila; F Herrera", "journal": "Information Fusion", "ref_id": "b2", "title": "Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI", "year": "2020" }, { "authors": "J Bekker; J Davis", "journal": "Machine Learning", "ref_id": "b3", "title": "Learning from positive and unlabeled data: a survey", "year": "2020" }, { "authors": "H.-C Chen; A Chen", "journal": "Journal of Intelligent Information Systems", "ref_id": "b4", "title": "A Music Recommendation System Based on Music and User Grouping", "year": "2005" }, { "authors": "Y Choi; A Vergari; G V Broeck", "journal": "", "ref_id": "b5", "title": "Probabilistic Circuits: A Unifying Framework for Tractable Probabilistic Models", "year": "2020" }, { "authors": "M Dang; P Khosravi; Y Liang; A Vergari; G Van Den Broeck", "journal": "", "ref_id": "b6", "title": "Juice: A Julia Package for Logic and Probabilistic Circuits", "year": "2021" }, { "authors": "M Dang; A Liu; G Van Den Broeck", "journal": "NeurIPS", "ref_id": "b7", "title": "Sparse Probabilistic Circuits via Pruning and Growing", "year": "2022" }, { "authors": "L Deng", "journal": "IEEE Signal Processing Magazine", "ref_id": "b8", "title": "The mnist database of handwritten digit images for machine learning research", "year": "2012" }, { "authors": "L Devos; W Meert; J Davis", "journal": "", "ref_id": "b9", "title": "Versatile Verification of Tree Ensembles", "year": "2020" }, { "authors": "A M Elbir; H B Iyican; M E Öztürk; B Aydin; N ", "journal": "", "ref_id": "b10", "title": "Music Genre Classification and Recommendation by Using Machine Learning Techniques", "year": "2018" }, { "authors": "K Goyal; W Meert; H Blockeel; E Van Wolputte; K Vanderstraeten; W Pijpops; K Jaspers", "journal": "Springer", "ref_id": "b11", "title": "Automatic Generation of Product Concepts from Positive Examples, with an Application to Music Streaming", "year": "2022" }, { "authors": "A Jain; C Gautrais; A Kimmig; L De Raedt", "journal": "", "ref_id": "b12", "title": "Learning CNF Theories Using MDL and Predicate Invention", "year": "2021" }, { "authors": "A Liu; G Van Den Broeck", "journal": "", "ref_id": "b13", "title": "Tractable Regularization of Probabilistic Circuits", "year": "2021" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b14", "title": "", "year": "" }, { "authors": "S Lundberg; S.-I Lee", "journal": "", "ref_id": "b15", "title": "A Unified Approach to Interpreting Model Predictions", "year": "2017" }, { "authors": "S H Muggleton", "journal": "", "ref_id": "b16", "title": "Duce, An Oracle-based Approach to Constructive Induction", "year": "1987" }, { "authors": "W Murdoch; C Singh; K Kumbier; R Abbasi Asl; B Yu", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b17", "title": "Definitions, methods, and applications in interpretable machine learning", "year": "2019" }, { "authors": "M Ribeiro; S Singh; C Guestrin", "journal": "", "ref_id": "b18", 
"title": "Why Should I Trust You?", "year": "2016" }, { "authors": "N R Selvam; G Van Den Broeck; Y Choi", "journal": "", "ref_id": "b19", "title": "Certifying Fairness of Probabilistic Circuits", "year": "2023" }, { "authors": "A Shih; A Choi; A Darwiche", "journal": "", "ref_id": "b20", "title": "A Symbolic Approach to Explaining Bayesian Network Classifiers", "year": "2018" }, { "authors": "F Ventola; S Braun; Z Yu; M Mundt; K Kersting", "journal": "", "ref_id": "b21", "title": "Probabilistic Circuits That Know What They Don't Know", "year": "2023" }, { "authors": "J Vreeken; M Leeuwen; A Siebes", "journal": "Data Min. Knowl. Discov", "ref_id": "b22", "title": "KRIMP: Mining itemsets that compress", "year": "2011" }, { "authors": "D Wang; X Zhang; D Yu; G Xu; S Deng", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b23", "title": "CAME: Content-and Context-Aware Music Embedding for Recommendation", "year": "2020" }, { "authors": "E Wang; P Khosravi; G Van Den Broeck", "journal": "", "ref_id": "b24", "title": "Probabilistic Sufficient Explanations", "year": "2021" }, { "authors": "S Yan; S Natarajan; S Joshi; R Khardon; P Tadepalli", "journal": "", "ref_id": "b25", "title": "Explainable Models via Compression of Tree Ensembles", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 94.98, 170.04, 126.12, 11.15 ], "formula_id": "formula_0", "formula_text": "x = i ⇔ x i = ⊤ j̸ =i x j = ⊥." }, { "formula_coordinates": [ 2, 63.05, 311.29, 219.2, 38.9 ], "formula_id": "formula_1", "formula_text": "pn(x)=          f n (x) if n is an input node Π d∈out(n) p d (x) if n is a product node d∈out(n) θ d|n • p d (x) if n is a sum node" }, { "formula_coordinates": [ 2, 92.47, 469, 172.24, 9.65 ], "formula_id": "formula_2", "formula_text": "f n (x) = {0, 1} with f n (x) + f n (-x) = 1." }, { "formula_coordinates": [ 2, 80.11, 570.47, 185.08, 39.29 ], "formula_id": "formula_3", "formula_text": "on(x)=          f n (x) if n is an input unit d∈out(n) o d (x) if n is an AND unit d∈out(n) o d (x)" }, { "formula_coordinates": [ 2, 319.5, 145.29, 238.5, 19.7 ], "formula_id": "formula_4", "formula_text": "-B) ∨ (A ∧ (-B ∨ B))) ∧ (C ∨ -C)) ∨ (-A ∧ ((-B ∧ C) ∨ (-B ∧ -C)))" }, { "formula_coordinates": [ 3, 54, 177.52, 88.68, 12.44 ], "formula_id": "formula_5", "formula_text": "E var (c, X) = -c(X)" }, { "formula_coordinates": [ 3, 95.78, 215.2, 125.42, 11.15 ], "formula_id": "formula_6", "formula_text": "Υ(c, V ) = X∈V E var (c, X)." }, { "formula_coordinates": [ 3, 61.62, 373.61, 223.26, 46.65 ], "formula_id": "formula_7", "formula_text": "I(C) = i=1..n     Υ(c i , V (C)) + k=1..n ∧∃e(ci,c k )) Υ(c k , V (C))    " }, { "formula_coordinates": [ 3, 68.15, 674.58, 224.35, 32.01 ], "formula_id": "formula_8", "formula_text": "L f 1 (M (L), T ) with f 1(x, y) = 2×|x∩y| 2×|x∩y|+|x-y|+|y-x|" }, { "formula_coordinates": [ 3, 319.5, 655.22, 220.42, 27.03 ], "formula_id": "formula_9", "formula_text": "Pruned PC P p = f s(P, T , D, f p) = f p (P, arg max args f1({x | x ∈ D ∧ p fp(P,args) (x) > 0}, T })." }, { "formula_coordinates": [ 4, 320, 345.65, 215.44, 40.66 ], "formula_id": "formula_10", "formula_text": "Q C f 1 (Q C (D), F) with Q C (D) the songs in D that are covered by Q C 2." }, { "formula_coordinates": [ 4, 343.03, 388.3, 50.4, 17.14 ], "formula_id": "formula_11", "formula_text": "Q C I(Q C )" }, { "formula_coordinates": [ 4, 333.65, 576, 165.59, 14.38 ], "formula_id": "formula_12", "formula_text": "f (t+ϵ)-f (t) f (t)-f (t-ϵ) < 1 4 and f (t+2 * ϵ)-f (t+ϵ) f (t+ϵ)-f (t) > 1 4" }, { "formula_coordinates": [ 9, 54, 169.55, 238.5, 19.92 ], "formula_id": "formula_13", "formula_text": "n (x) = 1 ⇔ o n (x) = ⊤ if n is an input node." }, { "formula_coordinates": [ 9, 118.54, 211.13, 109.42, 20.53 ], "formula_id": "formula_14", "formula_text": "p n (x) = c∈C(n) θ c|n • p c (x)" } ]
2023-11-23
[ { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "cloud as a geometrical guideline for each image generation. Specifically, we project a portion of point cloud to the desired view and provide the projection as a guidance for inpainting using the generative model. The inpainted images are lifted to 3D space with estimated depth maps, composing a new points. Second, to aggregate the new points into the 3D scene, we propose an aligning algorithm which harmoniously integrates the portions of newly generated 3D scenes. The finally obtained 3D scene serves as initial points for optimizing Gaussian splats. LucidDreamer produces Gaussian splats that are highly-detailed compared to the previous 3D scene generation methods, with no constraint on domain of the target scene." }, { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b10", "b24", "b19", "b44" ], "table_ref": [], "text": "With the advent of commercial mixed reality platforms and the rapid innovations in 3D graphics technology, highquality 3D scene generation has become one of the most important problem in computer vision. This requires the ability to create diverse and photo-realistic 3D scenes from any type of input, such as text, RGB, and RGBD images. There are efforts to use the diffusion model in voxel, point cloud, and implicit neural representation to generate 3D objects and scenes directly [11,25,60], but the results show low diversity and quality due to the limitations in training data based on 3D scans. One way to cope with the issue is to leverage the power of a pre-trained image generation diffusion model, such as Stable Diffusion [39], to create diverse high-quality 3D scenes. Such a big model creates plausible images with a data-driven knowledge learned from the large-scale training data, although it does not guarantee multi-view consistency between the generated images [51].\nIn this work, we propose a pipeline called Lucid-Dreamer that utilizes Stable Diffusion [39] and 3D Gaussian splatting [20] to create diverse high-quality 3D scenes from various types of inputs such as text, RGB, and RGBD. Following the pipeline of LucidDreamer, a unified large point cloud is generated by repeating the two processes named Dreaming and Alignment, alternatively. Before beginning the two process, an initial point cloud is generated by the initial image and the corresponding depth map. Dreaming process includes the generation of geometrically consistent images and the lifting of these images into 3D space. We first move the camera along the pre-defined camera trajectory and project a visible region of point cloud in the new camera coordinate to the new camera plane. Then, the projected image is put into the Stable Diffusion-based inpainting network to generate the complete image from the projected one. A new set of 3D points are generated by lifting the inpainted image and the estimated depth map to the 3D space. Then the proposed alignment algorithm seamlessly connects the new 3D points to the existing point cloud by slightly moving the position of the new points in the 3D space. After the large point cloud generated by repeating the above processes a sufficient number of time is obtained, we use it as the initial SfM points to optimize the Gaussian splats. The continuous representation of 3D Gaussian splats removes the holes generated by the depth discrepancy in the point cloud, enabling us to render more photo-realistic 3D scenes than traditional representations. 
Figure 1 shows the simple process of LucidDreamer and a 3D generation result.\nLucidDreamer exhibits significantly more realistic and astonishing results compared to existing models. We compare the generated 3D scenes conditioned with an image from ScanNet [9], NYUDepth [45], and Stable Diffusion, and show better visual results across all datasets. Our model is capable of generating 3D scenes across diverse domains such as realistic/anime/lego and indoor/outdoor. Not only does our model support various domains, but it also accommodates the simultaneous use of diverse input conditions. For example, by conditioning an image and text together, it generates a 3D scene based on the text but also includes the image. This alleviates the challenges associated with creating the desired scene solely from the text, moving away from generating samples exhaustively. Furthermore, our approach also allows for the change of the input condition while creating the 3D space. These capabilities offer opportunities to create a wide range of 3D scenes, inspiring creativity.\nIn summary, our contributions are as follows.\n• We introduce LucidDreamer, a domain-free high-quality 3D scene generation, achieving better domain generalization in 3D scene generation by leveraging the power of Stable Diffusion, depth estimation, and explicit 3D representation.\n• To generate multi-view images from Stable Diffusion, our Dreaming process establishes point cloud as geometrical guideline for each image generation. Subsequently, our Aligning process harmoniously integrates the generated images to form an unified 3D scene.\n• Our model provides users with the ability to create 3D scenes in various ways by supporting different input types, such as text, RGB, and RGBD, allowing the simultaneous use of multiple inputs, and enabling the change of the inputs during the generation process." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b11", "b15", "b32", "b9", "b55", "b1", "b48", "b25", "b45", "b25", "b13", "b23", "b57", "b54", "b26", "b14", "b37", "b19", "b19", "b16", "b2", "b30", "b41", "b28", "b29", "b53", "b0", "b42", "b18", "b46", "b6", "b24", "b58", "b10", "b56", "b12", "b7", "b21", "b4", "b49" ], "table_ref": [], "text": "3D Scene Representation. Representative methods for expressing 3D scenes include explicit methods such as point cloud, mesh, and voxel. These are widely used because they allow direct and intuitive control of each element and enable fast rendering through the rasterization pipeline. However, they need a large number of elements for detailed expression because of their simple structure. Complex primitives such as cuboid [52], Gaussian [12], ellipsoid [16], superquadrics [33], convex hull [10], and polynomial surface [56] were developed for more efficient expression. Although primitives have increased expressive power for complex geometry, it is still difficult to express realistic 3D scenes because of simple color representation. Recently, there have been works to express more detailed 3D scenes using neural networks as implicit expressions. They train a neural network to express the scene creating the desired properties in 3D coordinates, such as signed distance function [32,49], RGBα [26,46]. In particular, Neural Radiance Fields [26] showed that it was possible to optimize photorealistic 3D scenes from multiple images through volume rendering, but the scene implicitly stored in the network form is difficult to handle and slow. 
To improve this, subsequent studies attempted to use volume rendering in explicit expressions. By utilizing the locality of structures such as sparse voxels [14,24,48,58], featured point clouds [55], Multi-Level Hierarchies [27,28], tensor [5], infinitesimal networks [15,38], triplane [4], polygon [7], and Gaussian splats [20], they greatly improve the training and rendering speed. In particular, 3D Gaussian splatting [20] utilizes the concept of Gaussian splats combined with spherical harmonics and opacity to represent complete and unbounded 3D scenes. It supports not only alpha-blending but also differentiable rasterization, resulting in fast, high-quality 3D scene optimization. This structure is essential for our generation method, which cannot determine the bounds of the scene due to sequential image generation, and plays a role in making the scene complete.\n3D Scene Generation. Inspired by the early success of generative adversarial networks (GANs) [17] in image generation, similar attempts were made in 3D creation. Creating a set of multiview-consistent images [3,31,42], or directly creating voxels [29,30,54] or point clouds [1,43], was studied. However, they suffer from GAN's learning instability [36] and memory limitations in 3D representation, limiting the generation quality. Encouraged by the recent success of diffusion [19,47] in the field of image generation [37,39], there are many attempts to introduce the diffusion model into 3D representation, such as voxel [60], point cloud [25,59], triplane [4, 6, 44], and implicit neural network [11,34,57]. They use object-centric coordinates because of their nature and focus on simple examples. Some generative diffusion models overcome this problem by using a mesh as a proxy and diffusing in the UV space. They create a large portrait scene by continuously building the mesh [13] or create indoor scenes [8,22] and more realistic objects [35,50]. However, their performance falls short of foundation models [39] because they involve training a new diffusion model in a different representation space, which is limited by data availability and computational resources. In comparison, our method leverages the power of the foundation model to generate diverse images and creates reliable 3D scenes through depth estimation and optimization." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "While the range of target scenes of existing scene generation models is strictly restricted due to the limitations of the training dataset, LucidDreamer can generate even more realistic, higher-resolution 3D scenes with much more general input conditions. For instance, LucidDreamer can generate a text-relevant scene if only the text prompt is given. Also, the style of the input image is maintained along the scene, while existing models keep producing scenes that are similar to the style of the training dataset, not the input image.\nThe pipeline of LucidDreamer is broadly divided into two stages: point cloud construction and Gaussian splats optimization. During the first stage, an initial point cloud is formed from the input image, and the area of the point cloud is expanded to create a large scene using Stable Diffusion inpainting and monocular depth estimation. Then, the point cloud and the reprojected images are used to optimize Gaussian splats. By representing the scene with Gaussian splats, we can fill the empty space that appears in the point cloud due to the depth discrepancy."
}, { "figure_ref": [ "fig_0" ], "heading": "Point cloud construction", "publication_ref": [ "b56" ], "table_ref": [], "text": "To generate multi-view consistent 3D point cloud, we create the initial point cloud and aggregate the points by moving back and forth between 3D space and the camera plane while moving the camera. The overall process of point cloud construction is illustrated in Figure 1.\nInitialization. A point cloud generation starts from lifting the pixels of the initial image. If the user gives a text prompt as input, the latent diffusion model is used to generate an image relevant to the given text, and the depth map is estimated using the monocular depth estimation model such as ZoeDepth [2]. We denote the generated or received RGB image and the depth map as I 0 ∈ R 3×H×W and D 0 ∈ R H×W , where H and W are height and the width of the image. The camera intrinsic matrix and the extrinsic matrix of I 0 are denoted as K and P 0 , respectively. For the case where I 0 and D 0 are generated from the diffusion model, we set the values of K and P 0 by convention regarding the size of the image.\nFrom the input RGBD image [I 0 , D 0 ], we lift the pixels into the 3D space, where the lifted pixels will form a point cloud in a 3D space. The generated initial point cloud using the first image is defined as P 0 :\nP 0 = ϕ 2→3 ([I 0 , D 0 ], K, P 0 ) ,(1)\nwhere ϕ 2→3 is the function to lift pixels from the RGBD image [I, D] to the point cloud.\nPoint cloud aggregation. We sequentially attach points to the original point cloud to create a large 3D scene. Specifically, we set the camera trajectory with length N , where P i indicates the position and pose of the camera in the i-th index, then inpaint and lift the missing pixel in each step. Here, the generated points should satisfy two conditions; the images projected from the points should have high perceptual quality and be consistent with image parts produced from the existing points. To achieve the former condition, we borrow the representation power of the Stable Diffusion [39] to the image inpainting task. Navigation. At step i, we first move and rotate the camera from the previous position (P i-1 ) to P i . We change the coordinate from the world to the current camera and project to the camera plane using K and P i .\nDreaming. We denote the projected image at camera P i as Îi . Since the position and the pose of the camera are changed, there would be some regions that cannot be filled from the existing point cloud. We define the mask M i to discriminate the region that is filled by existing points in Îi . Specifically, the value of M i is one if the corresponding pixel is already filled or 0 otherwise. The Stable Diffusion inpainting model (S) is executed to generate a realistic image, I i , from the incomplete image ( Îi ) and the mask (M i ). The corresponding depth map ( Di ) is estimated using the monocular depth estimation network (D).\nHere, the monocular depth estimation model can only estimate the relative depth, and the depth coefficients from the relative depth to the actual depth can be different between images. If the depth coefficients are different, the lifted 3D point clouds in the two generated images are not connected and are spaced apart. We estimate the optimal depth scale coefficient, d i , that minimizes the distance between the 3D points of the new image and the corresponding points in the original point cloud, P i-1 . 
Then the actual depth map, D i , is calculated by multiplying the estimated depth map, Di , by the coefficient d i .\nI i = S( Îi , M i ), Di = D(I i ), D i = d i Di , d i = argmin d Σ Mi=1 ∥ϕ 2→3 (I i , d Di , K, P i ) -P i-1 ∥ 1 . (2)\nHere, M i = 1 implies that the distance of point pairs in the overlapping regions is used for estimating d i .\nUsing the image and the corresponding depth map, [I i , D i ], we lift the pixels to 3D space. Here, we note that only the inpainted pixels of I i are lifted to prevent points from overlapping and to mitigate the inconsistency problem. The output of dreaming, Pi , can be calculated as:\nPi = ϕ 2→3 ([I i , D i |M i = 0] , K, P i ) , (3)\nwhere [I i , D i |M i = 0] indicates the inpainted region in the RGBD image.\nThe overall construction procedure is summarized in Algorithm 1: 1 P 0 ← ϕ 2→3 ([I 0 , D 0 ], K, P 0 ) 2 for i ← 1 to N do 3 Îi , M i ← ϕ 3→2 (P i-1 , K, P i ) 4 I i ← S Îi , M i , Di ← D (I i ) 5 d i ← 1 6 while not converged do 7 Pi ← ϕ 2→3 I i , d i Di , K, P i 8 L d ← 1 ∥Mi=1∥ Mi=1 Pi -P i-1 1 9 Calculate ∇ d L d 10 d i ← d i -α∇ d L d 11 end 12 D i ← d i Di 13 Pi ← ϕ 2→3 ([I i , D i |M i = 0] , K, P i ) 14 P i ← P i-1 ∪ W Pi 15 end\nAlignment. Compared to the way that trains a generative model to generate both RGB and the depth map at once, such as RGBD2 [57], the depth map estimated by an off-the-shelf depth estimation method is more accurate and generalizable to various situations, since off-the-shelf methods are trained on large and various datasets. However, since D 0 , D 1 , ..., D i-1 are not considered when estimating D i , an inconsistency problem occurs when we add the new points, Pi . To overcome the problem, we move the points of Pi in 3D space to attach the two point clouds (P i-1 and Pi ) smoothly. Specifically, we extract the region where the value of the mask changes (|∇M i | > 0) to find the corresponding points to that region in both P i-1 and Pi . Then, we calculate the displacement vector from Pi to P i-1 . However, moving the points in a naive way may distort the shape of the lifted point cloud and create a misalignment between the point cloud and the inpainted image. We mitigate the issue by restricting how the points can move and by using an interpolation algorithm to preserve the overall shape of the points.\nFirst, we force each point in Pi to move along the ray line from the camera center to the corresponding pixel. We find the closest point to the corresponding point in P i-1 along the ray line and record how much the depth changes because of the movement. Using this constraint, we preserve the contents of the RGB image (I i ) even though the points move in 3D space. Next, we assume that the depth does not change at the opposite side of the mask boundary region. Then, for the points that do not have their ground truth counterparts, i.e. M i = 0, we calculate for each pixel how much the depth value should change using linear interpolation. By interpolating smoothly, the mismatch among the pixels caused by the drastic movement is alleviated. The aligned points are combined with the original ones:\nP i = P i-1 ∪ W Pi , (4)\nwhere we denote calculating movement and interpolation as W. We repeat the process N times to construct the final point cloud, P N . By reprojection, P N provides high-quality " }, { "figure_ref": [], "heading": "Rendering with Gaussian Splatting", "publication_ref": [ "b20" ], "table_ref": [], "text": "After the point cloud is created, we train the 3D Gaussian splatting model [21] using the point cloud and the projected images. 
The centers of the Gaussian splatting points are initialized by the input point cloud, and the volume and the position of each point are changed under the supervision of the input ground-truth projected images. We use the generated point cloud (P N ) as the initial SfM points. Initialization with P N will boost the convergence of the network and encourage the network to focus on generating the details of the representation. For the images to train the model, we use additional M images as well as the (N + 1) images used for generating the point cloud, since the initial (N + 1) images are not sufficient to train the network to generate a plausible output. The M new images and the masks are generated by reprojecting from the point cloud P N with a new camera sequence of length M , denoted as P N +1 , ..., P N +M .\nI i , M i = ϕ 3→2 (P N , K, P i ) , i = N + 1, ..., N + M. (5)\nWe note that we do not inpaint I i when optimizing Gaussian splats. Instead, when calculating the loss function, we only consider the valid image region where the mask value is 1. It prevents the model from learning the wrong details of the reprojected images. Since each point is represented as a Gaussian distribution, the missing pixels when training the model are naturally filled, and the rasterized image after the training becomes plausible." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experiment settings", "publication_ref": [ "b44", "b22" ], "table_ref": [], "text": "Datasets. Since LucidDreamer is optimized for every input, we do not need a training dataset to train the model. For the text input, we randomly generate several text prompts relevant to scene images to generate the first image using Stable Diffusion. We use real or generated high-quality images for the RGB input. For the case of RGBD inputs, we use ScanNet [9] and NYUdepth [45] since the two datasets have ground truth depth maps.\nImplementation details. The modules we used to construct LucidDreamer can be either trained using manual design or brought from off-the-shelf models. We use pretrained large-scale off-the-shelf models to compose the whole network to maximize the generalization capability of the network. Specifically, we adopt the Stable Diffusion model [39] to inpaint the masked image. We use the same text prompt input for Stable Diffusion if the first image is generated from the text. If the input format is an RGB(D) image without text, we use LAVIS [23] to generate a caption according to the image and place it in the diffusion inpainting model to generate consistent content. For the camera trajectory that we use to construct the point cloud ({P i } N i=0 ), we create several types of camera trajectory presets in advance, and different types of trajectories were used for different tasks." }, { "figure_ref": [], "heading": "Experiment results", "publication_ref": [], "table_ref": [], "text": "We demonstrate the superiority and high generalizability of LucidDreamer in many aspects. 
We strongly recommend that readers watch the video in the supplementary materials, where we can fully show the strength of our model." }, { "figure_ref": [ "fig_1", "fig_2", "fig_4", "fig_3", "fig_3", "fig_3" ], "heading": "Applicability to various input domains and formats.", "publication_ref": [ "b21", "b44", "b21", "b21", "b21", "b17", "b52" ], "table_ref": [ "tab_0" ], "text": "LucidDreamer is capable of generating a consistent and high-quality 3D scene considering the input style. Figure 2 shows the generated realistic images and the 3D scenes. In the top row, we visualize a result of Text-to-3D. We depict an initial image generated from the given text and the estimated depth in (a). (b) and (c) present the plausible images and geometry generated through our pipeline involving navigation, dreaming, and alignment. We showcase an overview of the final 3D scene in (d). On the other hand, the bottom row demonstrates an example result of RGB-to-3D. We estimated depth from the given RGB and used it as an initial geometry for the scene. Similar to the top row, we generated believable images and geometry, resulting in a high-quality 3D scene.\nSince our model supports multiple inputs, it can generate 3D scenes in various ways as illustrated in Figure 3. The top and middle rows depict outcomes generated by guaranteeing the inclusion of the conditioned RGB during the creation of the 3D scene. Despite different texts, the conditioned RGB is consistently present in the scene. On the other hand, the bottom row displays the outcome of altering the text condition while generating the 3D scene. Through diverse combinations and alterations of conditions, our model facilitates the creation of the desired 3D scene more effortlessly. We illustrate additional example scenes in Figure 5. Our model successfully generates diverse 3D scenes with various styles (e.g. lego, anime) across different camera paths.\nComparison with RGBD2. We qualitatively compare the generation results with RGBD2 [22] and illustrate the result in Figure 4. For fairness of comparison, we compare the results on three images with different domains: generated image, ScanNet, and NYUDepth. For the generated image, the depth map estimated by Zoedepth [2] is considered a ground-truth depth map when processing RGBD2. For ScanNet and NYUDepth, we use the ground truth depth map for both RGBD2 and LucidDreamer when producing a 3D scene. For ScanNet, each scene consists of several images and the corresponding depth maps and camera views. We randomly select one of the given image and depth map pairs and use it as an initial RGBD input. In Figure 4b, we observe that RGBD2 generates (ScanNet-style) images with similar styles regardless of the input image. This remains consistent not only in the initial image but also throughout the following sequence as shown in Figure 4c. We believe the issue arises due to insufficient training data and domain limitations, highlighting the need for a model with sufficient generalization. In contrast, our approach generates high-quality 3D scenes with careful consideration to harmonize well with the input RGB. Moreover, LucidDreamer can generate scenes composed of high-resolution images while RGBD2 can only make 128 × 128-sized images, which are too small to use in real applications. We also document the quantitative results evaluated on CLIP-Score [18] and CLIP-IQA [53] in Table 1. 
We confirm that our model incorporates input conditions well, resulting in the creation of high-quality 3D scenes. " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we propose LucidDreamer, a novel pipeline for domain-free 3D scene generation. By fully exploiting the power of large diffusion models, LucidDreamer is capable of generating high-quality scenes without restriction on the target scene domain. We first generate the point cloud from the input image and repeat the 'Dreaming' and 'Alignment' algorithms to generate multi-view consistent high-quality images and harmoniously integrate them into the existing point cloud in the 3D space. After the construction is finished, the point cloud is converted to 3D Gaussian splats to enhance the quality of the 3D scene. Extensive experiments show that LucidDreamer can consistently generate high-quality and diverse 3D scenes in various situations." } ]
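The lifting operator ϕ 2→3 used throughout the point-cloud construction above (Eqs. 1 and 3) boils down to unprojecting pixels with the intrinsics K and transforming them with the camera pose. The NumPy function below is a minimal sketch under a standard pinhole-camera convention; it is not the authors' implementation, and the pose convention assumed here (camera-to-world) may differ from theirs.

```python
import numpy as np

def lift_rgbd_to_points(image, depth, K, cam_to_world, mask=None):
    """Minimal sketch of phi_{2->3}: unproject pixels of an RGBD image into world space.

    image:        (H, W, 3) RGB values
    depth:        (H, W) metric depth along the camera z-axis
    K:            (3, 3) pinhole intrinsics
    cam_to_world: (4, 4) camera pose (assumed camera-to-world here)
    mask:         optional (H, W) boolean array; only True pixels are lifted
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    if mask is None:
        mask = np.ones((h, w), dtype=bool)
    u, v, z = u[mask], v[mask], depth[mask]

    # Back-project to camera coordinates: X = (u - cx) * z / fx, Y = (v - cy) * z / fy.
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1)      # (N, 4) homogeneous points

    # Move to world coordinates and attach per-point colors.
    pts_world = (cam_to_world @ pts_cam.T).T[:, :3]
    colors = image[mask]
    return pts_world, colors
```

Restricting the lift with the mask (M i = 0 region) corresponds to Eq. (3), where only newly inpainted pixels are added to the cloud.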
Figure 1. Introducing LucidDreamer. We develop LucidDreamer, a general framework for generating multiview-consistent and high-quality 3D scenes from various input types: text, RGB, and RGBD. After the initial point cloud is created by lifting the RGBD image, LucidDreamer maintains and expands its world model by repeating two operations: dreaming and alignment. The 3D scene is finalized through optimizing a Gaussian splatting representation.
LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes
[ { "figure_caption": "Algorithm 1 :1Constructing point cloudInput: A single RGBD image [I 0 , D 0 ] Input: Camera intrinsic K , extrinsics {P i } N i=0 Output: Complete point cloud P N 1", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Intermediate images during point cloud generation and final 3D output between different inputs. We generate 3D scene from different input types (text and RGB image). The input image in the first row is generated image using Stable diffusion. Our model is capable of generating consistent images high-quality 3D scene regardless of input type.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Intermediate images during point cloud generation and final 3D output for different text prompt. We put the different text prompt while having same initial image (I0) and compare the generation results.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure4. Qualitative comparison with RGBD2[22] on various image datasets. We compare LucidDreamer with RGBD2 starting from the same input image while changing the datasets. The scene generated by LucidDreamer always shows higher quality than RGBD2, even on ScanNet [9] which RGBD2 is trained on.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. 3D reconstruction results and short video on various styles. This is a video figure that is best viewed by Adobe Reader.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. The effect of the mask during training. Training with valid masks helps prevent artifacts at the boundaries of the scene.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Score ↑[18] Quality ↑ Colorful ↑ Sharp ↑ Quantitative comparison of generated scenes. We quantitatively compare the results using CLIP-Score and CLIP-IQA with RGBD2. Our model shows better results on all metrics.", "figure_data": "ModelsCLIP-CLIP-IQA [53]RGBD2 [22]0.20350.12790.20810.0126LucidDreamer0.21100.61610.84530.5356", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Reconstruction quality according to the source of initial SfM points. We use the initial point cloud generated by COLMAP[40,41] and compare the reconstruction results. Our model consistently shows better reconstruction metrics.", "figure_data": "ItersSource of SfM pointsMetrics PSNR ↑ SSIM ↑ LPIPS ↓1000COLMAP LucidDreamer23.15 32.590.7246 0.2910 0.9672 0.02723000COLMAP LucidDreamer30.87 33.800.9478 0.0353 0.9754 0.01787000COLMAP LucidDreamer32.52 34.240.9687 0.0208 0.9781 0.0164", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Jaeyoung Chung; Suyoung Lee; Hyeongjin Nam; Jaerin Lee; Kyoung Mu Lee
[ { "authors": "Panos Achlioptas; Olga Diamanti; Ioannis Mitliagkas; Leonidas Guibas", "journal": "", "ref_id": "b0", "title": "Learning representations and generative models for 3d point clouds", "year": "2018" }, { "authors": "Farooq Shariq; Reiner Bhat; Diana Birkl; Peter Wofk; Matthias Wonka; Müller", "journal": "", "ref_id": "b1", "title": "Zoedepth: Zero-shot transfer by combining relative and metric depth", "year": "2023" }, { "authors": "Marco Eric R Chan; Petr Monteiro; Jiajun Kellnhofer; Gordon Wu; Wetzstein", "journal": "", "ref_id": "b2", "title": "pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis", "year": "2021" }, { "authors": "Connor Z Eric R Chan; Matthew A Lin; Koki Chan; Boxiao Nagano; Shalini De Pan; Orazio Mello; Leonidas J Gallo; Jonathan Guibas; Sameh Tremblay; Khamis", "journal": "", "ref_id": "b3", "title": "Efficient geometry-aware 3d generative adversarial networks", "year": "2022" }, { "authors": "Anpei Chen; Zexiang Xu; Andreas Geiger; Jingyi Yu; Hao Su", "journal": "", "ref_id": "b4", "title": "Tensorf: Tensorial radiance fields", "year": "2022" }, { "authors": "Hansheng Chen; Jiatao Gu; Anpei Chen; Wei Tian; Zhuowen Tu; Lingjie Liu; Hao Su", "journal": "", "ref_id": "b5", "title": "Single-stage diffusion nerf: A unified approach to 3d generation and reconstruction", "year": "2023" }, { "authors": "Zhiqin Chen; Thomas Funkhouser; Peter Hedman; Andrea Tagliasacchi", "journal": "", "ref_id": "b6", "title": "Mobilenerf: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures", "year": "2023" }, { "authors": "Dana Cohen-Bar; Elad Richardson; Gal Metzer; Raja Giryes; Daniel Cohen-Or", "journal": "", "ref_id": "b7", "title": "Set-the-scene: Globallocal training for generating controllable nerf scenes", "year": "2023" }, { "authors": "Angela Dai; Angel X Chang; Manolis Savva; Maciej Halber; Thomas Funkhouser; Matthias Nießner", "journal": "", "ref_id": "b8", "title": "Scannet: Richly-annotated 3d reconstructions of indoor scenes", "year": "2017" }, { "authors": "Boyang Deng; Kyle Genova; Soroosh Yazdani; Sofien Bouaziz; Geoffrey Hinton; Andrea Tagliasacchi", "journal": "", "ref_id": "b9", "title": "Cvxnet: Learnable convex decomposition", "year": "2020" }, { "authors": "Emilien Dupont; Hyunjik Kim; S M Eslami; Danilo Rezende; Dan Rosenbaum", "journal": "", "ref_id": "b10", "title": "From data to functa: Your data point is a function and you can treat it like one", "year": "2022" }, { "authors": "Rina Foygel; Mathias Drton", "journal": "", "ref_id": "b11", "title": "Extended bayesian information criteria for gaussian graphical models", "year": "2010" }, { "authors": "Rafail Fridman; Amit Abecasis; Yoni Kasten; Tali Dekel", "journal": "", "ref_id": "b12", "title": "Scenescape: Text-driven consistent scene generation", "year": "" }, { "authors": "Sara Fridovich-Keil; Alex Yu; Matthew Tancik; Qinhong Chen; Benjamin Recht; Angjoo Kanazawa", "journal": "", "ref_id": "b13", "title": "Plenoxels: Radiance fields without neural networks", "year": "2022" }, { "authors": "Stephan J Garbin; Marek Kowalski; Matthew Johnson; Jamie Shotton; Julien Valentin", "journal": "", "ref_id": "b14", "title": "Fastnerf: Highfidelity neural rendering at 200fps", "year": "2021" }, { "authors": "Kyle Genova; Forrester Cole; Daniel Vlasic; Aaron Sarna; William T Freeman; Thomas Funkhouser", "journal": "", "ref_id": "b15", "title": "Learning shape templates with structured implicit functions", "year": "2019" }, 
{ "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "NIPS", "ref_id": "b16", "title": "Generative adversarial nets", "year": "2014" }, { "authors": "Jack Hessel; Ari Holtzman; Maxwell Forbes; Ronan Le Bras; Yejin Choi", "journal": "", "ref_id": "b17", "title": "CLIPScore: a referencefree evaluation metric for image captioning", "year": "2021" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "NeurIPS", "ref_id": "b18", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Bernhard Kerbl; Georgios Kopanas; Thomas Leimkühler; George Drettakis", "journal": "ACM ToG", "ref_id": "b19", "title": "3d gaussian splatting for real-time radiance field rendering", "year": "2023" }, { "authors": "Bernhard Kerbl; Georgios Kopanas; Thomas Leimkühler; George Drettakis", "journal": "ACM ToG", "ref_id": "b20", "title": "3d gaussian splatting for real-time radiance field rendering", "year": "2023" }, { "authors": "Jiabao Lei; Jiapeng Tang; Kui Jia", "journal": "CVPR", "ref_id": "b21", "title": "Rgbd2: Generative scene synthesis via incremental view inpainting using rgbd diffusion models", "year": "2023" }, { "authors": "Dongxu Li; Junnan Li; Hung Le; Guangsen Wang; Silvio Savarese; Steven C H Hoi", "journal": "", "ref_id": "b22", "title": "LAVIS: A onestop library for language-vision intelligence", "year": "2023" }, { "authors": "Lingjie Liu; Jiatao Gu; Kyaw Zaw Lin; Tat-Seng Chua; Christian Theobalt", "journal": "NIPS", "ref_id": "b23", "title": "Neural sparse voxel fields", "year": "2020" }, { "authors": "Shitong Luo; Wei Hu", "journal": "", "ref_id": "b24", "title": "Diffusion probabilistic models for 3d point cloud generation", "year": "2021" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Communications of the ACM", "ref_id": "b25", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2021" }, { "authors": "Thomas Müller; Fabrice Rousselle; Jan Novák; Alexander Keller", "journal": "", "ref_id": "b26", "title": "Real-time neural radiance caching for path tracing", "year": "2021" }, { "authors": "Thomas Müller; Alex Evans; Christoph Schied; Alexander Keller", "journal": "TOG", "ref_id": "b27", "title": "Instant neural graphics primitives with a multiresolution hash encoding", "year": "2022" }, { "authors": "Thu Nguyen-Phuoc; Chuan Li; Lucas Theis; Christian Richardt; Yong-Liang Yang", "journal": "", "ref_id": "b28", "title": "Hologan: Unsupervised learning of 3d representations from natural images", "year": "2019" }, { "authors": "Christian Thu H Nguyen-Phuoc; Long Richardt; Yongliang Mai; Niloy Yang; Mitra", "journal": "NeurIPS", "ref_id": "b29", "title": "Blockgan: Learning 3d object-aware scene representations from unlabelled images", "year": "2020" }, { "authors": "Michael Niemeyer; Andreas Geiger", "journal": "", "ref_id": "b30", "title": "Giraffe: Representing scenes as compositional generative neural feature fields", "year": "2021" }, { "authors": "Jeong Joon Park; Peter Florence; Julian Straub; Richard Newcombe; Steven Lovegrove", "journal": "", "ref_id": "b31", "title": "Deepsdf: Learning continuous signed distance functions for shape representation", "year": "2019" }, { "authors": "Despoina Paschalidou; Ali Osman Ulusoy; Andreas Geiger", "journal": "", "ref_id": "b32", "title": "Superquadrics revisited: Learning 3d shape parsing 
beyond cuboids", "year": "2019" }, { "authors": "Ben Poole; Ajay Jain; Jonathan T Barron; Ben Mildenhall", "journal": "", "ref_id": "b33", "title": "Dreamfusion: Text-to-3d using 2d diffusion", "year": "2022" }, { "authors": "Guocheng Qian; Jinjie Mai; Abdullah Hamdi; Jian Ren; Aliaksandr Siarohin; Bing Li; Hsin-Ying Lee; Ivan Skorokhodov; Peter Wonka; Sergey Tulyakov", "journal": "", "ref_id": "b34", "title": "Magic123: One image to high-quality 3d object generation using both 2d and 3d diffusion priors", "year": "2023" }, { "authors": "Alec Radford; Luke Metz; Soumith Chintala", "journal": "", "ref_id": "b35", "title": "Unsupervised representation learning with deep convolutional generative adversarial networks", "year": "2015" }, { "authors": "Aditya Ramesh; Mikhail Pavlov; Gabriel Goh; Scott Gray; Chelsea Voss; Alec Radford; Mark Chen; Ilya Sutskever", "journal": "", "ref_id": "b36", "title": "Zero-shot text-to-image generation", "year": "2021" }, { "authors": "Christian Reiser; Songyou Peng; Yiyi Liao; Andreas Geiger", "journal": "", "ref_id": "b37", "title": "Kilonerf: Speeding up neural radiance fields with thousands of tiny mlps", "year": "2021" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b38", "title": "Highresolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Johannes Lutz; Schönberger ; Jan-Michael Frahm", "journal": "", "ref_id": "b39", "title": "Structure-from-motion revisited", "year": "2016" }, { "authors": "Johannes Lutz Schönberger; Enliang Zheng; Marc Pollefeys; Jan-Michael Frahm", "journal": "", "ref_id": "b40", "title": "Pixelwise view selection for unstructured multi-view stereo", "year": "2016" }, { "authors": "Katja Schwarz; Yiyi Liao; Michael Niemeyer; Andreas Geiger", "journal": "", "ref_id": "b41", "title": "Graf: Generative radiance fields for 3daware image synthesis", "year": "2020" }, { "authors": "Dong Wook Shu; Sung Woo Park; Junseok Kwon", "journal": "", "ref_id": "b42", "title": "3d point cloud generative adversarial network based on tree structured graph convolutions", "year": "2019" }, { "authors": "Ryan Shue; Eric Ryan Chan; Ryan Po; Zachary Ankner; Jiajun Wu; Gordon Wetzstein", "journal": "", "ref_id": "b43", "title": "3d neural field generation using triplane diffusion", "year": "2023" }, { "authors": "Nathan Silberman; Derek Hoiem; Pushmeet Kohli; Rob Fergus", "journal": "", "ref_id": "b44", "title": "Indoor segmentation and support inference from rgbd images", "year": "2012" }, { "authors": "Julien Vincent Sitzmann; Alexander Martel; David Bergman; Gordon Lindell; Wetzstein", "journal": "NeurIPS", "ref_id": "b45", "title": "Implicit neural representations with periodic activation functions", "year": "2020" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b46", "title": "Denoising diffusion implicit models", "year": "2020" }, { "authors": "Cheng Sun; Min Sun; Hwann-Tzong Chen", "journal": "", "ref_id": "b47", "title": "Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction", "year": "2022" }, { "authors": "Towaki Takikawa; Joey Litalien; Kangxue Yin; Karsten Kreis; Charles Loop; Derek Nowrouzezahrai; Alec Jacobson; Morgan Mcguire; Sanja Fidler", "journal": "", "ref_id": "b48", "title": "Neural geometric level of detail: Real-time rendering with implicit 3d shapes", "year": "2021" }, { "authors": "Jiaxiang Tang; Jiawei Ren; Hang Zhou; Ziwei Liu; Gang Zeng", "journal": 
"", "ref_id": "b49", "title": "Dreamgaussian: Generative gaussian splatting for efficient 3d content creation", "year": "2023" }, { "authors": "Shitao Tang; Fuyang Zhang; Jiacheng Chen; Peng Wang; Yasutaka Furukawa", "journal": "", "ref_id": "b50", "title": "Mvdiffusion: Enabling holistic multi-view image generation with correspondence-aware diffusion", "year": "2023" }, { "authors": "Shubham Tulsiani; Hao Su; Leonidas J Guibas; Alexei A Efros; Jitendra Malik", "journal": "", "ref_id": "b51", "title": "Learning shape abstractions by assembling volumetric primitives", "year": "2017" }, { "authors": "Jianyi Wang; Kelvin Ck Chan; Chen Change Loy", "journal": "", "ref_id": "b52", "title": "Exploring clip for assessing the look and feel of images", "year": "2023" }, { "authors": "Jiajun Wu; Chengkai Zhang; Tianfan Xue; Bill Freeman; Josh Tenenbaum", "journal": "", "ref_id": "b53", "title": "Learning a probabilistic latent space of object shapes via 3d generativeadversarial modeling", "year": "2016" }, { "authors": "Qiangeng Xu; Zexiang Xu; Julien Philip; Sai Bi; Zhixin Shu; Kalyan Sunkavalli; Ulrich Neumann", "journal": "", "ref_id": "b54", "title": "Point-nerf: Point-based neural radiance fields", "year": "2022" }, { "authors": "Mohsen Yavartanoo; Jaeyoung Chung; Reyhaneh Neshatavar; Kyoung Mu; Lee ", "journal": "", "ref_id": "b55", "title": "3dias: 3d shape reconstruction with implicit algebraic surfaces", "year": "2021" }, { "authors": "Tackgeun You; Mijeong Kim; Jungtaek Kim; Bohyung Han", "journal": "", "ref_id": "b56", "title": "Generative neural fields by mixtures of neural implicit functions", "year": "2023" }, { "authors": "Alex Yu; Ruilong Li; Matthew Tancik; Hao Li; Ren Ng; Angjoo Kanazawa", "journal": "", "ref_id": "b57", "title": "Plenoctrees for real-time rendering of neural radiance fields", "year": "2021" }, { "authors": "Xiaohui Zeng; Arash Vahdat; Francis Williams; Zan Gojcic; Or Litany; Sanja Fidler; Karsten Kreis", "journal": "", "ref_id": "b58", "title": "Lion: Latent point diffusion models for 3d shape generation", "year": "2022" }, { "authors": "Linqi Zhou; Yilun Du; Jiajun Wu", "journal": "", "ref_id": "b59", "title": "3d shape generation and completion through point-voxel diffusion", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 365.48, 510.05, 179.63, 9.68 ], "formula_id": "formula_0", "formula_text": "P 0 = ϕ 2→3 ([I 0 , D 0 ], K, P 0 ) ,(1)" }, { "formula_coordinates": [ 4, 50.11, 372.12, 238.15, 50.52 ], "formula_id": "formula_1", "formula_text": "I i = S Îi , M i , Di = D (I i ) , D i = d i Di , d i = argmin d Mi=1 ϕ 2→3 I i , d Di , K, P i -P i-11" }, { "formula_coordinates": [ 4, 92.81, 540.31, 193.55, 12.17 ], "formula_id": "formula_2", "formula_text": "Pi = ϕ 2→3 ([I i , D i |M i = 0] , K, P i ) ,(3)" }, { "formula_coordinates": [ 4, 76.89, 560.7, 64.95, 9.68 ], "formula_id": "formula_3", "formula_text": "[I i , D i |M i = 0]" }, { "formula_coordinates": [ 4, 308.86, 132.57, 195.89, 251.38 ], "formula_id": "formula_4", "formula_text": "P 0 ← ϕ 2→3 ([I 0 , D 0 ], K, P 0 ) 2 for i ← 1 to N do 3 Îi , M i ← ϕ 3→2 (P i-1 , K, P i ) 4 I i ← S Îi , M i , Di ← D (I i ) 5 d i ← 1 6 while not converged do 7 Pi ← ϕ 2→3 I i , d i Di , K, P i 8 L d ← 1 ∥Mi=1∥ Mi=1 Pi -P i-1 1 9 Calculate ∇ d L d 10 d i ← d i -α∇ d L d 11 end 12 D i ← d i Di 13 Pi ← ϕ 2→3 ([I i , D i |M i = 0] , K, P i ) 14 P i ← P i-1 ∪ W Pi 15 end changes (|∇M i | > 0)" }, { "formula_coordinates": [ 4, 380.02, 658.7, 165.09, 12.17 ], "formula_id": "formula_5", "formula_text": "P i = P i-1 ∪ W Pi ,(4)" }, { "formula_coordinates": [ 5, 55.09, 646.07, 231.27, 9.68 ], "formula_id": "formula_6", "formula_text": "I i , M i = ϕ 3→2 (P N , K, P i ) , i = M + 1, ..., M + N. (5)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b50", "b61", "b11", "b40", "b3", "b51", "b67", "b4", "b12", "b24", "b41", "b56", "b66", "b68", "b69", "b71", "b14", "b15", "b23", "b32", "b60", "b70", "b6", "b13", "b42", "b22", "b30", "b37", "b45", "b7", "b46", "b47", "b53", "b26", "b5", "b37", "b63", "b5", "b37", "b30", "b37" ], "table_ref": [], "text": "Volumetric image segmentation, involving extracting 3D regions of interest, such as organs, lesions, and tissues, plays a pivotal role in medical image analysis by accurately modeling the 3D structural information of the human body from volumetric medical images such as CT or MRI. This technique benefits numerous clinical applications including tumors monitoring [51,62], surgical planning [12,41], disease diagnosis [4], therapy optimization [52,68], etc.\nCompared to 2D medical image segmentation [5,13,25,42,57,67,69,70,72], volumetric image segmentation is notably more challenging due to the labor-intensive annotation and resource-consuming computation. The research of volumetric medical image segmentation has garnered substantial attention, leading to a series of advancements [15,16,24,33,61,71]. However, there exist several key limitations of the above-mentioned methods, which prevent their application in challenging tasks, e.g., liver tumor or colon cancer segmentation [2,7,14,43], and real-world tasks, e.g., human-interactive segmentation [23,31,38,46]. Firstly, the publicly available volumetric medical image datasets usually consist of a small number of mask annotations from a few varying categories. Due to the different label spaces, the traditional task-specific segmentation models trained on one dataset have difficulty in generalizing to others. For example, the CT-ORG dataset [2,8,47,48] contains 'lungs' category, while this category is split into two sub-classes and named 'left lung' and 'right lung' in the LUNA16 dataset [54]. The main reason is that these models do not understand the semantics of anatomical categories. Secondly, traditional segmentation models have inferior performance when segmenting complex structures, such as tumors and cysts [27]. This is because these models are trained on insufficient data and are also not able to leverage the spatial information through user interaction. Thirdly, previous solutions are computationally expensive in the inference process. They typically employ a sliding window to infer the whole volumetric input. This strategy is not only timeconsuming but also short-sighted, as the sliding window contains only local information. Recently, there have been some works [6,38,64] that introduce spatial prompts into medical image segmentation. However, most of them lack the ability to process the 3D input directly, e.g. [6,38], and none of them is able to understand the semantics of anatomical categories.\nMotivated by the success of 2D image analysis [31,38], we present the first foundation model, SegVol, for volumetric medical image segmentation. SegVol enables universal and interactive segmentation of more than 200 anatomical categories, supporting both spatial and semantic prompts. SegVol is built on a lightweight architecture, ensuring its efficiency for practical medical image analysis. We summarize the key features of SegVol as follows: umes and the supervised fine-tuning on 25 public volumetric medical image segmentation datasets.\n2. 
Enable semantic-prompt segmentation on over 200 anatomical categories by integrating the language model into the segmentation model.\n3. Employ a synergistic mechanism to coordinate the spatial-prompt and semantic-prompt in the model and achieve high-precision segmentation.\n4. Design a zoom-out-zoom-in strategy that significantly reduces the computational cost, meanwhile preserving precise segmentation.\nWe extensively evaluate the proposed SegVol on 10 internal validation tasks and 18 external validation tasks, which encompass a variety of anatomical structures including organs, tissues, and lesions. The internal validation experiments show that SegVol outperforms the traditional task-specific segmentation models, and the external validation experiments demonstrate that our method surpasses the state-of-the-art interactive models by a large margin. These experimental results verify the capabilities of SegVol as a foundation model for universal and interactive volumetric medical image segmentation." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "SegVol: a 3D foundation model for volumetric medical image segmentation", "publication_ref": [], "table_ref": [], "text": "It is a long-standing challenge in medical image analysis to build a 3D foundation model that is capable of handling a wide range of segmentation tasks while achieving precise segmentation results. This challenge has two critical aspects. On the one hand, volumetric medical imaging datasets are usually dispersed and small in scale, making it challenging to train a comprehensive 3D foundation model that can generalize well across diverse datasets. On the other hand, segmentation tasks in medical image analysis encompass a wide range of semantic categories and spatial scales, ranging from organ to lesion segmentation, further complicating the development of a general solution.\nTo establish a universal volumetric segmentation model proficient in multiple tasks, we collect 25 CT volume segmentation datasets from public medical datasets and process them into a joint dataset, involving popular segmentation tasks. A total of 5,772 CT volumes of the joint dataset participate in the training and internal validation, with 149,199 volumetric masks and semantics. The collected joint dataset includes major regions of the human body, i.e., the head, neck, thorax, abdomen, and pelvis, comprising over 200 categories of organs and tissues, and 28 lesion tasks from different benchmarks. Some representative samples are shown in Fig. 1 b. The detailed information of joint dataset can be obtained from Supplementary Tables 1, 2, and Fig. 2.\nDeveloping a universal segmentation model is challenging due to two main obstacles. One is the large number of categories leading to ambiguous semantics, where the same voxel may correspond to multiple targets. Another obstacle lies in the wide range of spatial scales of targets, varying from small lesions to large organs, and the complex and diverse structures of targets in the space. To address these challenges, our method adopts innovative strategies demonstrated in Fig. 1 a. As a universal model, our approach delivers accurate segmentation results for over 200 important organs, tissues, and lesions by leveraging text prompts to clarify semantic references. 
Furthermore, as a precise segmentation model, SegVol introduces point and bbox(bounding box) spatial prompts to guide the segmentation of anatomical structures, thus achieving high-precision segmentation performance. By leveraging these techniques, our method navigates the ambiguous semantics arising from numerous categories and accommodates the varying spatial scales and complex structures of targets, ensuring robust and precise segmentation across a wide range of medical imaging tasks." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_2", "fig_3" ], "heading": "Internal validation compared with task-specific segmentation models", "publication_ref": [ "b32", "b23", "b14", "b31", "b57", "b23" ], "table_ref": [ "tab_1" ], "text": "Task-specific segmentation models mainly fall into two architectures, CNN-based models and Transformer-based models. We conduct internal comparative experiments with representative CNN-based models i.e. 3DUX-Net [33] and nnU-Net [24], and representative Transformer-based models i.e. SwinUNETR [15]. We conduct internal validation experiments on the test set of joint dataset, which is not observed during the training phase of SegVol. The 10 internal segmentation tasks are selected from BTCV [32] and MSDspleen [58] datasets, which focus on organ segmentation, and from MSD-lung, MSD-colon, and MSD-liver datasets, which focus on lesion segmentation. We train task-specific segmentation models on each dataset individually for each method.\nThe quantitative experimental results are summarized in Fig. 2 a. Generally speaking, SegVol, jointly trained on 25 datasets, outperforms traditional task-specific segmentation models trained on a single dataset. Compared to these strong baselines, SegVol exhibits a narrower distribution of DSC scores across the eight tasks, indicating its robustness and good generalization ability. This mainly owes to the massive knowledge learned from diverse samples of the same categories but different datasets. SegVol depicts excellent performance on lesion tasks which are more challenging in semantic understanding and spatial locating. We present a detailed comparison to nnU-Net [24] on lesion tasks. As shown in Fig. 2 c, the average Dice score of SegVol is 14.76% higher than that of nnU-Net for lesion tasks. We visualize the prediction results of the two methods in Fig. 2 d, which intuitively show that SegVol performs more precise segmentation of the tumors than nnU-Net. The detailed scores and visualization results of interval validation are presented in Supplementary Table 3 and Fig. 345.\nWe analyze that there are mainly three factors that make SegVol more powerful than traditional task-specific models: 1) Massive generative pre-training on unlabeled data endows SegVol with a complete understanding of the volumetric structures and the discriminative feature representations, which is much superior to learning from a small number of samples. 2) Learning from joint datasets with semantic prompts makes SegVol generalize better to unseen data and categories. For instance, SegVol can learn from both 'left kidney' and 'kidney' categories based on their semantic correlation, while traditional task-specific models treat the two categories independently. 3) SegVol can be prompted with (spatial) points/bboxes, which provide a precise spatial reference, and (semantic) texts, which disambiguate the overlap of multiple categories in the same space. In contrast, traditional methods are not able to understand semantics. 
This ability enables SegVol to perform better than traditional methods in challenging tasks, e.g., segmenting lesions." }, { "figure_ref": [ "fig_2", "fig_2", "fig_2", "fig_3" ], "heading": "External validation compared with interactive methods", "publication_ref": [ "b37", "b5", "b63", "b25", "b8", "b37", "b30", "b30", "b5", "b63", "b27", "b28", "b29" ], "table_ref": [ "tab_3", "tab_2" ], "text": "Several efforts have been made to construct an interactive segmentation model. However, some of these works, such as MedSAM [38] and SAM-MED2D [6], focus on 2D tasks and cannot process 3D input directly. The other 3Dbased methods, such as SAM-MED3D [64], only support small cropped input and lack semantic information support, which is still far from building a comprehensive foundation model for volumetric medical image analysis. To compare with these interactive segmentation models, we performed external validation experiments on 1,738 cases from the validation set of AMOS22 [26] and the whole novel annotated set of Universal Lesion Segmentation Challenge 23(ULS23) [9]. The validation set of AMOS22 contains 120 cases annotated with 15 main organs. The novel annotated ULS23 dataset is composed of three subsets, namely, DeepLesion3D, Radboudumc Bone, and Radboudumc Pancreas. The DeepLesion3D subset contains 200 abdominal lesions, 100 bone lesions, 50 kidney lesions, 50 liver lesions, 100 lung lesions, 100 mediastinal lesions, and 150 assorted lesions cases. There are 744 bone lesion cases in the Radboudumc Bone subset and 124 pancreas lesion cases in the Radboudumc Pancreas subset.\nThe quantitative results of external validation experiments are shown in Fig. 3. The Fig. 3 a illustrates our method is the best in most of the tasks including lesions and organs, compared to other SAM-like interactive models. MedSAM [38] and SAM(bounding box) [31] use bounding box prompts. SAM(5 clicks) [31], SAM-MED2D [6] and SAM-MED3D [64] use point prompts and a five-step correction procedure, which means that the point prompt in each step will be given according to the previous-step output and ground truth, rather than giving all at once. In this experiment, our SegVol uses bounding box and text prompt which performs better than other kinds of prompt combinations. We provide the ablation study in Fig. 3 b, which shows the good performance of SegVol among different prompt types, especially bbox(bounding box) prompt and text+bbox prompt. Note that the category of each mask in ULS23 is not clearly defined. Thus, we give a general text (i.e. 'tumor' or 'lesion') to prompt SegVol, and it is compatible with such general prompts and performs well. The detailed scores and visualization results of external validation are presented in Supplementary Table 5 and Fig. 678.\nIn Fig. 4, we visualize the segmentation results in 4 important organ categories to study the differences within these interactive models. Due to the lack of understanding of the 3D structure, 2D methods like MedSAM and SAM(bbox) present worse results. Although the 3D segmentation method, SAM-MED3D, performs well in the easy aorta case, it demonstrates poor segmentation results in others, especially the pancreas and stomach cases. SegVol achieves good segmentation results stably in all categories, relying on its full understanding of 3D spatial structures and semantics.\nIn addition, we discuss the generalization performance of SegVol on an external MRI dataset. 
We collect 60 MRI scans annotated with 4 key organ categories from CHAOS [28][29][30] dataset and evaluate the generalization ability to unseen modality of SegVol. It achieves median Dice scores of 85.70%, 80.09%, 80.04%, and 81.46% for liver, spleen, left kidney, and right kidney, respectively. This generalization result demonstrates the robustness of SegVol in the face of completely unseen modality data. The detailed scores and visualization results are presented in Supplementary Table 4 and Fig. 9." }, { "figure_ref": [], "heading": "The interaction relationship between spatialprompt and semantic-prompt", "publication_ref": [ "b30" ], "table_ref": [], "text": "As a universal model, our approach achieves precise segmentation for over 200 organs, tissues, and lesions using both spatial and semantic prompts. In Fig. 5 a, we quantitatively analyze the mutually supportive relationship between semantic-prompt and spatial-prompt in 19 internal segmentation tasks. On the one hand, spatial prompts allow the model to locate the specific part in the 3D space. According to Fig. 5 a, the average Dice score of 'bbox+text' prompt is boosted by 5.85% compared to the 'text' prompt on average. On the other hand, semantic prompts clarify the reference to the anatomical structure, eliminating the ambiguity of spatial prompts and the plausible masks of multiple categories. This is reflected in Fig. 5 a as the average Dice score of 'point+text' prompts is 4.62% higher than using 'point' prompts alone. Spatial and semantic prompts mutually support each other, ultimately endowing the model with powerful segmentation capabilities. Alexander Kirillov, etc. [31] discuss the multiple plausible outputs problem in the spatial prompt setting. As illustrated in the images on the top left in Fig. 5 b, they are applied with the same point prompt which reasonably corresponds to three concepts, namely, kidney tumor, left kidney, and the whole kidney. Similarly, in the bottom left images, the bounding box selects the region of liver. However, liver tumors, hepatic vessels, and liver itself are also plausible target structures. In these cases, SAM chooses to return multiple masks to match different levels of plausible results. Un-like SAM's solution, we use semantic prompts to clarify the targets. As shown in Fig. 5 b, the captions below the images are text prompts, and the masks in the images are the predictions of SegVol, which shows that the semantic prompts can effectively disambiguate the text prompts.\nFurthermore, we study the possibility of SegVol to reflect spatial prompts to semantic categories. Fig. 5 c reveals that SegVol can give accurate semantic categories based on the spatial prompts. In the top left image in Fig. 5 c, the spatial prompt on the liver results in a 0.997 prediction score for liver. The top right image in the sub-figure shows if the spatial prompt is the point on the liver tumor, SegVol will output a 0.619 prediction score for tumor category and a 0.339 prediction score for liver based on the spatial relationship of liver tumor and liver. We implement this reflection experiment by decoding the semantic prompts from a category set and applying the softmax function among the logits of semantic prompts on the predicted mask voxels to get the prediction probabilities of different categories." 
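A minimal sketch of this reflection step is given below. It assumes a SegVol-like interface; the attribute names (image_encoder, text_encoder, prompt_encoder, mask_decoder), the tensor shapes, and the zero-logit mask threshold are illustrative assumptions rather than the released API.

import torch

@torch.no_grad()
def reflect_prompt_to_category(model, image, spatial_prompt, candidate_texts):
    # Rank candidate category names for the structure selected by a spatial prompt.
    z_image = model.image_encoder(image)                         # patch embeddings
    scores = []
    for text in candidate_texts:
        z_text = model.text_encoder(text)                        # CLIP-style text embedding
        z_prompt = model.prompt_encoder(spatial_prompt, z_text)  # joint prompt embedding
        logits = model.mask_decoder(z_image, z_prompt, z_text)   # voxel-wise mask logits
        mask = logits > 0                                         # predicted mask voxels
        # Assumes the prompt yields a non-empty predicted mask for every candidate text.
        scores.append(logits[mask].mean())
    probs = torch.softmax(torch.stack(scores), dim=0)             # compare categories
    return dict(zip(candidate_texts, probs.tolist()))

The softmax here compares the mean in-mask logit obtained with each candidate text prompt, mirroring the softmax-over-semantic-prompt-logits procedure described above.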
}, { "figure_ref": [ "fig_1" ], "heading": "Scaling up training data", "publication_ref": [ "b30", "b44", "b31" ], "table_ref": [], "text": "The success of scaling up has been witnessed in multiple computer vision tasks [31,45]. We conduct an ablation study to investigate the importance of scaling up training images and masks. The BTCV dataset [32], which includes 13 main organs, is set as an anchor to evaluate the model trained separately on 1, 2, and 8 datasets for 500 epochs, as well as the final model trained on 25 datasets. The detailed results are shown in Fig. 2 b. As a lightweight model, the performance is weak when only one dataset is used. However, with the increase of training data, the Dice score increases rapidly, especially in the text prompt setting. The results indicate that our method is scalable and better performance can be achieved if more training data is available." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b23", "b14", "b23", "b32", "b60", "b5", "b30", "b37", "b63", "b50", "b40" ], "table_ref": [], "text": "We present SegVol, a 3D foundational model for interactive and universal volumetric medical image segmentation. This method has been developed and evaluated using 25 open-source datasets, 10 internal validation tasks, and 18 external validation tasks. Unlike the traditional volumetric segmentation method, nnU-Net [24], which automatically configures settings for every dataset, SegVol is designed to unify various volumetric segmentation datasets into a single architecture. This results in a universal segmentation tool capable of generating accurate responses for over 200 anatomical targets. Furthermore, SegVol demonstrates state-of-the-art volumetric segmentation performance when compared with both traditional task-specific methods [15,24,33,61] and the recent interactive methods [6,31,38,64] in internal validation and external validation experiments, respectively. Despite its universality and high precision, SegVol maintains a lightweight architecture compared to other volumetric segmentation methods. We have made SegVol an open-source foundational model, readily applicable to a broad spectrum of medical image representation and analysis fields. This ensures it can be easily integrated and utilized by researchers and practitioners alike.\nSegVol's capability of interactive and precise segmentation makes it a promising clinical aid tool. It can assist clinicians in identifying and quantifying tumor location, size, and shape changes within a patient's body [51] more accurately and rapidly. This precise monitoring aids clinicians in detecting tumor growth trends, assessing treatment effectiveness, and adjusting treatment plans as needed. Additionally, clinicians can use SegVol to accurately identify and segment important structures within a patient's body, such as organs, blood vessels, or the precise location of tumors and surrounding tissues, using high-resolution 3D images such as CT volumes. These precise segmentation results help clinicians better understand the patient's anatomical structures, plan surgical pathways, reduce surgical risks, and improve the accuracy and success rate of surgeries [41].\nWhile SegVol is capable of understanding semantic prompts composed of sentences, there remains a gap between it and the referring expression segmentation that involves complex semantic information and logical relationships. 
The establishment of a referring expression segmentation model needs more curated data with spatial annotations with text. Our SegVol provides a foundation for realizing referring segmentation of medical images, and we leave as future work.\nWe primarily use SegVol with Computed Tomography (CT) data due to its advantages of easy acquisition, wide us-age, and high resolution. CT is also the preferred method for evaluating solid tumors. Furthermore, the flexible architecture of SegVol allows it to be compatible with various types of volumetric medical images, like MRI. The current framework also allows for the direct addition of new training data in the same format, even if the new data are from unseen categories. This means that the model can inherit all previous knowledge and continue learning in new fields. This adaptability and continuous learning capabilities make SegVol a promising and broadly used tool in the field of medical image analysis." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Data processing", "publication_ref": [ "b27", "b28", "b29", "b43", "b25", "b38", "b21", "b17", "b18", "b54", "b55", "b20", "b31", "b7", "b48", "b49", "b58", "b39", "b57", "b64", "b7", "b46", "b47", "b33", "b35", "b52", "b19", "b0", "b57", "b53", "b36", "b10", "b10" ], "table_ref": [], "text": "One of the main challenges in volumetric medical image segmentation is the absence of large-scale publicly available volumetric medical data, especially the annotated segmentation CTs. Doing our utmost, we collected 25 opensource segmentation CT datasets, including CHAOS [28][29][30], HaN-Seg [44], AMOS22 [26], AbdomenCT-1k [39], KiTS23 [22], KiPA22 [18,19,55,56], KiTS19 [21], BTCV [32], Pancreas-CT [8,49,50], 3D-IRCADB [59], FLARE22 [40,58], TotalSegmentator [65], CT-ORG [2,8,47,48], VerSe19, VerSe20 [34,36,53], SLIVER07 [20], QUBIQ [1], six MSD datasets [58], LUNA16 [54], and WORD [37]. These CTs originate from various medical institutions, captured by different machines with varying parameter settings and scanning regions. These factors result in a wide data distribution, and thus significant challenges in data processing.\nTo standardize these datasets, we perform the following transformation on every CT scan. Firstly, we set a threshold based on the mean voxel value of each volume. Voxels with values that are above this threshold are retained. Then, we calculate the 99.95 th and 0.05 th percentiles of the remaining voxels and use them as the upper and lower bounds to clip the original voxels and obtain the foreground. Finally, we normalize the foreground voxels using the mean and standard deviation.\nVolumetric segmentation datasets suffer from the notorious problem of partial labels. Most of the datasets have annotations of only a few segmentation targets, e.g., several organs. Therefore, the deep models may learn the spurious correlation between datasets and segmentation targets, and produce inferior results during the inference phase. To relieve this problem, we introduce the pseudo labels by utilizing the Felzenswalb-Huttenlocher (FH) [11] algorithm to generate pseudo masks for each CT scan.\nThe unsupervised segmentation algorithm FH [11] separates the spatial structures based on the gradient between adjacent voxels. 
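Before this FH step, the intensity standardization described above can be sketched as follows. This is a minimal NumPy version; the choice of which statistics feed the final z-score is an assumption rather than the released implementation.

import numpy as np

def standardize_ct(volume: np.ndarray) -> np.ndarray:
    # 1) Keep voxels above the per-volume mean as the rough foreground.
    retained = volume[volume > volume.mean()]
    # 2) Use the 0.05th / 99.95th percentiles of the retained voxels as clip bounds.
    lo, hi = np.percentile(retained, [0.05, 99.95])
    clipped = np.clip(volume, lo, hi)
    # 3) Normalize with the mean and standard deviation of the clipped foreground (assumed).
    return (clipped - clipped.mean()) / (clipped.std() + 1e-8)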
However, pseudo masks derived by the FH algorithm contain substantial noise and numerous small masks, for example, the disconnection of a complete struc-ture and the wrong connection of different structures. To improve the pseudo masks, we employ the following strategies: 1) The pseudo masks are replaced with ground truth masks when applicable. 2) We filter out tiny structures smaller than 1‰ of the whole volume. 3) Each mask is refined by dilation and erosion operations." }, { "figure_ref": [], "heading": "Model architecture", "publication_ref": [ "b14", "b15", "b23", "b32", "b60", "b70", "b30", "b9", "b16", "b65", "b44", "b44", "b34", "b30", "b59", "b62" ], "table_ref": [], "text": "The volumetric medical image segmentation dataset D = {(x i , y i )} consists of many pairs of 3D images and mask labels. Each data pair has a 3D image datum x i ∈ R C×D×H×W and K mask labels y i ∈ {0, 1} K×D×H×W , corresponding to K target categories. The classic segmentation model [15,16,24,33,61,71] F( * , θ) learns to predict masks y i belonging to the K categories based on the volumetric input x i , i.e., o i = F(x i , θ), where o i ∈ R K×D×H×W . Therefore, the traditional models are not able to generalize to unseen categories.\nMotivated by the recent advance in 2D nature image segmentation, Segment Anything (SAM) [31], we design a novel method for interactive and universal volumetric medical image segmentation, named, SegVol. We illustrate the model in Fig. 1 a. SegVol supports three types of prompts for interactive segmentation: 'bbox' prompt, b ∈ R 6 representing the coordinates of two diagonal vertices; 'point' prompt, including a set of (P ) points p ∈ R P ×3 ; and 'text' prompt, such as 'liver' or 'cervical spine C2', which is tokenized to tensor t. SegVol consists of four modules, namely, image encoder F IE ( * , θ IE ), text encoder F TE ( * , θ TE ), prompt encoder F PE ( * , θ PE ), and mask decoder F MD ( * , θ MD ). We introduce each module in the following.\nWe employ ViT (Vision Transformer) [10] as the image encoder, which exhibits remarkable advantages over convolutional models [17] when pre-trained on large-scale datasets. We first pre-train ViT using SimMIM algorithm [66] on the all collected 96K CTs, and then conduct further supervised fine-tuning on the 6K CTs with 150K labeled segmentation masks. The image encoder, denoted as F IE ( * , θ IE ), takes a volumetric image x ∈ R C×D×H×W as input. Firstly, it splits x into a set of patches, denoted as\nx patch ∈ R N ×(C×P D ×P H ×P W )\n, where N = D×H×W P D ×P H ×P W . P D , P H and P W are the size of patch. These patches are then fed into the network, which outputs an embedding z image = F IE (x patch , θ IE ), z image ∈ R N ×F . F represents the feature dimension, which is set to 768 by default in this paper.\nOne main limitation of traditional segmentation models is that the models learn dataset-specific labels encoded as integers which cannot generalized to new datasets or tasks, limiting their real-world applications. We enable universal segmentation across datasets by leveraging the text prompt. We employ the text encoder from CLIP model [45] to encode the input text prompt, as CLIP [45] has been trained to align image and text on web-scale image-text pairs. We denote the text prompt encoder as F TE ( * , θ TE ). Given a word or phrase as prompt, we complete it using the template s ='A computerized tomography of a [text prompt]' [35]. s is then tokenized into t. 
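The templating and tokenization just described can be sketched with a Hugging Face CLIP tokenizer; the checkpoint name below is a placeholder and not necessarily the CLIP variant used by SegVol.

from transformers import CLIPTokenizer

# Placeholder checkpoint; the exact CLIP variant is an assumption.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

def tokenize_prompt(category: str):
    s = f"A computerized tomography of a {category}"        # template from the paper
    return tokenizer(s, padding=True, return_tensors="pt")  # token tensor t

t = tokenize_prompt("liver")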
The text encoder accepts t as input and outputs the text embedding z text = F TE (t, θ TE ), where z text ∈ R F . We freeze the off-the-shelf text encoder during training since the text data in CT datasets is a small amount.\nFollowing SAM [31], we use the positional encoding [60] for point prompt p and bbox prompt b and obtain the point embedding z point ∈ R F and bbox embedding z bbox ∈ R F . We concatenate the embeddings of three kinds of prompts as\nz prompt = F PE (p, b, s, θ PE ) = [z point , z bbox , z text ].\nAfter obtaining the image embedding z image , prompt embedding z prompt and text embedding z text , we input them to the mask decoder and predict the mask p = F MD (z image , z prompt , z text , θ MD ). We use self-attention and cross-attention [63] in two directions to blend the image embedding and prompt embedding, and then employ the transposed convolutions and interpolation operations to generate masks. Since the text embedding is the key to universal segmentation and it is also harder to learn the correlation between text and volumetric regions, we reinforce the text information by introducing a parallel text input z text beside the joint prompt embedding z prompt . We further compute a similarity matrix between the up-scaled embedding from the transposed convolution output and the text embedding in the mask decoder. The element-wise multiplication of the similarity matrix with the mask prediction is applied before interpolation, after which the model outputs the masks." }, { "figure_ref": [], "heading": "Prompt generation", "publication_ref": [ "b10" ], "table_ref": [], "text": "SegVol can accept multiple prompt types, including individual point prompts, bbox prompts, and text prompts, and also their combinations. To make full use of the segmentation training data, we generate kinds of prompts for each datum. Then, the prompt and mask pairs are used to compute the training loss. SegVol supports 'point' prompts, 'bbox' prompts, and 'text' prompts.\nThe point prompt is built from ground truth or pseudo masks, consisting of three kinds of points, namely, positive point, negative point, and ignore point. Positive point means that it is within the target mask region, while negative points are those outside. The ignore points are used for input completion, which will be disregarded by the model so that the point prompt has the same length.\nThe bbox prompt is generated based on the ground truth or pseudo masks, integrated with random jitter to enhance the model's robustness. When generating the bbox prompt for some pseudo mask, the bbox may also cover other masks due to the irregular 3D shapes. To address this problem, we compute the Intersection over Union (IOU) between the generated bbox and the included pseudo masks. Any mask with an IOU greater than 0.9 will also be integrated and con-sidered as part of the target mask corresponding to this bbox prompt.\nThe bbox and point prompts can be generated by sampling points based on the ground-truth segmentation masks, while text prompts are constructed based on their category names. As pseudo masks produced by the unsupervised FH algorithm [11] do not have the semantic information, we only use point and bbox prompts when training with pseudo masks." }, { "figure_ref": [], "heading": "Loss function", "publication_ref": [ "b65" ], "table_ref": [], "text": "We apply SimMIM algorithm [66] to pre-train the image encoder of SegVol using the masked image modeling loss L pre-training (θ IE ; D 1 ). 
The loss function is as follows:\nL pre-training (θ IE ; D 1 ) = 1 Ω(x M ) ||y M -x M || 1 ,(1)\nwhere x, y ∈ R D×H×W are the input voxel values and predicted values, respectively. M denotes the set of masked voxels, Ω(•) is the number of elements, and D 1 is the pretraining dataset.\nWe combine the binary cross-entropy (BCE) loss and Dice loss as the supervised fine-tuning loss function L fine-tuning (θ; D 2 ) to train the model with parameters θ, where θ = [θ IE , θ PE , θ MD ] and D 2 is the supervised finetuning dataset. The loss function is as follows:\nL BCE (θ; D 2 ) = -E (x,y)∼D2 [⟨y, log(F(x, θ))⟩+ ⟨1 -y, log(1 -F(x, θ))⟩](2)\nL Dice (θ; D 2 ) = 1 -E (x,y)∼D2 [ 2⟨y, F(x, θ)⟩ ∥y∥ 1 + ∥F(x, θ)∥ 1 ](3)\nL fine-tuning (θ; D 2 ) = L BCE (θ; D 2 ) + L Dice (θ; D 2 )(4)\nThe detailed supervised fine-tuning training loop of SegVol is presented in Supplementary Algorithm 1." }, { "figure_ref": [], "heading": "Zoom-out-zoom-in mechanism", "publication_ref": [], "table_ref": [], "text": "Compared to 2D slides, volumetric data has a remarkably large number of voxels meanwhile small segmentation targets relatively. Naively down-sampling the original data will cause serious information loss and thus inferior performance. Diving the large volumetric data into small cubes and conquering each separately is computationally expensive and also suffers from information loss. To reduce the computational cost while preserving the details of the Region of Interest (ROI), we design the zoom-out-zoom-in mechanism consisting of multi-size training and zoom-outzoom-in inference.\nTo adapt various sizes of volumetric data and enable the zoom-out-zoom-in inference, we construct two kinds of training data. One is to resize the large-size CT to adapt the model's input size and obtain the training data of the zoomout view. The other one is to crop the original large-size CT into cubes with the model's input size. In this way, we obtain the training data of zoom-in view.\nDuring the zoom-out-zoom-in inference, we first zoom out and implement global inference. Given a large volumetric image, it is resized and then fed into the SegVol model. After obtaining the global predicted segmentation mask based on the user's prompt, we locate the region of interest (ROI) and zoom in, namely, crop it from the originalsize image. We apply a sliding window on the cropped region and implement more precise local inference. We adapt the input prompt for the local inference, since the original point and bbox prompts input by the user may not be applicable in the local inference region when zoom-in. Specifically, we ignore the positive or negative points of the local region. Similar to the training bbox prompt generation in Sec. 4.3, we generate the local bbox prompt by considering the global predicted mask in the local region as the pseudo mask. Finally, we fill the ROI region of the global segmentation mask with the local segmentation mask. The zoomout-zoom-in mechanism realizes both efficient and precise inference simultaneously. The detailed procedure of zoomout-zoom-in mechanism is provided in Supplementary Fig. 1 c andd." }, { "figure_ref": [], "heading": "Evaluation metrics", "publication_ref": [], "table_ref": [], "text": "During the training and internal validation process, each subset of the joint dataset is divided into 80% training data and 20% internal validation data. To ensure the absence of any data leaks, hash value is utilized to compare between the validation set and the training set. 
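Returning to the supervised objective, Eqs. (2)-(4) above amount to the following per-mask computation. This is a minimal PyTorch sketch; the smoothing constant and the mean reduction are illustrative choices.

import torch
import torch.nn.functional as F

def fine_tuning_loss(pred_logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # target is the binary ground-truth mask as a float tensor of the same shape.
    # Binary cross-entropy term, Eq. (2).
    bce = F.binary_cross_entropy_with_logits(pred_logits, target)
    # Dice term, Eq. (3), computed on sigmoid probabilities with a small smoothing constant.
    prob = torch.sigmoid(pred_logits)
    dice = 1.0 - (2.0 * (prob * target).sum() + 1e-5) / (prob.sum() + target.sum() + 1e-5)
    # Combined fine-tuning loss, Eq. (4).
    return bce + dice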
And during the external validation process, the model's parameters are all frozen.\nWe use the Dice Similarity Coefficient (Dice score) as a metric to evaluate the model, which is defined as DSC = Dice score is a commonly used metric for evaluating image segmentation tasks. It measures the degree of similarity between predicted segmentation and true segmentation, making it particularly suitable for evaluating the overlap degree of binary segmentation results." }, { "figure_ref": [], "heading": "Data and code availability", "publication_ref": [], "table_ref": [], "text": "The training, internal, and external validation datasets used in this study are publicly accessible and licensed. Please refer to their original papers for the details of these datasets. The download links are provided in Supplementary Table 2.\nOur training code, inference code, and model weights have been publicly available in https://github.com/BAAI-DCAI/SegVol. The online running demonstration is provided in https://huggingface.co/spaces/BAAI/SegVol. # Loop for possible prompt composite types of ground truth mask. # Choose one prompt composite type.\n12:\npt ′ spatial , pt ′ semantic ⇐ PromptStrategy(pt spatial , pt semantic ) # Loop for several pseudo masks." }, { "figure_ref": [], "heading": "20:", "publication_ref": [ "b32", "b14", "b23" ], "table_ref": [], "text": "for p ⇐ 1 to n pt do 21:\n# Random select a pseudo mask of this case for training. . Visualized liver and pancreas prediction results of 3DUX-NET [33], SwinUNETR [15], nnU-Net [24] and SegVol on 4 cases from internal validation set. For the modeling of pancreas, SegVol is significantly superior to other baseline methods.\nGround Truth" }, { "figure_ref": [], "heading": "3DUX-NET SwinUNETR nnU-Net SegVol", "publication_ref": [ "b32", "b14", "b23" ], "table_ref": [], "text": "Spleen Stomach Figure 5\n. Visualized spleen and stomach prediction results of 3DUX-NET [33], SwinUNETR [15], nnU-Net [24] and SegVol on 4 cases from internal validation set. For the consistency and stability of stomach modeling, SegVol is significantly better than other methods." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "Funding: This work is funded by the National Key R&D Program of China (2021ZD0111102) and NSFC-62306046." }, { "figure_ref": [], "heading": "Supplementary Materials", "publication_ref": [], "table_ref": [], "text": "Table 1. Supervised fine-tuning datasets and validation datasets." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b58", "b38", "b25", "b31", "b27", "b28", "b29", "b7", "b46", "b47", "b39", "b57", "b43", "b17", "b18", "b54", "b55", "b37", "b30", "b5", "b63", "b30" ], "table_ref": [], "text": "Anatomical Targets Category Number Trainset Volumes 3D-IRCADB [59] Liver and liver tumor 47\nAbdomenCT-1k [39] Liver, kidney, spleen, and pancreas 4\nAMOS22 [26] Abdominal organs 15\nBTCV [32] Abdominal organs 13 CHAOS [28][29][30] Abdominal organs 1\nCT-ORG [2,8,47,48] Brain, lung, bones, liver, kidney, and bladder 6\nFLARE22 [40,58] Thoracic and abdominal organs 13\nHaN-Seg [44] Organs of the head and neck 30\nKiPA22 [18,19,55,56 [38], SAM(bbox) [31], SAM-MED2D [6], SAM-MED3D [64], SAM(points) [31] and SegVol on 4 cases from external validation set. " }, { "figure_ref": [], "heading": "Gall Bladder", "publication_ref": [ "b37", "b30", "b5", "b63", "b30", "b37", "b30", "b5", "b63", "b30" ], "table_ref": [], "text": "Left Kidney Figure 7. 
Visualized gall bladder and left kidney prediction results of MedSAM [38], SAM(bbox) [31], SAM-MED2D [6], SAM-MED3D [64], SAM(points) [31] and SegVol on 4 cases from external validation set. Figure 8. Visualized prediction results of MedSAM [38], SAM(bbox) [31], SAM-MED2D [6], SAM-MED3D [64], SAM(points) [31] and SegVol on 4 cases from external validation set. " } ]
Precise image segmentation provides instructive information for clinical studies. Despite the remarkable progress achieved in medical image segmentation, there is still no 3D foundation segmentation model that can segment a wide range of anatomical categories with easy user interaction. In this paper, we propose a 3D foundation segmentation model, named SegVol, supporting universal and interactive volumetric medical image segmentation. By scaling up the training data to 90K unlabeled Computed Tomography (CT) volumes and 6K labeled CT volumes, this foundation model supports the segmentation of over 200 anatomical categories using semantic and spatial prompts. Extensive experiments on 10 internal validation tasks and 18 external validation tasks verify that SegVol outperforms the state of the art by a large margin. Through its capacity to provide precise volumetric segmentation across various anatomical categories, SegVol has the potential to accelerate advancements in medical imaging diagnosis and facilitate treatment optimization. The model and code are publicly available at: https://github.com/BAAI-DCAI/SegVol.
SegVol: Universal and Interactive Volumetric Medical Image Segmentation
[ { "figure_caption": "1 .Figure 1 .11Figure 1. Overview of the model architecture and representative samples in the joint dataset. a. SegVol can model the 3D anatomical structures from volumetric inputs with easy user interactions including bounding box, point and text prompts. b. The joint dataset encompasses various anatomical structures in major regions of the human body. Several volume examples are demonstrated as 2D slices and 3D shapes in the images respectively.", "figure_data": "", "figure_id": "fig_0", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Quantitative results and visualization of internal validation experiments. a. Violin plots for comparing experiment results of SegVol and task-specific methods. The vertical axis is the Dice score. b. The performance of SegVol improves as the training dataset scales up. c. The comparison of the average Dice score of SegVol and nnU-Net [24] across 3 internal lesion tasks. d. Visualization results of SegVol and nnU-Net across 3 internal lesion tasks.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Quantitative results of external validation experiments. a. Violin plots for comparison experiment results of SegVol and interactive methods. The vertical axis represents the Dice score. b. The bar chart illustrates the consistency of SegVol's performance on the external validation set across different prompt types.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Visualized prediction results of SegVol and other interactive methods on four external categories. In each case, the upper row is the axial plane of CTs and the lower row is the sagittal plane of CTs. Visualization results show the accuracy and robustness of SegVol's modeling of various anatomical structures.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Bbox+text 83. 02 aFigure 5 .025Figure 5. Analysis of the relationship between semantic-prompt and spatial-prompt. a. The quantitative experimental results on 19 internal tasks demonstrate that jointly using semantic and spatial prompts can achieve better performances. b. The four cases demonstrate that semantic prompts can clarify the ambiguity of spatial prompts and avoid multi-plausible outputs. Each image shows the segmentation result of SegVol using the spatial prompt, i.e. point or bounding box, and semantic prompt, i.e. the caption below the image. c. We reflect the spatial prompts to semantic categories (prompts). Each image shows the spatial prompt and the mask prediction. The bar charts rank the top 8 semantic categories with the highest probabilities.", "figure_data": "", "figure_id": "fig_4", "figure_label": "025", "figure_type": "figure" }, { "figure_caption": "|X ∩ Y | is the cardinality of the intersection of the predicted segmentation sets X and the ground truth sets Y . |X| and |Y | are the cardinalities of sets X and Y respectively.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "10 :10for p ⇐ 1 to n pt do 11:", "figure_data": "", "figure_id": "fig_6", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "23 :Figure 3 .Figure 42334Figure3. 
Visualized aorta and left kidney prediction results of 3DUX-NET[33], SwinUNETR[15], nnU-Net[24] and SegVol on 4 cases from internal validation set. For the integrality of aorta and left kidney structure modeling, SegVol significantly outperforms 3DUX-NET and SwinUNETR and is comparable to nnU-Net.", "figure_data": "", "figure_id": "fig_7", "figure_label": "2334", "figure_type": "figure" }, { "figure_caption": "Internal validation results of 3DUX-NET, SwinUNETR, nnU-Net and SegVol on the test set of supervised fine-tuning datasets in term of Dice score.", "figure_data": "Category3DUX-NET [33]SwinUNETR [15]nnU-Net [24]SegVolAorta0.9122 (0.8852, 0.9292) 0.8870 (0.8619, 0.8964) 0.9155 (0.8790, 0.9431) 0.9179 (0.8850, 0.9256)Colon cancer0.0773 (0.0000, 0.2931) 0.0270 (0.0003, 0.2908) 0.3610 (0.0000, 0.6961) 0.7582 (0.6749, 0.7903)Esophagus0.7136 (0.6617, 0.7718) 0.6063 (0.5508, 0.6353) 0.7407 (0.6563, 0.8313) 0.7373 (0.7205, 0.8062)Gallbladder0.4916 (0.1875, 0.6926) 0.2714 (0.1421, 0.5671) 0.8555 (0.5267, 0.8633) 0.8560 (0.7036, 0.8968)Inferior vena cava0.7673 (0.6740, 0.8465) 0.7368 (0.6376, 0.8376) 0.8138 (0.7580, 0.8487) 0.8267 (0.8044, 0.8418)Left adrenal gland0.5788 (0.3238, 0.6038) 0.5658 (0.4380, 0.6147) 0.7915 (0.6888, 0.8231) 0.7643 (0.6525, 0.7880)Left kidney0.9072 (0.8692, 0.9438) 0.9070 (0.8829, 0.9203) 0.9395 (0.9050, 0.9518) 0.9296 (0.9228, 0.9321)Liver0.9316 (0.9074, 0.9462) 0.9374 (0.9110, 0.9531) 0.9276 (0.8614, 0.9597) 0.9560 (0.9437, 0.9685)Liver tumor0.7131 (0.5159, 0.8457) 0.6479 (0.2756, 0.7853) 0.7495 (0.6243, 0.8228) 0.7801 (0.7558, 0.8440)Lung tumor0.5628 (0.4375, 0.7021) 0.4043 (0.2159, 0.6910) 0.7294 (0.4814, 0.8210) 0.7250 (0.6026, 0.8154)Pancreas0.5820 (0.4748, 0.7069) 0.6352 (0.5586, 0.6894) 0.8248 (0.8169, 0.8665) 0.8464 (0.8248, 0.8578)Portal/splenic vein0.7207 (0.6211, 0.7588) 0.6656 (0.5888, 0.6982) 0.7964 (0.7524, 0.8582) 0.7188 (0.7128, 0.7569)Right adrenal gland 0.5785 (0.5099, 0.6302) 0.5026 (0.2730, 0.5963) 0.7137 (0.7067, 0.7326) 0.6579 (0.6372, 0.7008)Right kidney0.9177 (0.8877, 0.9417) 0.9065 (0.9011, 0.9289) 0.9432 (0.9207, 0.9504) 0.9227 (0.9157, 0.9295)Spleen0.8913 (0.7726, 0.9492) 0.9147 (0.8255, 0.9456) 0.9681 (0.9596, 0.9766) 0.9642 (0.9558, 0.9664)Stomach0.7627 (0.6655, 0.8424) 0.7147 (0.6470, 0.8231) 0.8374 (0.6339, 0.9391) 0.9177 (0.9035, 0.9260)", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Generalization experiment results of SegVol on the MRI set of CHAOS[28][29][30] dataset in term of Dice score.", "figure_data": "MethodLiverSpleenLeft KidneyRight KidneySegVol(5 Points) 0.8091 (0.7376, 0.8554) 0.7496 (0.6990, 0.7872) 0.7216 (0.6125, 0.7869) 0.7174 (0.6052, 0.8090)SegVol(Bbox)0.8570 (0.8319, 0.8819) 0.8009 (0.7702, 0.8256) 0.8004 (0.7265, 0.8452) 0.8146 (0.7593, 0.8620)", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "External validation results of SAM(Point), SAM(Bbox), SAM-MED2D, SAM-MED3D, MedSAM and SegVol on the external validation datasets in term of Dice score. .0471, 0.1237) 0.5579 (0.4911, 0.6111) 0.5548 (0.4862, 0.6382) 0.5526 (0.3287, 0.6786) 0.6561 (0.5857, 0.7143) 0.7265 (0.6722, 0.7776) Note: Dice scores are displayed as Median values (First quartile, Third quartile). SegVol training loop Input: SegVol model, training image x, ground truth mask set Y x = {y i } n i=1 , category set S x = {s i } n i=1 , pseudo mask set Z x = {z i } m i=1 Output: SegVol Model parameters 1: n pt ⇐ 6 2: α ⇐ 0.1 3: # Loop for each category of this case. 
4: for i ⇐ 1 to n do", "figure_data": "Category SAM(Point) [31] SAM(Bbox) [31] SAM-MED2D [6] SAM-MED3D [64] MedSam [38] SegVolAorta 0.7267 (0.5213, 0.9350) 0.4362 (0.3491, 0.5646) 0.8704 (0.8260, 0.9141) 0.8102 (0.6680, 0.8692) 0.3387 (0.2778, 0.4478) 0.9273 (0.9050, 0.9424)Bladder 0.4162 (0.2862, 0.5099) 0.6281 (0.3093, 0.7565) 0.8417 (0.7484, 0.9024) 0.4338 (0.2445, 0.7198) 0.6799 (0.4275, 0.7992) 0.9120 (0.8338, 0.9446)Duodenum 0.1554 (0.1039, 0.2125) 0.3192 (0.2559, 0.3886) 0.5066 (0.4170, 0.5725) 0.3820 (0.2427, 0.4981) 0.3066 (0.2635, 0.3661) 0.7402 (0.6594, 0.7909)Esophagus 0.2917 (0.1019, 0.6169) 0.3541 (0.2167, 0.5540) 0.5500 (0.4131, 0.6599) 0.5174 (0.3678, 0.6792) 0.3610 (0.2560, 0.5402) 0.7460 (0.6376, 0.8115)Gallbladder 0.2831 (0.1756, 0.5198) 0.6161 (0.4809, 0.7200) 0.7999 (0.7097, 0.8725) 0.5643 (0.3615, 0.7377) 0.6609 (0.5446, 0.7245) 0.8763 (0.8020, 0.9082)Left adrenal gland 0.0555 (0.0276, 0.2347) 0.4222 (0.3417, 0.4995) 0.5068 (0.3225, 0.6318) 0.4584 (0.3104, 0.6267) 0.3766 (0.3321, 0.4541) 0.7295 (0.6519, 0.7916)Left kidney 0.8405 (0.6844, 0.9464) 0.8274 (0.7733, 0.8631) 0.9325 (0.8899, 0.9467) 0.8723 (0.7705, 0.9286) 0.7909 (0.7409, 0.8139) 0.9489 (0.9389, 0.9585)Liver 0.7477 (0.6695, 0.8085) 0.5124 (0.4467, 0.5801) 0.6904 (0.5401, 0.8016) 0.8801 (0.8204, 0.9321) 0.6137 (0.5783, 0.6479) 0.9641 (0.9547, 0.9701)Pancreas 0.2127 (0.1558, 0.3109) 0.3392 (0.2572, 0.4243) 0.5656 (0.5155, 0.6413) 0.5391 (0.3304, 0.7333) 0.3217 (0.2756, 0.4020) 0.8295 (0.7734, 0.8711)Postcava 0.2042 (0.1402, 0.3478) 0.5251 (0.4349, 0.5925) 0.4436 (0.3029, 0.6463) 0.6683 (0.5353, 0.7672) 0.5211 (0.4598, 0.6180) 0.8384 (0.7909, 0.8684)Prostate uterus 0.2344 (0.1655, 0.3081) 0.6986 (0.5430, 0.7522) 0.7518 (0.6567, 0.8261) 0.6231 (0.5330, 0.7364) 0.7739 (0.6685, 0.8271) 0.8557 (0.8255, 0.8901)Right adrenal gland 0.0452 (0.0268, 0.1082) 0.3642 (0.2766, 0.4491) 0.1681 (0.0873, 0.3560) 0.3708 (0.2454, 0.5182) 0.3855 (0.3103, 0.4710) 0.6994 (0.6138, 0.7661)Right kidney 0.8459 (0.5935, 0.9497) 0.8215 (0.7528, 0.8577) 0.9077 (0.8685, 0.9419) 0.8632 (0.7755, 0.9258) 0.7851 (0.7506, 0.8227) 0.9505 (0.9426, 0.9585)Spleen 0.5936 (0.4686, 0.7846) 0.6536 (0.5934, 0.7697) 0.9267 (0.8821, 0.9483) 0.8591 (0.7552, 0.9297) 0.7038 (0.6609, 0.7766) 0.9589 (0.9465, 0.9677)Stomach 0.4229 (0.3437, 0.5479) 0.3883 (0.3051, 0.4713) 0.5399 (0.4555, 0.6267) 0.4576 (0.2540, 0.6447) 0.4378 (0.3503, 0.5379) 0.9123 (0.8677, 0.9369)ULS23(DeepLesion3D) 0.3686 (0.0855, 0.7680) 0.7473 (0.6817, 0.8063) 0.3258 (0.1325, 0.5707) 0.2386 (0.1045, 0.4372) 0.7680 (0.7103, 0.8160) 0.7065 (0.6247, 0.7782)ULS23(Bone) 0.4461 (0.2349, 0.6676) 0.6671 (0.5854, 0.7443) 0.1947 (0.0898, 0.3969) 0.4447 (0.1481, 0.7026) 0.6896 (0.6128, 0.7530) 0.6920 (0.6097, 0.7702)ULS23(Pancreas) 0.0675 (0", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "pred gt ⇐ model.Decoder(f img , f prompt , f text ) 16:l gt ⇐ l gt + DiceLoss(pred gt , y i ) + BCELoss(pred gt , y i )", "figure_data": "13: 14:f text ⇐ model.TextEncoder(pt ′ semantic ) f prompt ⇐ model.PromptEncoder(pt ′ spatial , f text )15:17:end for18:l pseudo ⇐ 019:", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" } ]
Yuxin Du; Fan Bai; Tiejun Huang; Bo Zhao
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Quantification of uncertainties in biomedical image quantification challenge", "year": "2021-08-18" }, { "authors": "Patrick Bilic; Patrick Christ; Bran Hongwei; Eugene Li; Avi Vorontsov; Georgios Ben-Cohen; Adi Kaissis; Colin Szeskin; Jacobs; Efrain Humpire Gabriel; Gabriel Mamani; Chartrand", "journal": "Medical Image Analysis", "ref_id": "b1", "title": "The liver tumor segmentation benchmark (lits)", "year": "2023" }, { "authors": "", "journal": "", "ref_id": "b2", "title": "Image by brgfx on freepik", "year": "" }, { "authors": "Chen Chen; Chen Qin; Huaqi Qiu; Giacomo Tarroni; Jinming Duan; Wenjia Bai; Daniel Rueckert", "journal": "Frontiers in Cardiovascular Medicine", "ref_id": "b3", "title": "Deep learning for cardiac image segmentation: a review", "year": "2020" }, { "authors": "Liang-Chieh Chen; George Papandreou; Iasonas Kokkinos; Kevin Murphy; Alan L Yuille", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b4", "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "year": "2017" }, { "authors": "Junlong Cheng; Jin Ye; Zhongying Deng; Jianpin Chen; Tianbin Li; Haoyu Wang; Yanzhou Su; Ziyan Huang; Jilong Chen; Lei Jiang", "journal": "", "ref_id": "b5", "title": "Sam-med2d", "year": "2023" }, { "authors": "Ferdinand Patrick; Florian Christ; Felix Ettlinger; Mohamed Grün; A Ezzeldin; Jana Elshaera; Sebastian Lipkova; Freba Schlecht; Sunil Ahmaddy; Marc Tatavarty; Patrick Bickel; Bilic", "journal": "", "ref_id": "b6", "title": "Automatic liver and tumor segmentation of ct and mri volumes using cascaded fully convolutional neural networks", "year": "2017" }, { "authors": "Kenneth Clark; Bruce Vendt; Kirk Smith; John Freymann; Justin Kirby; Paul Koppel; Stephen Moore; Stanley Phillips; David Maffitt; Michael Pringle", "journal": "Journal of digital imaging", "ref_id": "b7", "title": "The cancer imaging archive (tcia): maintaining and operating a public information repository", "year": "2013" }, { "authors": "Max De; Grauw ", "journal": "", "ref_id": "b8", "title": "Universal lesion segmentation challenge 23", "year": "" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "", "ref_id": "b9", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2021" }, { "authors": "F Pedro; Daniel P Felzenszwalb; Huttenlocher", "journal": "International journal of computer vision", "ref_id": "b10", "title": "Efficient graph-based image segmentation", "year": "2004" }, { "authors": "Vincenzo Ferrari; Marina Carbone; Carla Cappelli; Luigi Boni; Franca Melfi; Mauro Ferrari; Franco Mosca; Andrea Pietrabissa", "journal": "Surgical endoscopy", "ref_id": "b11", "title": "Value of multidetector computed tomography image segmentation for preoperative planning in general surgery", "year": "2012" }, { "authors": "Zaiwang Gu; Jun Cheng; Huazhu Fu; Kang Zhou; Huaying Hao; Yitian Zhao; Tianyang Zhang; Shenghua Gao; Jiang Liu", "journal": "IEEE transactions on medical imaging", "ref_id": "b12", "title": "Ce-net: Context encoder network for 2d medical image segmentation", "year": "2019" }, { "authors": "Maxime Ben Hamida; Jonathan Devanne; Caroline Weber; Valentin Truntzer; Derangère; Germain Franc ¸ois Ghiringhelli; Cédric 
Forestier; Wemmert", "journal": "Computers in Biology and Medicine", "ref_id": "b13", "title": "Deep learning for colon cancer histopathological images analysis", "year": "2021" }, { "authors": "Ali Hatamizadeh; Vishwesh Nath; Yucheng Tang; Dong Yang; Holger Roth; Daguang Xu", "journal": "", "ref_id": "b14", "title": "Swin unetr: Swin transformers for semantic segmentation of brain tumors in mri images", "year": "2022" }, { "authors": "Ali Hatamizadeh; Yucheng Tang; Vishwesh Nath; Dong Yang; Andriy Myronenko; Bennett Landman; Daguang Holger R Roth; Xu", "journal": "", "ref_id": "b15", "title": "Unetr: Transformers for 3d medical image segmentation", "year": "2022" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b16", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Yuting He; Guanyu Yang; Jian Yang; Yang Chen; Youyong Kong; Jiasong Wu; Lijun Tang; Xiaomei Zhu; Jean-Louis Dillenseger; Pengfei Shao", "journal": "Medical image analysis", "ref_id": "b17", "title": "Dense biased networks with deep priori anatomy and hard region adaptation: Semisupervised learning for fine renal artery segmentation", "year": "2020" }, { "authors": "Yuting He; Guanyu Yang; Jian Yang; Rongjun Ge; Youyong Kong; Xiaomei Zhu; Shaobo Zhang; Pengfei Shao; Huazhong Shu; Jean-Louis Dillenseger", "journal": "Medical image analysis", "ref_id": "b18", "title": "Meta grayscale adaptive network for 3d integrated renal structures segmentation", "year": "2021" }, { "authors": "Tobias Heimann; Bram Van Ginneken; Martin A Styner; Yulia Arzhaeva; Christian Volker Aurich; Andreas Bauer; Christoph Beck; Reinhard Becker; György Beichel; Bekes", "journal": "IEEE transactions on medical imaging", "ref_id": "b19", "title": "Comparison and evaluation of methods for liver segmentation from ct datasets", "year": "2009" }, { "authors": "Nicholas Heller; Fabian Isensee; Klaus H Maier-Hein; Xiaoshuai Hou; Chunmei Xie; Fengyi Li; Yang Nan; Guangrui Mu; Zhiyong Lin; Miofei Han", "journal": "Medical Image Analysis", "ref_id": "b20", "title": "The state of the art in kidney and kidney tumor segmentation in contrast-enhanced ct imaging: Results of the kits19 challenge", "year": "2020" }, { "authors": "Nicholas Heller; Fabian Isensee; Dasha Trofimova; Resha Tejpaul; Zhongchen Zhao; Huai Chen; Lisheng Wang; Alex Golts; Daniel Khapun; Daniel Shats; Yoel Shoshan; Flora Gilboa-Solomon; Yasmeen George; Xi Yang; Jianpeng Zhang; Jing Zhang; Yong Xia; Mengran Wu; Zhiyang Liu; Ed Walczak; Sean Mcsweeney; Ranveer Vasdev; Chris Hornung; Rafat Solaiman; Jamee Schoephoerster; Bailey Abernathy; David Wu; Safa Abdulkadir; Ben Byun; Justice Spriggs; Griffin Struyk; Alexandra Austin; Ben Simpson; Michael Hagstrom; Sierra Virnig; John French; Nitin Venkatesh; Sarah Chan; Keenan Moore; Anna Jacobsen; Susan Austin; Mark Austin; Subodh Regmi; Nikolaos Papanikolopoulos; Christopher Weight", "journal": "", "ref_id": "b21", "title": "The kits21 challenge: Automatic segmentation of kidneys, renal tumors, and renal cysts in corticomedullary-phase ct", "year": "2023" }, { "authors": "Tianrui Hui; Si Liu; Shaofei Huang; Guanbin Li; Sansi Yu; Faxi Zhang; Jizhong Han", "journal": "Springer", "ref_id": "b22", "title": "Linguistic structure guided context modeling for referring image segmentation", "year": "2020" }, { "authors": "Fabian Isensee; Paul F Jaeger; Simon Aa Kohl; Jens Petersen; Klaus H Maier-Hein", "journal": "Nature methods", "ref_id": "b23", "title": "nnu-net: a self-configuring 
method for deep learning-based biomedical image segmentation", "year": "2021" }, { "authors": "Debesh Jha; Dag Michael A Riegler; Pål Johansen; Håvard D Halvorsen; Johansen", "journal": "IEEE", "ref_id": "b24", "title": "Doubleu-net: A deep convolutional neural network for medical image segmentation", "year": "2020" }, { "authors": "Yuanfeng Ji; Haotian Bai; Jie Yang; Chongjian Ge; Ye Zhu; Ruimao Zhang; Zhen Li; Lingyan Zhang; Wanling Ma; Xiang Wan", "journal": "", "ref_id": "b25", "title": "Amos: A large-scale abdominal multi-organ benchmark for versatile medical image segmentation", "year": "2022" }, { "authors": "Huiyan Jiang; Zhaoshuo Diao; Yu-Dong Yao", "journal": "The Journal of Supercomputing", "ref_id": "b26", "title": "Deep learning techniques for tumor segmentation: a review", "year": "2022" }, { "authors": "A Emre Kavur; N Sinem Gezer; Mustafa Bar; Sinem Aslan; Pierre-Henri Conze; Vladimir Groza; Duy Duc; Soumick Pham; Philipp Chatterjee; Ernst; Bora Sava Zkan; Dmitry Baydar; Shuo Lachinov; Josef Han; Fabian Pauli; Matthias Isensee; Rachana Perkonigg; Ronnie Sathish; Debdoot Rajan; Gurbandurdy Sheet; Oliver Dovletov; Andreas Speck; Klaus H Nrnberger; Gzde Maier-Hein; Ouz Bozda Akar; M Alper Dicle; Selver", "journal": "Medical Image Analysis", "ref_id": "b27", "title": "Chaos challenge -combined (ct-mr) healthy abdominal organ segmentation", "year": "2021-04" }, { "authors": "A Emre Kavur; Naciye Sinem Gezer; Mustafa Bar; Yusuf Ahin; Bora Sava Zkan; Ula Baydar; Yksel; Klker; Gzde Olut; Ouz Bozda Akar; M Alper Dicle; Selver", "journal": "Diagnostic and Interventional Radiology", "ref_id": "b28", "title": "Comparison of semi-automatic and deep learning based automatic methods for liver segmentation in living liver transplant donors", "year": "2020-01" }, { "authors": "M Alper Ali Emre Kavur; Ouz Selver; Mustafa Dicle; N Sinem Bar; Gezer", "journal": "", "ref_id": "b29", "title": "Chaos -combined (ct-mr) healthy abdominal organ segmentation challenge data", "year": "2019-04" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo", "journal": "", "ref_id": "b30", "title": "Segment anything", "year": "2023" }, { "authors": "Zhoubing Bennett Landman; J Xu; Martin Igelsias; T Styner; Arno Langerak; Klein", "journal": "", "ref_id": "b31", "title": "Miccai multi-atlas labeling beyond the cranial vault-workshop and challenge", "year": "2015" }, { "authors": "Shunxing Ho Hin Lee; Yuankai Bao; Bennett A Huo; Landman", "journal": "", "ref_id": "b32", "title": "3d ux-net: A large kernel volumetric convnet modernizing hierarchical transformer for medical image segmentation", "year": "2023" }, { "authors": "Hans Liebl; David Schinz; Anjany Sekuboyina; Luca Malagutti; Maximilian T Löffler; Amirhossein Bayat; Malek El Husseini; Giles Tetteh; Katharina Grau; Eva Niederreiter", "journal": "Scientific data", "ref_id": "b33", "title": "A computed tomography vertebral segmentation dataset with anatomical variations and multi-vendor scanner data", "year": "2021" }, { "authors": "Jie Liu; Yixiao Zhang; Jie-Neng Chen; Junfei Xiao; Yongyi Lu; Bennett A Landman; Yixuan Yuan; Alan Yuille; Yucheng Tang; Zongwei Zhou", "journal": "", "ref_id": "b34", "title": "Clip-driven universal model for organ segmentation and tumor detection", "year": "2023" }, { "authors": "Maximilian T Löffler; Anjany Sekuboyina; Alina Jacob; Anna-Lena Grau; Andreas Scharr; Malek El Husseini; Mareike Kallweit; Claus Zimmer; 
Thomas Baum; Jan S Kirschke", "journal": "Radiology: Artificial Intelligence", "ref_id": "b35", "title": "A vertebral segmentation dataset with fracture grading", "year": "2020" }, { "authors": "Xiangde Luo; Wenjun Liao; Jianghong Xiao; Jieneng Chen; Tao Song; Xiaofan Zhang; Kang Li; Dimitris N Metaxas; Guotai Wang; Shaoting Zhang", "journal": "Medical Image Analysis", "ref_id": "b36", "title": "WORD: A large scale dataset, benchmark and clinical applicable study for abdominal organ segmentation from ct image", "year": "2022" }, { "authors": "Jun Ma; Yuting He; Feifei Li; Lin Han; Chenyu You; Bo Wang", "journal": "", "ref_id": "b37", "title": "Segment anything in medical images", "year": "2023" }, { "authors": "Jun Ma; Yao Zhang; Song Gu; Cheng Zhu; Cheng Ge; Yichi Zhang; Xingle An; Congcong Wang; Qiyuan Wang; Xin Liu; Shucheng Cao; Qi Zhang; Shangqing Liu; Yunpeng Wang; Yuhui Li; Jian He; Xiaoping Yang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b38", "title": "Abdomenct-1k: Is abdominal organ segmentation a solved problem", "year": "2022" }, { "authors": "Jun Ma; Yao Zhang; Song Gu; Cheng Zhu; Cheng Ge; Yichi Zhang; Xingle An; Congcong Wang; Qiyuan Wang; Xin Liu", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b39", "title": "Abdomenct-1k: Is abdominal organ segmentation a solved problem", "year": "2021" }, { "authors": "Jordi Minnema; Anne Ernst; Maureen Van Eijnatten; Ruben Pauwels; Tymour Forouzanfar; Kees Joost Batenburg; Jan Wolff", "journal": "Dentomaxillofacial Radiology", "ref_id": "b40", "title": "A review on the application of deep learning for ct reconstruction, bone segmentation and surgical planning in oral and maxillofacial surgery", "year": "2022" }, { "authors": "Mehreen Mubashar; Hazrat Ali; Christer Grönlund; Shoaib Azmat", "journal": "Neural Computing and Applications", "ref_id": "b41", "title": "R2u++: a multiscale recurrent residual u-net with dense skip connections for medical image segmentation", "year": "2022" }, { "authors": "Dervis Ishak Pacal; Alper Karaboga; Bahriye Basturk; Ufuk Akay; Nalbantoglu", "journal": "Computers in Biology and Medicine", "ref_id": "b42", "title": "A comprehensive review of deep learning in colon cancer", "year": "2020" }, { "authors": "Gašper Podobnik; Primož Strojan; Primož Peterlin; Bulat Ibragimov; Tomaž Vrtovec", "journal": "Medical physics", "ref_id": "b43", "title": "Han-seg: The head and neck organ-at-risk ct and mr segmentation dataset", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b44", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Chaymae Hiba Ramadan; Hamid Lachqar; Tairi", "journal": "Computational visual media", "ref_id": "b45", "title": "A survey of recent interactive image segmentation methods", "year": "2020" }, { "authors": "Blaine Rister; Kaushik Shivakumar; Tomomi Nobashi; Daniel L Rubin", "journal": "The Cancer Imaging Archive", "ref_id": "b46", "title": "Ct-org: Ct volumes with multiple organ segmentations", "year": "2019" }, { "authors": "Blaine Rister; Darvin Yi; Kaushik Shivakumar; Tomomi Nobashi; Daniel L Rubin", "journal": "", "ref_id": "b47", "title": "Ct organ segmentation using gpu data augmentation, unsupervised labels and iou loss", "year": "2018" }, { "authors": "Amal Holger R Roth; E 
Farag; Le Turkbey; Jiamin Lu; Ronald M Liu; Summers", "journal": "IEEE Transactions on Image Processing", "ref_id": "b48", "title": "Data from pancreas-ct. the cancer imaging archive", "year": "2016" }, { "authors": "Le Holger R Roth; Amal Lu; Hoo-Chang Farag; Jiamin Shin; Evrim B Liu; Ronald M Turkbey; Summers", "journal": "Springer", "ref_id": "b49", "title": "Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation", "year": "2015" }, { "authors": "Sidra Sajid; Saddam Hussain; Amna Sarwar", "journal": "Arabian Journal for Science and Engineering", "ref_id": "b50", "title": "Brain tumor detection and segmentation in mr images using deep learning", "year": "2019" }, { "authors": "Gihan Samarasinghe; Michael Jameson; Shalini Vinod; Matthew Field; Jason Dowling; Arcot Sowmya; Lois Holloway", "journal": "Journal of Medical Imaging and Radiation Oncology", "ref_id": "b51", "title": "Deep learning for segmentation in radiation therapy planning: a review", "year": "2021" }, { "authors": "Anjany Sekuboyina; E Malek; Amirhossein Husseini; Maximilian Bayat; Hans Löffler; Hongwei Liebl; Giles Li; Jan Tetteh; Christian Kukačka; Darko Payer; Štern", "journal": "Medical image analysis", "ref_id": "b52", "title": "Verse: a vertebrae labelling and segmentation benchmark for multidetector ct images", "year": "2021" }, { "authors": "Arnaud Arindra; Adiyoso Setio; Alberto Traverso; Thomas De Bel; Moira Sn Berens; Cas Van Den; Piergiorgio Bogaard; Hao Cerello; Qi Chen; Maria Evelina Dou; Bram Fantacci; Geurts", "journal": "Medical image analysis", "ref_id": "b53", "title": "Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: the luna16 challenge", "year": "2017" }, { "authors": "Pengfei Shao; Chao Qin; Changjun Yin; Xiaoxin Meng; Xiaobing Ju; Jie Li; Qiang Lv; Wei Zhang; Zhengquan Xu", "journal": "European urology", "ref_id": "b54", "title": "Laparoscopic partial nephrectomy with segmental renal artery clamping: technique and clinical outcomes", "year": "2011" }, { "authors": "Pengfei Shao; Lijun Tang; Pu Li; Yi Xu; Chao Qin; Qiang Cao; Xiaobing Ju; Xiaoxin Meng; Qiang Lv; Jie Li", "journal": "European urology", "ref_id": "b55", "title": "Precise segmental renal artery clamping under the guidance of dual-source computed tomography angiography during laparoscopic partial nephrectomy", "year": "2012" }, { "authors": "Nahian Siddique; Sidike Paheding; Colin P Elkin; Vijay Devabhaktuni", "journal": "Ieee Access", "ref_id": "b56", "title": "U-net and its variants for medical image segmentation: A review of theory and applications", "year": "2021" }, { "authors": "Michela Amber L Simpson; Spyridon Antonelli; Michel Bakas; Keyvan Bilello; Bram Farahani; Annette Van Ginneken; Bennett A Kopp-Schneider; Geert Landman; Bjoern Litjens; Menze", "journal": "", "ref_id": "b57", "title": "A large annotated medical image dataset for the development and evaluation of segmentation algorithms", "year": "2019" }, { "authors": "Luc Soler; Alexandre Hostettler; Vincent Agnus; Arnaud Charnoz; Jean-Baptiste Fasquel; Johan Moreau; Anne-Blandine Osswald; Mourad Bouhadjar; Jacques Marescaux", "journal": "", "ref_id": "b58", "title": "3d image reconstruction for comparison of algorithm database", "year": "2010" }, { "authors": "Matthew Tancik; P Pratul; Ben Srinivasan; Sara Mildenhall; Nithin Fridovich-Keil; Utkarsh Raghavan; Ravi Singhal; Jonathan T Ramamoorthi; Ren Barron; Ng", "journal": "", "ref_id": "b59", "title": "Fourier 
features let networks learn high frequency functions in low dimensional domains", "year": "2020" }, { "authors": "Yucheng Tang; Dong Yang; Wenqi Li; Bennett Holger R Roth; Daguang Landman; Vishwesh Xu; Ali Nath; Hatamizadeh", "journal": "", "ref_id": "b60", "title": "Self-supervised pre-training of swin transformers for 3d medical image analysis", "year": "2022" }, { "authors": "Stefano Trebeschi; Zuhir Bodalal; Teresa M Tareco Thierry N Boellaard; Silvia G Bucho; Ieva Drago; Adriana M Kurilova; Andrea Calin-Vainak; Mirte Delli Pizzi; Karlijn Muller; Hummelink", "journal": "Frontiers in Oncology", "ref_id": "b61", "title": "Prognostic value of deep learningmediated treatment monitoring in lung cancer patients receiving immunotherapy", "year": "2021" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b62", "title": "Attention is all you need", "year": "2023" }, { "authors": "Haoyu Wang; Sizheng Guo; Jin Ye; Zhongying Deng; Junlong Cheng; Tianbin Li; Jianpin Chen; Yanzhou Su; Ziyan Huang; Yiqing Shen; Bin Fu; Shaoting Zhang; Junjun He; Yu Qiao", "journal": "", "ref_id": "b63", "title": "Sam-med3d", "year": "2023" }, { "authors": "M Wasserthal; Meyer; J Breit; S Cyriac; M Yang; Segeroth", "journal": "", "ref_id": "b64", "title": "Totalsegmentator: Robust segmentation of 104 anatomical structures in ct images", "year": "2022" }, { "authors": "Zhenda Xie; Zheng Zhang; Yue Cao; Yutong Lin; Jianmin Bao; Zhuliang Yao; Qi Dai; Han Hu", "journal": "", "ref_id": "b65", "title": "Simmim: A simple framework for masked image modeling", "year": "2022" }, { "authors": "Xiao-Xia Yin; Le Sun; Yuhan Fu; Ruiliang Lu; Yanchun Zhang", "journal": "Journal of Healthcare Engineering", "ref_id": "b66", "title": "U-net-based medical image segmentation", "year": "2022" }, { "authors": "Habib Zaidi; Issam El Naqa", "journal": "European journal of nuclear medicine and molecular imaging", "ref_id": "b67", "title": "Pet-guided delineation of radiation therapy treatment volumes: a survey of image segmentation techniques", "year": "2010" }, { "authors": "Jiawei Zhang; Yuzhen Jin; Jilan Xu; Xiaowei Xu; Yanchun Zhang", "journal": "", "ref_id": "b68", "title": "Mdu-net: Multi-scale densely connected u-net for biomedical image segmentation", "year": "2018" }, { "authors": "Ziang Zhang; Chengdong Wu; Sonya Coleman; Dermot Kerr", "journal": "Computer methods and programs in biomedicine", "ref_id": "b69", "title": "Dense-inception u-net for medical image segmentation", "year": "2020" }, { "authors": "Hong-Yu Zhou; Jiansen Guo; Yinghao Zhang; Lequan Yu; Liansheng Wang; Yizhou Yu", "journal": "", "ref_id": "b70", "title": "nnformer: Interleaved transformer for volumetric segmentation", "year": "2022" }, { "authors": "Zongwei Zhou; Md Mahfuzur Rahman Siddiquee; Nima Tajbakhsh; Jianming Liang", "journal": "Springer", "ref_id": "b71", "title": "Unet++: A nested u-net architecture for medical image segmentation", "year": "2018-09-20" }, { "authors": "", "journal": "", "ref_id": "b72", "title": "Data availability for supervised fine-tuning datasets and validation datasets", "year": "" } ]
[ { "formula_coordinates": [ 10, 56.69, 568.62, 125.71, 11.38 ], "formula_id": "formula_0", "formula_text": "x patch ∈ R N ×(C×P D ×P H ×P W )" }, { "formula_coordinates": [ 10, 328.04, 218.04, 193.88, 9.84 ], "formula_id": "formula_1", "formula_text": "z prompt = F PE (p, b, s, θ PE ) = [z point , z bbox , z text ]." }, { "formula_coordinates": [ 11, 86.97, 261.16, 207.78, 23.38 ], "formula_id": "formula_2", "formula_text": "L pre-training (θ IE ; D 1 ) = 1 Ω(x M ) ||y M -x M || 1 ,(1)" }, { "formula_coordinates": [ 11, 70.71, 423.75, 224.04, 23.71 ], "formula_id": "formula_3", "formula_text": "L BCE (θ; D 2 ) = -E (x,y)∼D2 [⟨y, log(F(x, θ))⟩+ ⟨1 -y, log(1 -F(x, θ))⟩](2)" }, { "formula_coordinates": [ 11, 64.2, 471.91, 230.55, 23.25 ], "formula_id": "formula_4", "formula_text": "L Dice (θ; D 2 ) = 1 -E (x,y)∼D2 [ 2⟨y, F(x, θ)⟩ ∥y∥ 1 + ∥F(x, θ)∥ 1 ](3)" }, { "formula_coordinates": [ 11, 72.09, 522.76, 222.66, 9.81 ], "formula_id": "formula_5", "formula_text": "L fine-tuning (θ; D 2 ) = L BCE (θ; D 2 ) + L Dice (θ; D 2 )(4)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b12", "b11", "b2", "b13", "b18", "b1", "b9", "b17", "b10", "b8", "b7" ], "table_ref": [], "text": "An important aspect of data-driven healthcare is making sense of which symptoms will occur when. With the global ageing population comes an increased prevalence of age-related chronic conditions where symptoms tend to accumulate over time -the number of people living with dementia globally is expected to exceed 150 million by 2050 (Nichols et al., 2022). Patients' symptoms can be highly heterogeneous, which is where advanced modelling and machine learning methods can be used to improve health outcomes. Heterogeneous presentation is particu-larly pronounced in the rarer dementias which are associated with unusual symptoms (Marshall et al., 2018), and a higher caregiver burden (Brotherhood et al., 2020). Precise prediction of symptom occurrences post-diagnosis could alleviate care responsibilities and enhance our ability to tailor support to patients.\nExisting disease progression models for neurodegeneration have typically been used with imaging data (Oxtoby and Alexander, 2017), with proven application in identifying data-driven trajectories of brain volume change (Young et al., 2018). We were interested if similar probabilistic models could be applied to healthcare measures in order to understand changes in clinical presentation (Beckett, 1993).\nThis work builds on the research by Huang and Alexander (2012) and Young et al. (2015) that combined disease progression models with the Mallows model for ranking disease events. Employing a Bayesian framework, we infer the parameters of a Mallows distribution (Mallows, 1957), which is analogous to a Gaussian distribution for rankings. Model fitting employed a Markov Chain Monte Carlo (MCMC) approach.\nThe primary contributions of our research are as follows:\n• Enhancement of an existing model -adapted\nMallows model to effectively handle censored data and partial rankings.\n• Application to a novel healthcare datasetapplied our model to a previously unexplored dataset consisting of symptom questionnaires.\nTo evaluate the performance of our model, we first conducted tests on synthetic data with added noise. Subsequently, we assessed the model's effectiveness using a real-world healthcare questionnaire dataset (Hardy et al., 2023) from patients diagnosed with primary progressive aphasia (PPA), a rare language led dementia associated with atypical symptoms (Gorno-Tempini et al., 2011)." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Mathematical model", "publication_ref": [ "b5", "b15", "b15", "b5", "b0", "b4", "b3" ], "table_ref": [], "text": "From a set of disease-related symptoms, each patient will typically present with a subset, with multiple symptoms occurring concurrently. These individual symptom profiles can be viewed as variations of a central ranking at a group level. In our model, we understand these patient-specific symptom profiles as partial rankings that follow a Mallows distribution.\nConsider we have M survey responses. As part of the survey each respondent m ∈ M was asked to rank n symptoms, S = {S 1 , S 2 , ..., S n } in l positions, {1, 2, ..., l} according to if and when the symptoms occurred.\nIndividuals might not rank every symptom either due to randomness or due to having only experienced a subset of symptoms at the time of answering the survey. 
To account for this we define the subset Ŝm ⊆ S, with r m = | Ŝm |, of symptoms for which individual m's response was recorded. Each questionnaire response is a partial ranking of events represented by the mapping: σ m : { Ŝm,1 , ..., Ŝm,r } -→ {1, ..., l} rm .\n(1)\nFor a participant m, their partial ranking of events is given by:\nX m = {{σ -1 m (1)}, ..., {σ -1 m (l)}},(2)\nwhere σ -1 m (l) refers to the set of events assigned to stage l. The Mallows model is a probability distribution for rankings parameterised by a central ranking π 0 and a spread parameter λ (Fligner and Verducci, 1986;Tang, 2019). The probability density function is given by:\nf π0,λ (x) = ψ(λ)e -1 λ d(x,π0) ,(3)\nψ(λ) = π∈Sn e -1 λ d(π,π0 ),(4)\nwhere ψ(λ) is a normalising function summed over the set of all possible rankings S n , and d(π, π 0 ) is a distance metric generally taken to be Kendall's Tau (Tang, 2019). For λ ≥ 0, the central ranking π 0 is the mode, and as λ -→ 0 the model is concentrated at π 0 . When λ = 0 it is the uniform distribution (Fligner and Verducci, 1986). We describe a set of ranks as having a strong consensus for λ ≤ ε and weak consensus as λ -→ ∞ (Ali and Meilǎ, 2012).\nWe adapt the model to account for partial rankings by using the Kendall's Tau distance metric with penalty parameter p (Fagin et al., 2003):\nd p (π, π 0 ) = |β D | + p * |β E |,(5)\nwhere β D is the set of discordant pairs, and β E is the set of pairs that have equal position in one ranking but not in the other. The choice of p ∈ [0, 1] determines the weighting of partial ranks -for simplicity we fix p = 0.5 (Cohen et al., 1997), further details in Appendix A. To account for comparison of rankings where items are unranked due to censoring, or missingness, we drop the comparison of pairs from the calculation (where one or more rank is missing).\nIt is worth noting the difference in the size of the space between a traditional Mallow's model and one that is partially ranked. For a fully ranked model where rankings are permutations of the range of numbers up to the maximum rank n, the space of rankings is: |S n | = n!. However in the case of partial rankings the space of possible rankings is significantly larger with:\n|S n | = l n . (6\n)\nThe likelihood of a patients data X m given π 0 , λ is:\np(X m |π 0 , λ) = f π0,λ (X m ). (7\n)\nWe assume that data from patients is independent, obtaining the likelihood for dataset X as:\np(X|π 0 , λ) = m f π0,λ (X m ) (8)\nAccording to Bayes theorem, the model posterior is given by:\np(π 0 , λ|X) ∝ p(π 0 , λ)p(X|π 0 , λ)(9)\nwith joint prior,\np(π 0 , λ) = p(π 0 |λ) * p(λ). (10\n)\nThe prior distributions on π 0 and λ are taken to be:\nλ ∼ truncatednorm(0, 1), ( 11)\nπ 0 ∼ mallows(π init , λ). (12\n)\nwith π init informed by clinical input. We justify a choice of an informative prior based on the large distribution space." }, { "figure_ref": [], "heading": "Model fitting", "publication_ref": [], "table_ref": [], "text": "We use an MCMC algorithm to sample from p(π 0 , λ|X). Details are in Appendix A. We derive a maximally likely ranking of symptoms π 0 , and the corresponding spread parameter λ using the MAP estimate of a set of 1, 000 MCMC samples.\nThe MAP estimates are defined as:\nπ 0,MAP = arg max π0 P (D|π 0 , λ)P (π 0 , λ) (13) λ 0,MAP = arg max λ P (D|π 0 , λ)P (π 0 , λ). (14\n)" }, { "figure_ref": [], "heading": "Experimental set-up", "publication_ref": [ "b8" ], "table_ref": [], "text": "We performed two experiments. 
First, we perform parameter estimation on synthetic data that we generate to mimic our real-world data of interest, providing a ground truth to assess our proposed method's accuracy. Second, we estimate model parameters for real-world healthcare data from a study of people living with PPA (Hardy et al., 2023)." }, { "figure_ref": [ "fig_0" ], "heading": "Synthetic data", "publication_ref": [], "table_ref": [], "text": "For a given central ordering, π 0 , with length n, and maximum rank of l and spread parameter, λ, we generated synthetic datasets of size M by sampling from the space of possible rankings S n according to equation 3. We simulated missing data due to the right censoring issue described earlier (details in Appendix A). Figure 1 shows the distribution of simulated data about the central ordering as a function of the Mallows spread parameter, λ. We used the model to infer the parameters of the synthetic data and compared the result to the true parameters used to generate the data. Table 1 demonstrates the models utility in a dataset of size M = 100 with rankings of length n = 8, and maximum rank of l = 4. The ranking length and maximum rank were chosen based on the healthcare data in Section 3.2. The values are averaged over 12 repeats with random initialisation. " }, { "figure_ref": [], "heading": "Healthcare data", "publication_ref": [ "b8", "b19", "b6" ], "table_ref": [], "text": "Data was collected from both carers to people living with svPPA, and people with svPPA in the UK and Australia. Further details of the original clinical study can be found in Hardy et al. (2023). The full questionnaire included n = 72 symptoms that individuals were asked to rank. Based on the results in the synthetic dataset we realised it was unrealistic to model the full dimensionality of the data, as such we chose a subset of symptoms relating to personal care and well being, and which at least 30% of the cohort had ranked. The resultant list of eight symptoms is given in Appendix B. In total there were m = 30 individuals in the dataset.\nFor model comparison we analysed this dataset using the ordinal subtype and stage inference (SuStaIn) algorithm (Young et al., 2021) with a single subtype, equivalent to an ordinal version of the Event Based Model (EBM) (Fonteijn et al., 2012). This model is based on the assumption that disease events that have been experienced by more individuals in the dataset occur earlier in the disease progression. It uses a Bayesian model and MCMC to calculate the MAP sequence of disease events. To accommodate SuS-taIn, the data is simplified to be either 1 if the event was ranked by the respondent, or 0 otherwise. The model assumes that there are as many stages of disease as events, and hence ranks the events occurring as per a trajectory, or permutation. Our model is able to accurately infer the model parameters. As expected, model results worsen as a function of the percent of missing data, and the spread parameter λ 0 . The mean absolute error of zero, for λ 0 = 1.0 and 10% missing, is out of pattern with the rest of the results, but arises from the model finding the exact central ordering in each experiment repetition." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Synthetic data", "publication_ref": [], "table_ref": [], "text": "Even in this small synthetic data example, the size of the sample space is 8 4 = 4096. 
Due to the exponential nature of the sample space (6) the number of calculations required scales poorly, especially since the model requires a repeat calculation of the normalising constant at each model iteration, with each calculation of the normalising constant being a sum over l n values." }, { "figure_ref": [ "fig_2", "fig_3", "fig_3", "fig_2", "fig_3", "fig_3", "fig_2" ], "heading": "Healthcare data", "publication_ref": [], "table_ref": [], "text": "Figure 2 shows the results of modelling the PPA dataset using the baseline model -the ordinal EBM. Each symptom in this model is assigned a unique stage, resulting in a permutation ranking.\nFigure 3 shows the resultant central ordering for the dataset using our partially-ordered Mallows model. The prior central ordering used to initialise the model from was based on guidance from PPA researchers. We initialised the spread parameter from λ init = 1. The ranking of symptoms shown in Figure 3 was the same as the clinically informed ranking of symptoms suggested by the PPA researchers. Compared to Figure 2, in Figure 3 we see the visual grouping of symptoms which co-occur. Both models identify 'changes to sleeping patterns' as the first symptom, and 'difficulty swallowing' as the final symptom. The stage numbering starts from 2 in Figure 3, as the stage numbering is a direct reflection of how the symptoms ranks were assigned in the questionnaire responses. The numbering in Figure 2 is abstracted from this, as the number of stages relates to the total number of symptoms ranked." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b0", "b3", "b5", "b16", "b0", "b14" ], "table_ref": [], "text": "We adapted the Mallows model for partially ranked and censored data, and applied a Bayesian method to estimate model parameters. We used the method to identify the central ranking of symptoms in a novel healthcare dataset collected in PPA.\nFinding the optimal ordering proves to be computationally challenging, being NP-hard even in cases with just four votes within a fully-ordered model (Ali and Meilǎ, 2012;Cohen et al., 1997). This computational complexity contributes to scalability issues within this modeling framework. The current bottleneck arises from calculating the normalising constant across the entire distribution space. While a more accurate analytic approximation exists for the fully ranked Mallows model (Fligner and Verducci, 1986), a counterpart for the partially ranked model does not yet exist to the best of our knowledge.\nThere was perhaps an imbalance between model complexity and data size -especially for this rare disease. However, the partial rankings model and our new method for handling missing data are exciting prospects for future work in other areas. We explored tolerance of our method to missing (synthetic) data, but further detailed experiments are warranted. In particular rigorous ablation studies, to test model performance as a function of censoring, and evaluation using the widely applicable information criterion (WAIC) (Vehtari et al., 2015). Specific to our real-world experiments, the questionnaire's phrasing itself introduced a bias in data acquisition -symptoms were ordered in a sequence arising from clinical experience. 
However, this could be seen as a kind of implicit prior rather than a bias, but we acknowledge the potential influence on survey respondents' perceived temporal relationships among the symptoms.\nConsidering real-world healthcare datasets, it is reasonable to anticipate modest consensus levels equating to larger values of the spread parameter λ. Data with pronounced consensus would permit more efficient central ordering estimation, potentially through alternative Kemeny optimisation methods (Ali and Meilǎ, 2012). However, in scenarios marked by weak consensus, the necessity of substantial datasets becomes evident. Future research should consider alternative datasets, such as the activities of daily living questionnaires (ADLQ). The ADLQ inherently establishes latent event rankings, and is commonly used in clinics, thus offering access to larger datasets. Additionally, the potential for application in wearable technology, used for tracking daily activities in dementia patients and offering objective data (Ray et al., 2019), merits exploration." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In conclusion, our study offers insight into the use of Bayesian inference for uncovering symptom sequences. In particular we developed a new Mallows model which allowed us to model partially ranked and censored data. This is an NP-hard problem and our modest results reflect this -but we are optimistic that solving the optimisation problem will improve performance. Furthermore, we think the idea of using statistical analyses to understand the lived experience of disease warrants further exploration, and we imagine this could have applications in data from wearable technology. section B.2). To simulate a missing percent of q we randomly sampled a subset of q individuals in the dataset. We then artificially truncated their ranking to reflect right censoring. We did this by selecting a rank position, r q in a normal range around three quarters of the way through the total ranking: r q ∼ N ormal(0.75m, 1), where m is the length of the ranking. Then for all ranks in a position greater than or equal to r q we deleted the rank information." }, { "figure_ref": [], "heading": "B.2. Healthcare data", "publication_ref": [], "table_ref": [], "text": "Table 2 lists the subset of symptoms we included in the model. Of the n = 30 respondents 27 were caregivers, or bereaved caregivers, to people living with PPA, and three were responses recorded by people living with PPA.\nWhen answering the PPA questionnaire, respondents were told not to respond to questionnaires relating to symptoms that they had yet to experience. As a result there was considerable right censoring of the data, as most respondents were at a middle disease stage, and thus yet to experience symptoms associated with the latter disease course. Table 2: The list of well-being symptoms we used in the model. We had responses for n = 30 individuals with svPPA, collected from caregivers (and bereaved caregivers)." }, { "figure_ref": [], "heading": "Appendix A. Model details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1. Model parameters", "publication_ref": [ "b4", "b4", "b3" ], "table_ref": [], "text": "The introduction of the hyperparameter p allows us to weight the contribution of partial rankings with the Kendall's Tau measure. For p ∈ [0, 0.5) Kendall's Tau distance metric with penalty parameter p is a 'half metric', failing to satisfy the triangle inequality. 
For p ∈ [0.5, 1] it is a full metric (Fagin et al., 2003). As per Fagin et al. (2003) the choice of p = 0.5 can be thought of as a 'neutral penalty score' for partial ranks, indicating that whilst not of equal importance as discordant pairs (i.e. p = 1), partial ranks should be considered distinct from concordant pairs (i.e. p = 0). A similar weighting is also utilised in Cohen et al. (1997). Furthermore, a choice of p ∈ [0.5, 1] is desirable as it avoids modelling complications arising from using a half metric." }, { "figure_ref": [], "heading": "A.2. MCMC algorithm", "publication_ref": [ "b17" ], "table_ref": [], "text": "The MCMC algorithm utilises Gibb's sampling similar to the method used by Young et al. (2015). The MCMC algorithm proceeds as given in Algorithm 1." }, { "figure_ref": [], "heading": "Appendix B. Supplementary results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B.1. Synthetic data", "publication_ref": [], "table_ref": [], "text": "We decided to introduce noise to the synthetic data to mimic what was seen in the healthcare data (see" } ]
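As a concrete illustration of the penalised Kendall's tau (Eq. 5) and the Mallows density with its l^n normaliser (Eq. 3, 4 and 6), the short Python sketch below encodes each partial ranking as a list of stages, with None marking an unranked (censored) symptom whose pairs are dropped from the comparison. The function names, toy central ordering and brute-force enumeration of the normaliser are our own illustrative choices rather than the authors' code, and the enumeration is only feasible for small n and l, mirroring the scalability issue discussed above.

from itertools import combinations, product
import math

def kendall_tau_partial(r1, r2, p=0.5):
    # Penalised Kendall's tau (Eq. 5): discordant pairs count 1, pairs tied in
    # exactly one ranking count p, pairs involving a missing rank are dropped.
    d = 0.0
    for i, j in combinations(range(len(r1)), 2):
        if None in (r1[i], r1[j], r2[i], r2[j]):
            continue
        s1, s2 = r1[i] - r1[j], r2[i] - r2[j]
        if s1 * s2 < 0:
            d += 1.0
        elif (s1 == 0) != (s2 == 0):
            d += p
    return d

def mallows_log_density(x, pi0, lam, l, p=0.5):
    # log f(x | pi0, lambda), normalised over all l**n stage assignments (Eq. 6).
    z = sum(math.exp(-kendall_tau_partial(list(r), pi0, p) / lam)
            for r in product(range(1, l + 1), repeat=len(pi0)))
    return -kendall_tau_partial(x, pi0, p) / lam - math.log(z)

# Toy example: four symptoms ranked into three stages, one symptom unranked.
print(mallows_log_density([1, 2, 2, None], [1, 1, 2, 3], lam=1.0, l=3))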
Machine learning models offer the potential to understand diverse datasets in a data-driven way, powering insights into individual disease experiences and supporting more equitable healthcare. In this study, we explore Bayesian inference for characterising symptom sequences and the associated modelling challenges. We adapt the Mallows model to account for partial rankings and right-censored data, employing a custom MCMC fitting procedure. Our evaluation, encompassing synthetic data and a primary progressive aphasia dataset, highlights the model's efficacy in revealing mean orderings and estimating ranking variance, which could enhance clinical understanding of symptom occurrence. However, our work is limited by model scalability and the small size of the available dataset.
Bayesian inference of a new Mallows model for characterising symptom sequences applied in primary progressive aphasia
[ { "figure_caption": "Figure 1 :1Figure 1: Example of how the synthetic Mallowsdistributed data varies as a function of the the spread parameter λ.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "λ 0 % missing MAE λ 0 d p (π 0 , Results from synthetic data experiments. Mean Absolute Error (MAE) in Mallows model spread λ 0 and mean distance from central ranking π 0 increased with missing data % (π 0 denotes the MAP estimate). The central ordering was π 0 =[1, 2, 2, 3, 3, 3, 3, 4].", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Data-driven symptom staging for the set of 30 individuals with svPPA using the ordinal EBM.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Data-driven symptom staging for the set of 30 individuals with svPPA, estimated by our partial Mallows model.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" } ]
Beatrice Taylor; Cameron Shand; Chris J D Hardy; Neil P Oxtoby
[ { "authors": "Alnur Ali; Marina Meilǎ", "journal": "Mathematical Social Sciences", "ref_id": "b0", "title": "Experiments with Kemeny ranking: What works when?", "year": "2012" }, { "authors": "Laurel A Beckett", "journal": "Springer", "ref_id": "b1", "title": "Maximum Likelihood Estimation in Mallows's Model Using Partially Ranked Data", "year": "1993" }, { "authors": "Emilie V Brotherhood; Joshua Stott; Gill Windle; Suzie Barker; Siobhan Culley; Emma Harding; Paul M Camic; Maria Caufield; Victory Ezeofor; Zoe Hoare; Roberta Mckee-Jackson; Jennifer Roberts; Rebecca Sharp; Aida Suarez-Gonzalez; Mary Pat Sullivan; Rhiannon Tudor Edwards; Jill Walton; Claire Waddington; Eira Winrow; Sebastian J Crutch", "journal": "International Journal of Geriatric Psychiatry", "ref_id": "b2", "title": "Protocol for the Rare Dementia Support Impact study: RDS Impact", "year": "2020" }, { "authors": " Ww Cohen; R E Schapire; Y Singer", "journal": "", "ref_id": "b3", "title": "Learning to order things", "year": "1997" }, { "authors": "Ronald Fagin; Ravi Kumar; D Sivakumar", "journal": "SIAM Journal on Discrete Mathematics", "ref_id": "b4", "title": "Comparing Top k Lists", "year": "2003" }, { "authors": "M A Fligner; J S Verducci", "journal": "Journal of the Royal Statistical Society: Series B (Methodological)", "ref_id": "b5", "title": "Distance Based Ranking Models", "year": "1986" }, { "authors": "Marc Hubert M Fonteijn; Matthew J Modat; Josephine Clarkson; Manja Barnes; Nicola Z Lehmann; Rachael I Hobbs; Sarah J Scahill; Sebastien Tabrizi; Nick C Ourselin; Fox", "journal": "NeuroImage", "ref_id": "b6", "title": "An event-based model for disease progression and its application in familial alzheimer's disease and huntington's disease", "year": "2012" }, { "authors": "M L Gorno-Tempini; A E Hillis; S Weintraub; A Kertesz; M Mendez; S F Cappa; J M Ogar; J D Rohrer; S Black; B F Boeve; F Manes; N F Dronkers; R Vandenberghe; K Rascovsky; K Patterson; B L Miller; D S Knopman; J R Hodges; M M Mesulam; M Grossman", "journal": "Neurology", "ref_id": "b7", "title": "Classification of primary progressive aphasia and its variants", "year": "2011" }, { "authors": "Chris Jd Hardy; Cathleen Taylor-Rubin; Beatrice Taylor; Emma Harding; Aida Suarez Gonzalez; Jessica Jiang; Laura Thompson; Rachel Kingma; Anthipa Chokesuwattanaskul; Ffion Walker", "journal": "Alzheimer's & Dementia", "ref_id": "b8", "title": "Symptom-led staging for semantic and nonfluent/agrammatic variants of primary progressive aphasia", "year": "2023" }, { "authors": "Jonathan Huang; Daniel Alexander", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b9", "title": "Probabilistic Event Cascades for Alzheimer's disease", "year": "2012" }, { "authors": "Colin L Mallows", "journal": "i. 
Biometrika", "ref_id": "b10", "title": "Non-null ranking models", "year": "1957" }, { "authors": "Charles R Marshall; Chris J D Hardy; Anna Volkmer; Lucy L Russell; Rebecca L Bond; Phillip D Fletcher; Camilla N Clark; Catherine J Mummery; Jonathan M Schott; Martin N Rossor; Nick C Fox; Sebastian J Crutch; Jonathan D Rohrer; Jason D Warren", "journal": "Journal of Neurology", "ref_id": "b11", "title": "Primary progressive aphasia: a clinical approach", "year": "2018" }, { "authors": "Emma Nichols; Jaimie D Steinmetz; Emil Stein; Kai Vollset; Julian Fukutaki; Foad Chalek; Amir Abd-Allah; Ahmed Abdoli; Eman Abualhasan; Tayyaba Abu-Gharbieh; Tayyaba Akram", "journal": "The Lancet Public Health", "ref_id": "b12", "title": "Estimation of the global prevalence of dementia in 2019 and forecasted prevalence in 2050: an analysis for the Global Burden of Disease Study 2019", "year": "2022-02" }, { "authors": "Neil P Oxtoby; Daniel C Alexander", "journal": "Current Opinion in Neurology", "ref_id": "b13", "title": "Imaging plus X: Multimodal models of neurodegenerative disease", "year": "2017" }, { "authors": "Partha Pratim Ray; Dinesh Dash; Debashis De", "journal": "Journal of Medical Systems", "ref_id": "b14", "title": "A Systematic Review and Implementation of IoT-Based Pervasive Sensor-Enabled Tracking System for Dementia Patients", "year": "2019-09" }, { "authors": "Wenpin Tang", "journal": "", "ref_id": "b15", "title": "Mallows Ranking Models: Maximum Likelihood Estimate and Regeneration", "year": "2019" }, { "authors": "Aki Vehtari; Andrew Gelman; Jonah Gabry", "journal": "Statistics and Computing", "ref_id": "b16", "title": "Practical Bayesian model evaluation using leaveone-out cross-validation and WAIC", "year": "2015" }, { "authors": "Alexandra L Young; Neil P Oxtoby; Jonathan Huang; Razvan V Marinescu; Pankaj Daga; David M Cash; Nick C Fox; Sebastien Ourselin; Jonathan M Schott; Daniel C Alexander", "journal": "Springer International Publishing", "ref_id": "b17", "title": "Multiple Orderings of Events in Disease Progression", "year": "2015" }, { "authors": "Alexandra L Young; V Razvan; Neil P Marinescu; Martina Oxtoby; Keir Bocchetta; Nicholas C Yong; David M Firth; David L Cash; Katrina M Thomas; Jorge Dick; Cardoso", "journal": "Nature Communications", "ref_id": "b18", "title": "Uncovering the heterogeneity and temporal complexity of neurodegenerative diseases with Subtype and Stage Inference", "year": "2018" }, { "authors": "Alexandra L Young; Jacob W Vogel; Leon M Aksman; Peter A Wijeratne; Arman Eshaghi; Neil P Oxtoby; C R Steven; Daniel C Williams; Alexander", "journal": "Frontiers in artificial intelligence", "ref_id": "b19", "title": "Alzheimer's Disease Neuroimaging Initiative. Ordinal sustain: subtype and stage inference for clinical scores, visual ratings, and other ordinal data", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 118.48, 556.52, 182.54, 12.69 ], "formula_id": "formula_0", "formula_text": "X m = {{σ -1 m (1)}, ..., {σ -1 m (l)}},(2)" }, { "formula_coordinates": [ 2, 127.21, 660.69, 173.81, 13.02 ], "formula_id": "formula_1", "formula_text": "f π0,λ (x) = ψ(λ)e -1 λ d(x,π0) ,(3)" }, { "formula_coordinates": [ 2, 141.26, 678.73, 159.76, 23.5 ], "formula_id": "formula_2", "formula_text": "ψ(λ) = π∈Sn e -1 λ d(π,π0 ),(4)" }, { "formula_coordinates": [ 2, 366.94, 247.97, 173.06, 9.65 ], "formula_id": "formula_3", "formula_text": "d p (π, π 0 ) = |β D | + p * |β E |,(5)" }, { "formula_coordinates": [ 2, 403.84, 481.9, 131.92, 11.69 ], "formula_id": "formula_4", "formula_text": "|S n | = l n . (6" }, { "formula_coordinates": [ 2, 535.76, 483.97, 4.24, 8.74 ], "formula_id": "formula_5", "formula_text": ")" }, { "formula_coordinates": [ 2, 369.3, 528.68, 166.45, 9.65 ], "formula_id": "formula_6", "formula_text": "p(X m |π 0 , λ) = f π0,λ (X m ). (7" }, { "formula_coordinates": [ 2, 535.76, 528.68, 4.24, 8.74 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 2, 366.89, 586.61, 173.12, 19.61 ], "formula_id": "formula_8", "formula_text": "p(X|π 0 , λ) = m f π0,λ (X m ) (8)" }, { "formula_coordinates": [ 2, 357.32, 651.52, 182.68, 9.65 ], "formula_id": "formula_9", "formula_text": "p(π 0 , λ|X) ∝ p(π 0 , λ)p(X|π 0 , λ)(9)" }, { "formula_coordinates": [ 2, 371.16, 696.06, 164.42, 9.65 ], "formula_id": "formula_10", "formula_text": "p(π 0 , λ) = p(π 0 |λ) * p(λ). (10" }, { "formula_coordinates": [ 2, 535.57, 696.06, 4.43, 8.74 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 3, 128.65, 130.06, 167.94, 9.65 ], "formula_id": "formula_12", "formula_text": "π 0 ∼ mallows(π init , λ). (12" }, { "formula_coordinates": [ 3, 296.59, 130.06, 4.43, 8.74 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 3, 94.61, 299.87, 206.41, 38.86 ], "formula_id": "formula_14", "formula_text": "π 0,MAP = arg max π0 P (D|π 0 , λ)P (π 0 , λ) (13) λ 0,MAP = arg max λ P (D|π 0 , λ)P (π 0 , λ). (14" }, { "formula_coordinates": [ 3, 296.59, 322.13, 4.43, 8.74 ], "formula_id": "formula_15", "formula_text": ")" } ]
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b11", "b14", "b1", "b10", "b7", "b2", "b15", "b20", "b18", "b13", "b5" ], "table_ref": [], "text": "Vision-based gait recognition refers to the use of vision technologies for individual identification based on human walking patterns. Compared to other biometric techniques such as face, fingerprint, and iris recognition, gait recognition offers the benefits of non-intrusive and long-distance identification without requiring the cooperation of the subject of interest. These advantages make gait recognition particularly suitable for various security scenarios such as suspect tracking and crime investigation (Nixon and Carter 2006).\nBefore leveraging deep models to learn gait features, a fundamental issue worth exploring is to consider the ideal input modality. To achieve robust long-term human identification, this input should be the 'clean' gait representation maintaining gait-related features such as body shape, structure, and dynamics, and meanwhile eliminate the influence of gait-unrelated factors, such as background, clothing, and viewpoints. In recent literature, the binary silhouettes and skeletons serve as the two most prevailing gait representations (Shen et al. 2022). As shown in Fig. 1, they both explicitly present the structural characteristics of the human body, e.g., the length, ratio, and movement of human limbs. Silhouettes, differently, have more discriminative capacity by explicitly maintaining appearance information. However, utilizing appearance information from silhouettes is not always beneficial for identification, as these characteristics are usually vulnerable and mixed up with the shape of dressing and carrying items. Conversely, skeletons present an appearance-free representation and are naturally robust to appearance changes. Nevertheless, existing skeleton-based methods primarily employ Graph Convolutional Networks (GCNs) on conventional skeletal representations (i.e. 2D/3D coordinates) and provide unsatisfactory performance, particularly with real-world applications.\nTo explore the cooperativeness and complementarity arXiv:2311.13444v2 [cs.CV] 18 Dec 2023 natures of body shape and structural features, this paper introduces a novel skeleton-based gait representation called Skeleton Map, drawing inspirations from related works (Duan et al. 2022;Liu and Yuan 2018;Liao et al. 2022). As illustrated in Fig. 1, the skeleton map represents the coordinates of human joints as a heatmap with Gaussian approximation and gait-oriented designs. This approach aligns skeleton and silhouette data across spatial-temporal dimensions, representing the skeleton as a silhouette-like image without exact body shapes. To further align the network architectures, we introduce a baseline model referred to as SkeletonGait. This model is developed by replacing the input of DeepGaitV2 (Fan et al. 2023) from the conventional silhouette to the skeleton map. This straightforward design is strongly motivated by two-fold considerations: a) We establish the alignments between SkeletonGait and DeepGaitV2 in terms of both input data format and network architectures, facilitating an intuitive comparison of the representational capacities of solely body structural features v.s. the combination of body shape and structural features1 . 
b) Notably, DeepGaitV2 has achieved the latest stateof-the-art performance on various gait datasets, motivating the adoption of its architecture as the baseline for this paper.\nAs shown in Fig. 1, we present a comprehensive evaluation on five popular large-scale gait datasets: OU-MVLP (Takemura et al. 2018), GREW (Zhu et al. 2021), Gait3D (Zheng et al. 2022), SUSTech1K (Shen et al. 2023), and CCPG (Li et al. 2023). Here the label 'SOTA Skeleton' denotes the most cutting-edge performances achieved by existing skeleton-based methods, regardless of the sources of publication. According to in-depth investigations, we have uncovered the following insights: 1) Compared with previous skeleton-based methods, SkeletonGait better exposes the importance of body structural features in describing gait patterns thanks to its competitiveness. The underlying reasons, i.e., the advantages of the skeleton map over raw joint coordinates, will be carefully discussed. 2) Interestingly, despite GREW is usually regarded as the most challenging gait dataset due to its extensive scale and real-world settings, SkeletonGait performing impressive performance suggests that the walking patterns of its subjects can be effectively represented solely by body structural attributes, with no requirement for shape characteristics. This revelation prompts a subsequent investigation into the potential lack of viewpoint diversity of GREW. 3) When the input silhouettes become relatively unreliable, such as in instances of poor illumination in SUSTech1K and complex occlusion in Gait3D and GREW, the skeleton map emerges as a pivotal player in discriminative and robust gait feature learning. Further findings and insights will be discussed in the experiment section.\nBy integrating the superiority of silhouette and skeleton map, a novel gait framework known as SkeletonGait++ is introduced. In practice, SkeletonGait++ effectively ag-gregates the strengths of these two representations by a fusion-based multi-branch architecture. Experiments show that SkeletonGait++ reaches a pioneering state-of-the-art performance, surpassing existing methods by a substantial margin. Further visualizations verify that SkeletonGait++ is capable of adaptively capturing meaningful gait patterns, consisting of discriminative semantics within both body structural and shape features.\nOverall, this paper promotes gait research in three aspects:\n• The introduction of the skeleton map aligns two widely employed gait representations, namely the skeleton and silhouette, in terms of input data format. This alignment facilitates an intuitive exploration of their collaborative and complementary characteristics. " }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b13", "b17", "b8", "b16", "b6", "b18", "b12", "b10", "b7", "b7", "b1" ], "table_ref": [], "text": "Gait Representations. The popular gait representations are primarily derived from RGB images, including raw RGB images, binary silhouettes, optical images, 2D/3D skeletons, and human meshes. To mitigate the influence of extraneous noise stemming from color, texture, and background elements, these representations often rely on preprocessing stages or end-to-end learning approaches. Beyond the typical RGB cameras, some studies propose novel gait representations by incorporating emerging sensors such as Li-DAR (Shen et al. 2023) and event cameras (Wang et al. 2022). 
However, these sensors are currently less commonly found in existing CCTVs, making them temporarily unsuitable for large-scale video surveillance applications. This paper focuses on two of the most widely-used gait representations, i.e. silhouette and skeleton data.\nAccording to the classical taxonomy, gait recognition methods can be broadly classified into two categories: model-based and appearance-based methods. Model-based Gait Recognition methods utilize the underlying structure of the human body as input, such as the estimated 2D / 3D skeleton and human mesh. With extremely excluding visual clues, these gait representations, which are formally parameterized as coordinates of human joints or customized vectors in most cases, are theoretically 'clean' against factors like carrying and dressing items. In recent literature, PoseGait (Liao et al. 2020) combines the 3D skeleton data with hand-crafted characteristics to overcome the viewpoint and clothing variations, GaitGraph (Teepe et al. 2021) introduces a graph convolution network for 2D skeleton-based gait representation learning, HMRGait (Li et al. 2020) Additionally, there are also some progressive multi-modal gait frameworks, such as SMPLGait (Zheng et al. 2022) that exploited the 3D geometrical information from the SMPL model to enhance the gait appearance feature learning, and BiFusion (Peng et al. 2023) that integrated skeletons and silhouettes to capture the rich gait spatiotemporal features. Related Works to Skeleton Map. Liu et al. (Liu and Yuan 2018) introduced the aggregation of pose estimation maps, which are intermediate feature maps from skeleton estimators, to create a heatmap-based representation for action recognition. This idea has been extended to gait recognition by Liao et al. (Liao et al. 2022). However, the intermediate feature often involves float-encoded noises, potentially incorporating body shape information that is undesirable for model-based gait applications. Additionally, Liao et al. (Liao et al. 2022) have not demonstrated competitive results on the challenging outdoor gait datasets using pose heatmaps. Similar to the approach in (Duan et al. 2022), our skeleton map is generated solely from the coordinates of human joints, deliberately excluding any potential visual clues hidden in pose estimation maps. But differently, we place emphasis on the pre-treatment of data and the design of deep models for gait recognition purposes." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "This section begins with outlining the generation of skeleton maps. Subsequently, we delve into the specifics of Skeleton-Gait and SkeletonGait++. Implementation details are introduced at the end of this section." 
}, { "figure_ref": [ "fig_2", "fig_2", "fig_2", "fig_3" ], "heading": "Skeleton Map", "publication_ref": [ "b1", "b1", "b10", "b7" ], "table_ref": [], "text": "Given the coordinates of human joints (x k , y k , c k ), where (x k , y k ) and c k respectively present the location and confi- Firstly, considering the absolute coordinates of joints relative to the original image contain much gait-unrelated information like the walking trajectory and filming distance, we introduce the pre-treatments of center-and scalenormalization to align raw coordinates:\nCoordinates Skeleton Map R R/2 R R/2 H Skeleton Map (a) (b)\nx k = x k -x core + R/2 y k = y k -y core + R/2 x k = x k -y min y max -y min × H y k = y k -y min y max -y min × H(1)\nwhere\n(x core , y core ) = ( x11+x12 2 ,y11+y12\n2\n) presents the center point of two hips (11-th and 12-th human joints, their center can be regarded as the barycenter of the human body), and (y max , y min ) denotes the maximum and minimum heights of human joints (max k y k , min k y k ). In this way, we move the barycenter of the human body to (R/2, R/2) and normalize the body height to H, as shown in Fig. 2(a).\nTypically, the height of the human body is expected to exceed its width. As a result, the normalized coordinates of human joints, as defined in Eq. 1, should fall within the range of H × H. But in practice, the pose estimator is imperfect and may produce some outlier joints outside the H × H scope. To address these out-of-range cases, the resolution of the skeleton map, denoted as R, should be larger than H, ensuring coverage of all the coordinates. In our experiments, let R be 2H is enough for the OUMVLP, GREW, Gait3D, CCPG, and SUSTech1K datasets.\nAs illustrated in Figure 2 (a), the skeleton map is initialized as a blank image with a size of R × R. Then we draw it based on the normalized coordinates of human joints. Inspired by (Duan et al. 2022), we generate the joint map J by composing K Gaussian maps, where each Gaussian map is centered at a specific joint position and contributes to all the R × R pixels:\nJ (i,j) = K k e -(i-x k ) 2 +(j-y k ) 2 2σ 2 × c k (2)\nwhere J (i,j) presents the value of a certain point from {(i, j)|i, j ∈ {1, ..., R}}, and σ is a hyper-parameter controlling the variance of Gaussian maps. Similarly, we can also create a limb map L:\nL (i,j) = N n e -D((i,j),S[n -,n + ]) 2 2σ 2 × min(c n -, c n + ) (3)\nwhere S[n -, n + ] presents the n-th limb determined by n -th and n + -th joints with n -, n + ∈ {1, ..., K}. The function D((i, j), S[n -, n + ]) measures the Euclidean distance from the point (i, j) to the n-th limb, where n ∈ {1, ..., N } and N denotes the count of limbs.\nNext, the skeleton map is obtained by stacking J and L and thus has a size of 2×R×R. Notably, for the convenience of visualization, we repeat the last channel of all the skeleton maps shown in this paper to display the visual three-channel images with the size of 3 × R × R.\nAs shown in Figure 2 (b), we employ a subject-centered cropping operation to remove the unnecessary blank regions, thus reducing the redundancy in skeleton maps. In practice, the vertical range is determined by the minimum and maximum heights of pixels which possess non-zero values. Meanwhile, the horizontal cropping range spans from R-H 2 to R+H 2 . In this way, we remove extraneous areas outside the desired gait region, ensuring a more concise and compact skeleton map. 
Lastly, to align with the input size required by downstream gait models, the cropped skeleton maps are resized to 2 × 64 × 64 and further cropped by the widely-used double-side cutting strategy.\nAs a result, Fig. 3 exhibits some examples of the used skeleton maps with varying σ. As we can see, a smaller σ produces a visually thinner skeleton map, whereas excessively large σ may lead to visual ambiguity.\nCompared with approaches proposed by (Duan et al. 2022;Liu and Yuan 2018;Liao et al. 2022), our skeleton map introduces the following gait-oriented enhancements:\n• Cleanness. The implementation of center-normalization effectively eliminates identity-unrelated noise present in raw skeleton coordinates, i.e., the walking trajectory, and camera distance information. • Discriminability. Preceding methods tend to directly resize the obtained images of varying sizes into a predetermined fixed size, inevitably resulting in the loss of body ratio information. Conversely, the scale-normalization and subject-centered cropping techniques outlined in this paper ensure that the skeleton map preserves the authenticity of the length and ratio of human limbs. • Compactness. All the joints and limbs are drawn within a single map, optimizing the efficiency of the modeling process, as opposed to a stack of separate maps.\nPrevious skeleton-based gait recognition methods tend to model the coordinates of joints as non-grid gait graphs with learnable edges, potentially losing inherent structural priors within a highly structured human body. In this paper, the proposed skeleton map is a kind of grid-based skeletal gait representation, where the body structural characteristics highly desired by gait recognition, such as the length, ratio, and movement of body limbs, are explicitly and naturally distributed over the spatial and temporal dimensions, exactly matching the locality modeling requirement of fine-grained spatiotemporal gait description. Moreover, the skeleton map offers additional advantages:\n• The skeleton map shares similarities with gait graphs in terms of feature content and with silhouettes in terms of data format. This unique characteristic allows the skeleton map to benefit from recent advancements in both skeleton-based and silhouette-based methods. • Interestingly, the skeleton map can be perceived as a silhouette that excludes body shape information, facilitating an intuitive comparison of the representational capacities of solely body structural features v.s. the combination of body shape and structural features. • As an imagery input, the skeleton map can seamlessly integrate into image-based multi-modal gait models, particularly at the bottom stages of the model." }, { "figure_ref": [ "fig_4" ], "heading": "SkeletonGait", "publication_ref": [ "b2" ], "table_ref": [], "text": "Ideally, we can employ any image-based gait methods to build a skeleton-map-based baseline model. In this paper, SkeletonGait is developed by replacing the input of Deep-GaitV2 (Fan et al. 2023) from the silhouette to skeleton map, as shown in Fig. 4(a) and (b), The only architectural modification is to change the input channel of the Conv0, where the silhouette is a single-channel input and the skeleton map is a double-channel input. This straightforward design is strongly motivated by two primary reasons:\n• The alignment of network architectures enables a seamless and intuitive comparative study between the silhouette and skeleton map representations. 
• DeepGaitV2 has a straightforward architecture providing state-of-the-art performances across various gait datasets, making it well-suited for benchmarking. Table 1: Implementation details. The batch size (q, k) indicates q subjects with k sequences per subject. " }, { "figure_ref": [ "fig_4", "fig_4", "fig_4", "fig_4", "fig_4", "fig_4" ], "heading": "SkeletonGait++", "publication_ref": [], "table_ref": [], "text": "To integrate the superiority of silhouette and skeleton map, as shown in Fig. 4(c), SkeletonGait++ provides a fusionbased two-branch architecture involving the silhouette and skeleton branches. These two branches respectively share the same network architectures with DeepGaitV2 and Skele-tonGait at early stages, such as the Conv0 and Stage1. Then, a fusion module is responsible for aggregating these two feature sequences frame-by-frame. For the sake of brevity, Fig. 4 displays a single frame while ensuring correctness, as frames are processed in parallel. In this paper, we consider three kinds of fusion mechanisms:\n• Add Fusion. The feature maps from the silhouette and skeleton branch are combined using an element-wise addition operation, as demonstrated in Fig. 4(d).\n• Concatenate Fusion. The feature maps from the silhouette and skeleton branch are first concatenated along the channel dimension, and then transformed by a plain 1×1 convolution layer, as demonstrated in Fig. 4(e).\n• Attention Fusion. The feature maps from the silhouette and skeleton branch are first concatenated along the channel dimension, and then transformed by a small network to form a cross-branch understanding. Here the small network is composed of a squeezing 1 × 1, a plain 3 × 3, and an expansion 1 × 1 convolution layer. As shown in Fig. 4(e), a softmax layer is next employed to assign element-wise attention scores respectively for the silhouette and skeleton branch. Lastly, an element-wise weighted-sum operation is used to generate the output.\nNext, the Stage 3 and 4 possess the same network architectures as the SkeletonGait. Moreover, we also consider the fusion location. Fig. 4(c) exhibits the Low-Level fusion case. Another High-Level fusion model aggregates the fea- " }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b3", "b3" ], "table_ref": [], "text": "Table 1 displays the main hyper-parameters of our experiments. Unless otherwise specified, a) Different datasets often employ distinct pose data formats, such as COCO 18 for OU-MVLP, and BODY 25 for CCPG. To enhance flexibility, our implementation standardized these various formats to COCO 17 uniformly. b) DeepGaitV2 denotes its pseudo-3D variant thanks to its computational efficiency. c) The doubleside cutting strategy widely used for processing silhouettes is employed. The input size of skeleton maps is 2 × 64 × 44. d) At the test phase, the entire sequence of skeleton maps will be directly fed into SkeletonGait and SkeletonGait++.\nAs for the training stage, the data sampler collects a fixedlength segment of 30 frames as input. e) The spatial augmentation strategy suggested by (Fan et al. 2022) is adopted. f) The SGD optimizer with an initial learning rate of 0.1 and weight decay of 0.0005 is utilized. g) The σ controlling the variance in Eq. 2 and Eq. 3 is set to 8.0 as default. h) Our code has been integrated into OpenGait (Fan et al. 2022)." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "Datasets. 
Five popular gait datasets are employed for comprehensive comparisons, involving the OU-MVLP, SUSTech1K, CCPG, Gait3D, and GREW datasets. Therefore, the comparison scope spans from fully constrained laboratories (the former three) to real-world scenarios (the latter two). improvements in most cases. Specifically, it gains +5.3%, +22.9%, +15.6%, +36.5%, and 19.3% (average/overall) rank-1 accuracy on the OU-MVLP, GREW, Gait3D, CCPG, and SUSTech1K datasets, respectively. To exclude the potential positive influence brought by the model size of Skele-tonGait, we reduce its channels by half, thus making its model size nearly identical to that of GPGait, i.e., 2.85 v.s. 2.78M. After that, SkeletonGait reached the rank-1 accuracy of 33.2% and 70.9% on Gait3D and GREW, maintaining a higher performance than prior skeleton-based methods.\nSince the skeleton map can be perceived as a silhouette that excludes body shape information, by comparing Skele-tonGait with DeepGaitV2 in detail, we investigate that:\n• Importance. Structural features play a more important role than those shown by prior methods. Or rather, it may contribute over 50% according to the ratios between the performances of SkeletonGait and DeepGaitV2. • Superiority. When silhouettes become relatively unreliable, e.g., the night case of SUSTech1K in Tab. 5, Skele-tonGait surpasses DeepGaitV2 by a large margin, convincingly revealing the advantages of skeleton data. also become unreliable, particularly in scenarios of extensive occlusion or other challenging conditions. However, experimental results reveal that skeleton data is more robust in such demanding situations than silhouette data on existing gait datasets. This observation exhibits the significance of SkeletonGait++, as it effectively harnesses the strengths of both skeleton and silhouette data to tackle these challenges. Ablation Study. Table . 6 shows that: a) SkeletonGait is robust to the value of σ. b) σ = 8.0 is an experimentally optimal choice. Table . 7 reveals that: a) SkeletonGait++ is robust to both fusion location and mode. b) The low-level attention fusion is an experimentally optimal choice." }, { "figure_ref": [], "heading": "Discussions", "publication_ref": [], "table_ref": [], "text": "This paper introduces the skeleton map as a grid-based skeletal representation. The proposed SkeletonGait outperforms existing skeleton-based methods, emphasizing the importance of body structural features. SkeletonGait++ combines skeleton and silhouette features, achieving new stateof-the-art performance. The work demonstrates that modelbased gait recognition has much to explore in the future." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgement: This work was supported by the National Natural Science Foundation of China under Grant 61976144 and the Shenzhen International Research Cooperation Project under Grant GJHZ20220913142611021." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "• Challenge. As shown in Fig. 5, the cross-view problem is still a major challenge for skeleton-based methods. • Concerns about GREW. The GREW dataset is widely acknowledged as the most challenging gait dataset due to its largest scale and real-world settings. However, Skele-tonGait achieves a comparable performance compared to DeepGaitV2 on GREW, rather than on other relatively 'easy' datasets. In this paper, we observe that the gait pairs in GREW's test set seemly contain no many cross-view changes. 
As mentioned, SkeletonGait works well on the cross-limited-view cases as shown in Fig. 5. Therefore, we consider that the GREW dataset may lack viewpoint diversity, making its recognition task relatively easier compared with that of other datasets. Compare SkeletonGait++ with Other State-of-the-Arts. According to Tab. 3, 4, and 5, we find that:\n• Competitiveness. SkeletonGait++ reaches a new stateof-the-art with obvious gains, i.e., +8.1%, +3.2%, and +5.2% rank-1 accuracy on the GREW, Gait3D, and SUSTech1K, respectively. As for the CCPG dataset, it also achieves overall superior performance. • Benefits. Compared to DeepGaitV2, the additional skeleton branch of SkeletonGait++ notably enhances the recognition accuracy, particularly when the body shape becomes less reliable. This augmentation is particularly evident in challenging scenarios involving object carrying, occlusion, and poor illumination conditions, as observed on SUSTech1K dataset, i.e., Tab. 5. • Comprehensiveness. As shown in Fig. 6,DeepGaitV2 directs its attention towards regions that exhibit distinct and discriminative body shapes. On the other hand, SkeletonGait can only concentrate on 'clean' structural features over the body joints and limbs. In comparison, SkeletonGait++ strikes a balance between these approaches, effectively capturing the 'comprehensive' gait patterns that are rich in both body shape and structural characteristics. Especially for night and occlusion cases, SkeletonGait++ adaptively leverages the still reliable skeleton branch to support the robust gait representation learning. This is an urgent need for practical applications, and we also think this is the main reason causing the performance gains on Gait3D and GREW datasets.\nCertainly, there are instances where skeleton data could " } ]
The choice of representation is essential for deep gait recognition methods. The binary silhouettes and skeletal coordinates are two dominant representations in recent literature, achieving remarkable advances in many scenarios. However, inherent challenges remain: silhouettes are not always available in unconstrained scenes, and the structural cues in skeletons have not been fully exploited. In this paper, we introduce a novel skeletal gait representation named skeleton map, together with SkeletonGait, a skeleton-based method that exploits structural information from human skeleton maps. Specifically, the skeleton map represents the coordinates of human joints as a heatmap with Gaussian approximation, exhibiting a silhouette-like image devoid of exact body shape. Beyond achieving state-of-the-art performance over five popular gait datasets, more importantly, SkeletonGait uncovers novel insights about how important structural features are in describing gait and when they play a role. Furthermore, we propose a multi-branch architecture, named SkeletonGait++, to make use of complementary features from both skeletons and silhouettes. Experiments indicate that SkeletonGait++ outperforms existing state-of-the-art methods by a significant margin in various scenarios. For instance, it achieves an impressive rank-1 accuracy of over 85% on the challenging GREW dataset.
SkeletonGait: Gait Recognition Using Skeleton Maps
[ { "figure_caption": "Figure 1 :1Figure 1: The representations of the developed skeleton map v.s. the classical gait graph and silhouette. Only a single frame is displayed for brevity.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "fine-tunes a pre-trained human mesh recovery network to construct an end-to-end SMPL-based model, Despite the advances achieved on indoor OU-MVLP, previous model-based methods still have not exhibited competitive performance compared with the appearance-based ones on real-world gait datasets. Appearance-based Gait Recognition methods mostly learn gait features from silhouette or RGB images, leveraging informative visual characteristics. With the advent of deep learning, current appearance-based approaches primarily concentrate on spatial feature extraction and gait temporal modeling. Specifically, GaitSet (Chao et al. 2019) innovatively treats the gait sequence as a set and employs a maximum function to compress the sequence of framelevel spatial features. Due to its simplicity and effectiveness, GaitSet has emerged as one of the most influential gait recognition works in recent years. GaitPart (Fan et al. 2020) meticulously explores the local details of input silhouettes and models temporal dependencies using the Micromotion Capture Module. GaitGL (Lin, Zhang, and Yu 2021) argues that spatially global gait representations often overlook important details, while local region-based descriptors fail to capture relationships among neighboring parts. Consequently, GaitGL introduces global and local convolution layers. More recently, DeepGaitV2 (Fan et al. 2023) presents a unified perspective to explore how to construct deep models for outdoor gait recognition, bringing a breakthrough improvement on the challenging Gait3D and GREW.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The pipeline of skeleton map generation. (a) Center-normalization, scale-normalization, and skeleton rendering. (b) Subject-centered cropping.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: More examples of the skeleton coordinates v.s. silhouette images v.s. skeleton maps.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The network architectures of DeepGaitV2 v.s. SkeletonGait v.s. SkeletonGait++. The 'head' part is ignored for brevity.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "4% and 65.9%, respectively. These results consistently surpass other pose-based methods, revealing the robustness of SkeletonGait to different pose estimators. ‡ The lack of results for SkeletonGait++ on OU-MVLP is due to the absence of frame-by-frame alignment between the skeleton and silhouette.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: The heatmaps (Zhou et al. 2016) of DeepGaitV2 v.s. SkeletonGait and SkeletonGait++.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Datasets in use. 
#ID and #Seq present the number of identities and sequences.", "figure_data": "DataSetTrain Set Id SeqIdTest Set SeqCollection situationsOU-MVLP5,153 144,284 5,154 144,412 ConstrainedCCPG1008,1871008,095ConstrainedSUSTech1K2005,98885019,228 ConstrainedGait3D3,00018,940 1,0006,369Real-worldGREW20,000 102,887 6,000 24,000Real-worldtures before Stage 4, with additional Stage 2 and 3 respec-tively being inserted into the silhouette and skeleton branch.", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Table2shows the key statistical indicators. Our experiments strictly follow the official evaluation protocols. Compare SkeletonGait with Other Skeleton-based Stateof-the-Arts. As shown in Tab. 3, 4, and 5, SkeletonGait outperforms the latest skeleton-based methods by breakthrough Recognition results on three authoritative gait datasets, involving OUMVLP, GREW, and Gait3D. The best performances are in blod, and that by skeleton-based methods are in blod. The same annotation is applied in the following table. For OU-MVLP, we conducted experiments using both AlphaPose and OpenPose data, resulting in rank-1 accuracy of 67.", "figure_data": "Testing DatasetsInputMethodSourceOU-MVLPGREWGait3Drank-1rank-1 rank-5 rank-10 rank-20 rank-1 rank-5 mAP mINPSkeleton CoordinatesGaitGraph2 GaitTR GPGaitCVPRW2022 Arxiv2022 ICCV202362.1 56.2 60.533.5 54.5 53.6---11.1 6.6 22.5---Skeleton MapsSkeletonGaitOurs67.4 †77.487.991.093.238.156.728.916.1GaitSetAAAI201987.146.363.670.3-36.758.330.017.3GaitPartCVPR202088.544.060.767.3-28.247.621.612.4SilhouetteGaitGLICCV202189.747.3-29.748.522.313.6GaitBaseCVPR202390.860.1-64.6-DeepGaitV2Arxiv202391.977.788.991.8-74.488.065.8-Silhouette+ Skeleton / SMPLSMPLGait GaitRefCVPR2022 IJCB2023-90.253.067.9-73.077.546.3 49.064.5 49.337.2 40.722.2 25.3SkeletonGait++Ours- ‡85.892.694.395.577.689.470.342.6†", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Evaluation with different attributes on CCPG.", "figure_data": "", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Chao Fan; Jingzhe Ma; Dongyang Jin; Chuanfu Shen; Shiqi Yu
[ { "authors": "H Chao; Y He; J Zhang; J Feng", "journal": "", "ref_id": "b0", "title": "Gaitset: Regarding gait as a set for cross-view gait recognition", "year": "2019" }, { "authors": "H Duan; Y Zhao; K Chen; D Lin; B Dai", "journal": "", "ref_id": "b1", "title": "Revisiting skeleton-based action recognition", "year": "2022" }, { "authors": "C Fan; S Hou; Y Huang; S Yu", "journal": "", "ref_id": "b2", "title": "Exploring Deep Models for Practical Gait Recognition", "year": "2023" }, { "authors": "C Fan; J Liang; C Shen; S Hou; Y Huang; S Yu", "journal": "", "ref_id": "b3", "title": "OpenGait: Revisiting Gait Recognition Toward Better Practicality", "year": "2022" }, { "authors": "C Fan; Y Peng; C Cao; X Liu; S Hou; J Chi; Y Huang; Q Li; Z He", "journal": "", "ref_id": "b4", "title": "Gaitpart: Temporal part-based model for gait recognition", "year": "2020" }, { "authors": "W Li; S Hou; C Zhang; C Cao; X Liu; Y Huang; Y Zhao", "journal": "", "ref_id": "b5", "title": "An In-Depth Exploration of Person Re-Identification and Gait Recognition in Cloth-Changing Conditions", "year": "2023" }, { "authors": "X Li; Y Makihara; C Xu; Y Yagi; S Yu; M Ren", "journal": "", "ref_id": "b6", "title": "End-to-end model-based gait recognition", "year": "2020" }, { "authors": "R Liao; Z Li; S S Bhattacharyya; G York", "journal": "Neurocomputing", "ref_id": "b7", "title": "PoseMapGait: A model-based gait recognition method with pose estimation maps and graph convolutional networks", "year": "2022" }, { "authors": "R Liao; S Yu; W An; Y Huang", "journal": "Pattern Recognition", "ref_id": "b8", "title": "A modelbased gait recognition method with body pose and human prior knowledge", "year": "2020" }, { "authors": "B Lin; S Zhang; X Yu", "journal": "", "ref_id": "b9", "title": "Gait recognition via effective global-local feature representation and local temporal aggregation", "year": "2021" }, { "authors": "M Liu; J Yuan", "journal": "", "ref_id": "b10", "title": "Recognizing human actions as the evolution of pose estimation maps", "year": "2018" }, { "authors": "M S Nixon; J N Carter", "journal": "", "ref_id": "b11", "title": "Automatic recognition by gait", "year": "2006" }, { "authors": "Y Peng; K Ma; Y Zhang; Z He", "journal": "Multimedia Tools and Applications", "ref_id": "b12", "title": "Learning rich features for gait recognition by integrating skeletons and silhouettes", "year": "2023" }, { "authors": "C Shen; C Fan; W Wu; R Wang; G Q Huang; S Yu", "journal": "", "ref_id": "b13", "title": "LidarGait: Benchmarking 3D Gait Recognition With Point Clouds", "year": "2023" }, { "authors": "C Shen; S Yu; J Wang; G Q Huang; L Wang", "journal": "", "ref_id": "b14", "title": "A comprehensive survey on deep gait recognition: algorithms, datasets and challenges", "year": "2022" }, { "authors": "N Takemura; Y Makihara; D Muramatsu; T Echigo; Y Yagi", "journal": "IPSJ Transactions on Computer Vision and Applications", "ref_id": "b15", "title": "Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition", "year": "2018" }, { "authors": "T Teepe; A Khan; J Gilg; F Herzog; S Hörmann; G Rigoll", "journal": "IEEE", "ref_id": "b16", "title": "Gaitgraph: Graph convolutional network for skeleton-based gait recognition", "year": "2021" }, { "authors": "Y Wang; X Zhang; Y Shen; B Du; G Zhao; L Cui; H Wen", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b17", "title": "Event-Stream Representation for Human Gaits Identification Using Deep 
Neural Networks", "year": "2022" }, { "authors": "J Zheng; X Liu; W Liu; L He; C Yan; T Mei", "journal": "", "ref_id": "b18", "title": "Gait Recognition in the Wild with Dense 3D Representations and A Benchmark", "year": "2022" }, { "authors": "B Zhou; A Khosla; A Lapedriza; A Oliva; A Torralba", "journal": "", "ref_id": "b19", "title": "Learning deep features for discriminative localization", "year": "2016" }, { "authors": "Z Zhu; X Guo; T Yang; J Huang; J Deng; G Huang; D Du; J Lu; J Zhou", "journal": "", "ref_id": "b20", "title": "Gait recognition in the wild: A benchmark", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 331.2, 61.09, 216.51, 78.46 ], "formula_id": "formula_0", "formula_text": "Coordinates Skeleton Map R R/2 R R/2 H Skeleton Map (a) (b)" }, { "formula_coordinates": [ 3, 391.44, 286.76, 166.56, 75.72 ], "formula_id": "formula_1", "formula_text": "x k = x k -x core + R/2 y k = y k -y core + R/2 x k = x k -y min y max -y min × H y k = y k -y min y max -y min × H(1)" }, { "formula_coordinates": [ 3, 345.97, 364.81, 130.89, 14 ], "formula_id": "formula_2", "formula_text": "(x core , y core ) = ( x11+x12 2 ,y11+y12" }, { "formula_coordinates": [ 3, 367.89, 638.2, 190.11, 30.55 ], "formula_id": "formula_3", "formula_text": "J (i,j) = K k e -(i-x k ) 2 +(j-y k ) 2 2σ 2 × c k (2)" }, { "formula_coordinates": [ 4, 68.66, 342.66, 223.84, 30.03 ], "formula_id": "formula_4", "formula_text": "L (i,j) = N n e -D((i,j),S[n -,n + ]) 2 2σ 2 × min(c n -, c n + ) (3)" } ]
2023-11-22
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b5", "b26", "b33", "b43", "b37", "b53", "b55", "b13", "b16", "b23" ], "table_ref": [], "text": "Image matting, a critical process in extracting objects from images, has indispensable applications in numerous fields, including media production, virtual reality, and intelligent editing tools [6,27,34,44,52]. The process is mathematically modeled as a decomposition of an image I into its constituent foreground F and background B, combined via an alpha matte α. The composite model is given by:\nI i = α i F i + (1 -α i )B i , α i ∈ [0, 1],(1)\nwhere each pixel i entails solving for the transparency α i alongside the colors of F i and B i . The inherent challenge in matting stems from its illposed structure, in which each pixel's calculation involves estimating more variables than the available observations. Traditional approaches [30,38,48,52,54,56] partially mitigate this issue by leveraging auxiliary inputs, such as trimaps, to provide boundary conditions for the unknown regions. However, this method of one-off prediction has intrinsic limitations. It assumes a static relationship between the auxiliary input and the image content, which can lead to inaccuracies due to oversimplified assumptions about the spatial distribution of opacity within the unknown regions.\nTo overcome these limitations, we introduce a reconceptualized framework, DiffusionMat, that embraces a sequential refinement learning strategy. Unlike one-off prediction models that generate a single alpha matte in a direct manner, our approach progressively refines the matte through a series of informed iterations. This methodology aligns with the stochastic nature of diffusion processes, which have demonstrated exceptional capability in capturing complex data distributions in recent studies [3,14,17,24,42].\nDiffusionMat operates on the premise that the matting of unknown regions can be enhanced incrementally, benefiting from each iteration's feedback to correct and refine the predictions. This is in stark contrast to one-off predictions that lack the flexibility to revise and improve upon initial estimations. Our correction module plays a pivotal role in this iterative process, fine-tuning the matte at each step to ensure fidelity to the input image's structural nuances. Particularly, our method commences with the training of an unconditional diffusion model on a comprehensive dataset of alpha mattes, preparing it to understand and generate the distribution of matting details. We then inject random noise into a trimap in a controlled manner to mimic real-world unpredictability, enabling our model to explore a diverse set of potential alpha matte solutions. The noised trimap is fed into the diffusion model, which denoises it step by step, iteratively refining towards a high-quality alpha matte (see in Fig. 1). This stochastic process naturally incorporates variability, which we harness rather than constrain, to produce a range of plausible matte solutions.\nTo achieve a deterministic result aligned with the input image I, our framework incorporates a deterministic denoising process using Denoising Diffusion Implicit Models (DDIM) inversion [46]. This process inverts a ground truth alpha matte through the diffusion model to establish a reference trajectory for the denoising steps. 
A correction module, augmented with an image encoder, ensures that the intermediate outputs adhere to this trajectory, focusing particularly on the unknown regions as identified by the trimap. We further propose an Alpha Reliability Propagation (ARP) to regulate the intermediate denoised results by the determined regions. This operation enhances the model's focus and efficiency in learning, allowing for rapid refinement within the most ambiguous regions of the trimap.\nIn summary, our primary contributions are as follows: • We present the first attempt to utilize diffusion models for image matting. This marks a novel step in the context of matting tasks, demonstrating the potential of diffusion models in this domain. • We reconceptualize image matting as a process of transforming trimap guidance into precise alpha mattes, employing a novel correction strategy to align random denoising trajectories with the GT ones. • We propose an Alpha Reliability Propagation module to accelerate and regulate the sequential learning process. • Our extensive experiments on both portrait and nonportrait matting datasets validates that DiffusionMat sets a new benchmark for state-of-the-art performance. Through this approach, we demonstrate that the proposed DiffusionMat not only addresses the core challenges of image matting but also opens up a new avenue to apply sequential learning to complex vision tasks." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b12", "b17", "b48", "b0", "b25", "b33", "b37", "b55", "b13", "b16", "b23", "b35", "b38", "b52", "b56", "b10", "b9", "b20" ], "table_ref": [], "text": "Natural Image Matting. Image matting techniques can be broadly divided into sampling-based and affinity-based methods. Sampling-based methods [13,18,45,49] estimate alpha values by drawing color samples from foreground and background regions. In contrast, affinity-based methods [1,9,26] extrapolate alpha values into unknown regions based on the relational affinities between pixels. Recent advancements have seen the application of deep learning to image matting. Xu et al. [52] were pivotal in introducing a dedicated image-matting dataset alongside an encoderdecoder network tailored for this task. Lutz et al. [35] further enhanced matting frameworks by integrating generative adversarial networks, while Lu et al. [34] developed In-dexNet, which dynamically adapts to feature maps for index generation. Li et al. [30] introduced the concept of guided contextual attention for the effective global communication of opacity information. Building upon the transformer architecture, Park et al. [38] implemented a trimap prior token to leverage the self-attention mechanism for improved matting. Cai et al. [5] focused on the challenges presented by transparent objects, employing a design with an expansive receptive field. Furthermore, Yu et al. [56] proposed a concatenate network that enhances the prediction of the unknown region. Diverging from the conventional end-toend matting paradigm, our work redefines the process as a transformation from noisy trimap to alpha matte, capitalizing on the generative capacities of diffusion models. This approach not only acknowledges but actively engages with the generative aspects of diffusion to inform the matting process.\nDiffusion Models. Diffusion models have recently risen to prominence for their superior generative performance in various domains [3,14,17,24,42]. 
These models function by iteratively refining from random noise to create coherent samples, showcasing impressive capabilities in image synthesis and restoration. Notable implementations include GDP [16], which uses pre-trained diffusion models for image restoration and enhancement, and SDEdit [36], which synthesizes realistic images through stochastic differential denoising. Stable Diffusion further extends this utility to diverse applications such as image and video manipulation, language-to-image translation, and synthetic media generation [39,53,57]. In this paper, we harness the generative capabilities of pre-trained diffusion models to address image matting, a fundamental perceptual task.\nDiffusion Models for Perception Tasks. The versatility of diffusion models, traditionally celebrated for generative tasks, is now being explored for perceptual challenges. Segdiff [2] marked an initial foray into diffusion-driven segmentation, while Chen et al. [11] expanded the utility with the Bit Diffusion model [12], adept at panoptic segmentation across still images and videos. In a similar vein, Boah et al. [23] combined diffusion models with adversarial training for enhanced vascular segmentation accuracy. Baranchuk et al. [4] investigated the potential of diffusion model activations in capturing the semantic segmentation of images. Building on this, DiffusionDet [10] introduced a paradigm shift in object detection by conceptualizing it as a denoising diffusion sequence. The recent VPD model by Zhao et al. [59] exploits a pre-trained text-to-image diffusion model's semantic prowess for a range of visual perception tasks, and DDP by Ji et al. [21] applies it to various dense visual prediction scenarios. Our work pioneers the application of diffusion models for image matting, leveraging their generative strengths to advance this intricate perceptual task." }, { "figure_ref": [], "heading": "Approach", "publication_ref": [], "table_ref": [], "text": "transforming trimap guidance into precise alpha mattes In this section, we present DiffusionMat, a novel diffusionbased framework for image matting. Our key idea is to transform the trimap guidance into precise alpha matte by a novel correction strategy. We first present the background of diffusion models, and then describe how we correct the denoised results via deterministic denoising and our alpha reliability propagation." }, { "figure_ref": [], "heading": "Preliminaries: Diffusion Models", "publication_ref": [ "b13", "b16", "b23" ], "table_ref": [], "text": "Diffusion models are classes of likelihood-based models that learn the distribution space in a gradual denoising process [3, 14,17,24,42]. A diffusion model consists of a noising process and a reverse denoise sampling process. In the forward process, the diffusion model adds the noise to the data gradually via a Markov chain. Each forward step can be represented as:\nq(x t |x t-1 ) = N 1 -β t x t-1 , β t I ,\nwhere {β t } T t=0 are variance schedule. 
The latent variable x_t can be written in closed form as\n$$x_t = \sqrt{\alpha_t}\, x_0 + \sqrt{1 - \alpha_t}\, \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I),$$\nwhere $\alpha_t := \prod_{s=1}^{t} (1 - \beta_s)$. During training, the diffusion model ε_θ is optimized to predict ε from x_t with the following objective:\n$$L_{DM} = \lVert \epsilon - \epsilon_\theta(x_t, t) \rVert_2^2.$$\nDuring inference, data can be sampled using the following Denoising Diffusion Probabilistic Models (DDPM) [19] reverse diffusion process:\n$$x_{t-1} = \frac{1}{\sqrt{1 - \beta_t}} \left( x_t - \frac{\beta_t}{\sqrt{1 - \alpha_t}}\, \epsilon_\theta(x_t, t) \right) + \sigma_t z,$$\nwhere z ∼ N(0, I). Moreover, Song et al. propose the DDIM [46] denoising process with fewer steps; DDIM sampling is deterministic, which allows us to fully invert real data to its latent variables." }, { "figure_ref": [ "fig_2" ], "heading": "Procedure", "publication_ref": [ "b35" ], "table_ref": [], "text": "Given an input image I along with its trimap guidance m, our objective is to derive the corresponding alpha matte α. Diverging from existing one-off prediction approaches, we introduce a sequential refinement learning strategy that transforms a noised trimap into a clean alpha matte.\nThe trimap-to-alpha transformation can be seen as a conditional image generation process. SDEdit [36] provides a simple pipeline for such generation: it synthesizes output images under given guidance by iteratively denoising through a stochastic differential equation. Here, we present the pipeline that applies SDEdit to synthesize an alpha matte under trimap guidance. As illustrated in Fig. 2, SDEdit begins with the trimap guidance m, which is initially perturbed with random noise to create a noised mask m_T at time step T. Subsequently, this corrupted trimap m_T undergoes denoising in the diffusion model's sampling process through iterative application of Eq. 3, leading to the final alpha matte denoted as α_SDE. This process can be represented as follows:\n$$m_T = \sqrt{\alpha_T}\, m + \sqrt{1 - \alpha_T}\, \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I), \tag{2}$$\n$$m_{T-1} = \sqrt{\alpha_{T-1}} \underbrace{\left( \frac{m_T - \sqrt{1 - \alpha_T}\, \epsilon_\theta(m_T; T)}{\sqrt{\alpha_T}} \right)}_{\text{predicted clean alpha matte in one step}} + \underbrace{\sqrt{1 - \alpha_{T-1}}\, \epsilon_\theta(m_T; T)}_{\text{direction pointing to } m_T}, \tag{3}$$\nwhere ε_θ is the unconditional diffusion model pre-trained on a large-scale alpha matte dataset, the first term is the clean alpha matte predicted in one step, and the second term is the direction pointing to m_T." }, { "figure_ref": [ "fig_3" ], "heading": "Deterministic Denoising", "publication_ref": [ "b36", "b36" ], "table_ref": [], "text": "In SDEdit, a noised guidance trimap can be denoised into an arbitrary alpha matte because of the randomness introduced by the random noise. However, image matting, being a perception task, has only one deterministic alpha matte. To obtain this precise alpha matte, we correct the intermediate denoised results under the supervision of the GT inverted guidance [37]: given the GT alpha matte α, we invert it into the pre-trained diffusion model via DDIM inversion and obtain a deterministic inversion trajectory, which serves as step-wise supervision during the correction.
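To make these trajectories concrete, here is a minimal PyTorch-style sketch of the three ingredients just described: perturbing the trimap to m_T (Eq. 2), deterministic DDIM-style denoising with the pre-trained matte diffusion model ε_θ (Eq. 3), and DDIM inversion of a ground-truth alpha matte to obtain the reference trajectory. Function names, the noise-schedule handling, and the timestep spacing are illustrative assumptions, not the authors' released code; alpha_bar corresponds to the cumulative α_t defined above.
```python
import torch

@torch.no_grad()
def perturb_trimap(trimap, alpha_bar, T):
    """Eq. 2: diffuse the trimap guidance m to timestep T in closed form."""
    noise = torch.randn_like(trimap)
    return alpha_bar[T].sqrt() * trimap + (1 - alpha_bar[T]).sqrt() * noise

def ddim_step(eps_model, m_t, t, t_prev, alpha_bar):
    """Eq. 3: one deterministic DDIM step from timestep t down to t_prev.
    Returns the next state and the one-step clean-alpha prediction."""
    eps = eps_model(m_t, t)        # pass t as a tensor if the model requires it
    pred_x0 = (m_t - (1 - alpha_bar[t]).sqrt() * eps) / alpha_bar[t].sqrt()
    m_prev = alpha_bar[t_prev].sqrt() * pred_x0 + (1 - alpha_bar[t_prev]).sqrt() * eps
    return m_prev, pred_x0

@torch.no_grad()
def ddim_invert(eps_model, x0, timesteps, alpha_bar):
    """Run the DDIM update in reverse (clean -> noisy) to obtain the
    deterministic trajectory of a ground-truth alpha matte."""
    traj, x = [x0], x0
    for t_prev, t in zip(timesteps[:-1], timesteps[1:]):   # e.g. timesteps = [0, 50, ..., 250]
        eps = eps_model(x, t_prev)
        pred_x0 = (x - (1 - alpha_bar[t_prev]).sqrt() * eps) / alpha_bar[t_prev].sqrt()
        x = alpha_bar[t].sqrt() * pred_x0 + (1 - alpha_bar[t]).sqrt() * eps
        traj.append(x)
    return traj                                            # states aligned with `timesteps`
```
At inference time, one would call perturb_trimap once (the paper's default is T = 250) and then chain ddim_step over a handful of timesteps (5 sampling steps in the reported setting), correcting each intermediate state as described next.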
Different from [37], we use a learnable correction module for the revision.\nSpecifically, starting with the GT alpha matte α, we perform an inversion process to map it onto the pre-trained diffusion model via DDIM inversion, and yields a deterministic inversion trajectory.\nIn each denoised timestep t, we correct the intermediate denoised result m t using an image encoder E θ and a correction module C θ . As illustrated in Fig. 3, we initiate the process by encoding the image I with the image encoder e θ , resulting in the image feature f I . Subsequently, we concatenate f I with m t and feed this combined input to the correction module c θ , generating the corrected denoising result m corr t . These procedures can be expressed as follows:\nf I = E θ (I), m corr t = C θ (cat(f I ; m t ); t),\nwhere cat(•; •) denotes the concatenation operator." }, { "figure_ref": [], "heading": "Alpha Reliability Propagation", "publication_ref": [], "table_ref": [], "text": "Learning such a correction for all pixels is challenging and unnecessary since the difference between the trimap and alpha matte only exists in the unknown region. To alleviate the learning complexity and concentrate the module's efforts on the unknown regions, we introduce the \"Alpha Re-liability Propagation\" (ARP) module that regulates the intermediate denoised results by the known regions.\nParticularly, we first invert the trimap m to diffusion model via DDIM inversion and get the inversion trajectory m inv t , then we replace the known regions' values on m corr t with those in m ddim t according the known mask m known , that is:\nm ARP t = m corr t × (1 -m known ) + m inv t × m known .\nThen we denoise m ARP t to its next timestep following Eq. 3. Note that the first term in Eq. 3 can be regard as the alpha matte α0\nt that predicted in one step directly." }, { "figure_ref": [], "heading": "Loss Functions", "publication_ref": [ "b37", "b55" ], "table_ref": [], "text": "In this section, we provide the loss functions for training DiffusionMat, the first one is the DDIM inversion loss L Inv for correcting the intermediate denoised results. Let {α T , ..., α t , ...α 0 } denotes the deterministic inversion trajectory of GT alpha matte α, in each timestep t, we define the DDIM inversion loss L Inv as:\nL Inv = ∥α t -m ARP t ∥ 2 2 .(4)\nMoreover, we also develop another loss term L α by aligning the distance between one step alpha matte α0 t with GT alpha matte α:\nL α = ∥α -α0 t ∥ 1 .(5)\nFor those matting datasets that have GT foreground and background, we also also introduce the composition loss that wild used in many image matting works [30, 38,52,56]. We first composite an image I comp t using α0 t following Eq. 1, then we minimize the absolute difference between the composited image and the GT image I: \nL Comp = ∥I -I comp t ∥ 1 .(6)\nL Final = λ 1 L Inv + λ 2 L α + λ 3 L Comp ,(7)\nwhere {λ i } denote the weight factors for balancing loss terms. It is noticed that L Final can be applied on arbitrary timesteps. Different from classical diffusion models trained and inference in two different pipelines, our DiffusionMat is trained in the inference process of diffusion. Once trained, we can predict the alpha matte with the same pipeline." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b32", "b30", "b14", "b14", "b30", "b40" ], "table_ref": [], "text": "Datasets. 
To evaluating the effectiveness of our Diffusion-Mat, we conduct experiment on following image matting datasets.\n• P3M-10K dataset [28] is the largest privacy-preserving portrait matting dataset, it contains 10,421 highresolution real-word portrait images and the corresponding alpha matte. The dataset is divided into one training set and two test datasets. The training set contains 9,421 face-blurred portrait-matte pairs. The first test set has 500 face-blurred portrait-matte pairs, which are denoted as P3M-500-P. Another test set also contains 500 portrait-matte pairs, but the portraits are not blurred faces, and it is denoted as P3M-500-NP. • Human-2K dataset [33] provides 2,100 portrait foreground image and matte pairs. Then the foreground images are composited with background images from MS COCO [31] and Pascal VOC [15] datasets, which result in 43,100 training samples and 2,000 testing samples. • Composition-1k dataset [52].\nBeyond above portrait matting datasets, we also conduct experiment on Composition-1k dataset, which contains not only portrait but also object images. It contains 431,000 training image-matting pairs which composited with background from Pascal VOC [15] datasets. The test dataset contains 1,000 composited images which composited with background image from MS COCO [31] dataset. Evaluation Metrics. We use four metrics to evaluate the alpha matting results, namely, the mean square error (MSE), the sum of absolute differences (SAD), the gradient error (Grad), and connectivity errors (Conn) [41]. Particularly, MSE and SAD measure the statistical differences between predicted and GT alpha mattes, Conn evaluates the disconnected foreground objects, and Grad focuses on the oversmoothed or erroneous discontinuities in the alpha matte. Compared with MSE and SAD, Conn and Grad are more related to human perception." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b42", "b6", "b24" ], "table_ref": [], "text": "We implement the proposed framework in Pytorch on a PC with Nvidia GeForce RTX 3090. We first follow [19] that uses U-Net architecture [43] as the diffusion model's structure. Since the resolution of alpha matte is relatively high, we remove the \"att block\" of U-Net to save GPU memory. For two portrait matting datasets, we use the diffusion model that trains on their mixed alpha mattes. The diffusion model fine-tuned on the Composition-1k dataset is utilized on this object matting dataset. We use the Swin-Unet as the image encoder [7], and the correction network also has the same U-Net structure as the diffusion model. The framework is optimized by the Adam optimizer [25] with the learning rate of 1e -4 . We empirically set the balancing weights in Eq. 7 as λ 1 = 2, λ 2 = 1, and λ 3 = 1. We crop the input images and trimaps to the resolution of 512×512 for training the framework. During the inference, we predict the alpha matte of the full-resolution images. " }, { "figure_ref": [ "fig_4" ], "heading": "Evaluations", "publication_ref": [ "b55", "b55", "b32", "b46", "b37" ], "table_ref": [], "text": "P3M-10K. We first compare DiffusionMat with state-ofthe-art models on P3M-10K dataset in Tab. 1. Generally speaking, guidance-based approaches tend to outperform guidance-free methods. This is primarily due to the auxiliary guidance effectively reducing learning ambiguity. Moreover, our DiffusionMat achieves comparable results to its competitors on two test sets. This validates the effectiveness of our approach. 
In particular, DiffusionMat yields lower Grad and Conn metrics, indicating that our predicted alpha mattes are more perceptually favorable to humans. This can be attributed to our utilization of the generative prior from pre-trained diffusion models, which effectively eliminates discontinuous regions that are rarely encountered in portrait alpha mattes. This further underscores the effectiveness of DiffusionMat in sequential refinement.\nThe qualitative comparison results are shown in Fig. 4. It is evident that DiffusionMat produces the most refined alpha mattes. In the 2 nd sample, DiffusionMat accurately captures the semi-transparent region around the \"plant\", while other methods failed on capturing it. Notably, our approach also demonstrates robustness against inaccurate trimaps. In the 5 th sample, where the trimap inaccurately marks regions in the background as unknown, most trimapbased methods erroneously predict alpha mattes around the object. This error even affects the mask-guided method MG [56]. Conversely, DiffusionMat remains unaffected by the imprecise trimap, accurately predicting the alpha matte exclusively around the portrait. We attribute this to the strong generative prior of the diffusion model, which has been trained on extensive portrait matte datasets, naturally preventing predictions on background objects.\nHuman-2K. We present the quantitative comparison on the Human-2K dataset in Tab. 2. We can see that Diffusion-Mat also works well on this compositional dataset, which evidences its generalization ability. Same as the P3M-10K dataset, our predicted alpha mattes achieve better performance on Grad and Conn metrics, which is more favorable Trimap 33.5 0.007 14.5 29.9 MG [56] Mask 31.5 0.007 13.5 27.3 TIMINet [33] Trimap 29.1 0.006 11.5 25.4 SIM [47] Trimap 28.0 0.006 10.8 24.8 TransMatting [5] Trimap 25.0 0.005 9.7 20.2 MatteFormer [38] Trimap 23.8 0.004 8.7 18.9 DiffusionMat Trimap 22.8 0.004 6.8 18.4\nto human perceptual. Composition-1k. The quantitative comparison on the Composition-1k dataset is presented in Tab. 3. Our Dif-fusionMat also outperforms the state-of-the-art methods on this dataset, especially on human perceptual-related Grad and Conn metrics, which evidences that our method is not limited on portrait matting dataset, but also generalize well on the object matting dataset." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [ "b9" ], "table_ref": [], "text": "In this section, we perform ablation studies to evaluate Dif-fusionMat on the Human-2K dataset. Then we develop various variants with different settings and the modification of loss functions.\nw/ vs. w/o Diffusion. To demonstrate the effectiveness of introducing the pre-trained diffusion model, we propose a vanilla variant (w/o Diffusion) by removing the diffusion model. In this variant, we concatenate the feature f I with the trimap along channels and feed them directly to the correction network for alpha matte prediction. The comparison between w/ Diffusion and w/o Diffusion can be found in Tab. 4. The w/o Diffusion variant exhibits poor performance on the Human-2K dataset, while w/ Diffusion (Dif-fusionMat) achieves significantly better results. This suggests that our performance gain does not originate from the image encoder or correction network but rather from our sequential refinement trimap-to-alpha framework.\nQuantitative comparisons are provided in Fig. 5, where the w/o Diffusion variant only produces a coarse alpha matte with significant detail loss. 
Conversely, w/ Diffusion captures fine details effectively. w/ vs. w/o ARP. To highlight the effectiveness of our Alpha Reliability Propagation, we trained DiffusionMat by removing this module. The comparison is presented in Tab. 5, where the variant w/o ARP exhibits inferior performance across all four metrics. ARP module plays a vital role in propagating reliable knowledge from the trimap. It not only eases the learning process but also ensures that DiffusionMat focuses on the unknown regions during alpha matte prediction, resulting in more accurate predictions.\nChoice of Perturbation Timestep T . We further conduct an ablation study to evaluate the impact of the perturbation timestep (T ), which controls the noise level added to the trimap and subsequent denoising. We set different values of T with 3 sampling steps in this ablation study and the comparison is shown in Tab. 6. Variant T = 1, 000 performs the worst performance, which indicates a higher level of trimap perturbation, providing less useful guidance to the denoising process. Conversely, the variant with T = 100 also showed suboptimal performance compared to T = 250, indicates fewer perturbation cannot transform the trimap to an alpha matte successfully. Notably, the T = 250 variant yields the best performance across all metrics and we use it as our default setting.\nChoice of Sampling Steps. We evaluated the performance and computational costs of DiffusionMat with vary- ing numbers of sampling steps with the input resolution of 512×512. As shown in Tab. 7, the increased sampling steps results in a more accurate alpha matte. However, larger sampling steps also incur higher computational costs, leading to elevated FLOPs values. Notably, compared to the variant with 5 steps, using 10 steps does not yield a significant improvement in accuracy, yet it doubles the computational resources cost. Considering the trade-off between accuracy and efficiency, we set sampling steps as 5.\nAblation Study on Different Losses. To evaluate the impact of different loss terms, we developed three variants: 1) w/o L Inv by removing the DDIM inversion loss, 2) w/o L α by omitting the alignment between one-step alpha matte and GT alpha matte, and 3) w/o L Comp by eliminating the composition loss. The comparison is detailed in Tab. 8. It is evident that the variant w/o L Inv presents inferior performance across all metrics. This highlights the crucial role played by the DDIM inversion loss in DiffusionMat, as it corrects the denoised results with the GT inverted trajectory. Additionally, both L Comp and L α contributed to improved performance.\nRandom seeds. To evaluating the stability of Diffu-sionMat under varying random seeds, we follow Diffusion-Det [10] that trains 5 models with different initial seeds and evaluate their performance with 10 different random seeds. As illustrated in Fig. 6, DiffusionMat demonstrates consistent mean values on the MSE metric, signifying its robustness to different sources of noise. This stability can be attributed to our deterministic denoising approach, which effectively reduces randomness during the denoising process.\nVisualization of Denoised Results. We provide visualizations of the original and corrected denoised results at different timesteps in Fig. 7. The original denoised results primarily preserve global semantic structures but fail to capture local details. In contrast, the corrected denoised results effectively capture these local details, resulting in an accu- rate alpha matte. 
This highlights the effectiveness of our sequential refinement learning strategy.\nRandom Seeds MSE" }, { "figure_ref": [], "heading": "Conclusion and Discussions", "publication_ref": [], "table_ref": [], "text": "In this paper, we present DiffusionMat, a framework that transforms a noise-injected trimap into a clean alpha matte by treating image matting as a sequential refinement learning process. Initially, we perturb the trimap with noise and subsequently denoise it using a pre-trained diffusion model. We introduce a correction module designed to adjust each intermediate denoised result. Furthermore, we propose the Alpha Reliability Propagation module, which enhances the efficacy of the correction module, particularly in unknown regions. Our experiments across various image matting datasets demonstrate DiffusionMat's superior performance.\nThe primary limitation of our DiffusionMat lies in its computational efficiency. It requires more processing time compared to traditional single-pass image matting methods, taking approximately 0.6 seconds for an input resolution of 512 × 512. However, these drawbacks could potentially be mitigated through the development of more efficient diffusion models, which is a direction we aim to explore in future research." }, { "figure_ref": [ "fig_6" ], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "We provide the pseudo code for training and inference our DiffusionMat in Alg. 1 and Alg. 2.\nIn addition, more qualitative comparison with related works on Composition-1k dataset [52] can be seen in Fig. 8. Our diffusionmat is robust to semitransparent areas, such as glass. Moreover, it captures the local fine details effectively.\nWe also apply our DiffusionMat on real video matting dataset provided by [50], it consists of 19 human videos results in 711 frames. We use our DiffusionMat trained on P3M dataset [28] and evaluate on this video matting dataset. The quantitative comparison can be seen in Tab. 9, though our DiffusionMat is trained on image-matting pairs, it also achieves state-of-the-art performance, which demonstrates that effectiveness of our DiffusionMat. " } ]
In this paper, we introduce DiffusionMat, a novel image matting framework that employs a diffusion model for the transition from coarse to refined alpha mattes. Diverging from conventional methods that utilize trimaps merely as loose guidance for alpha matte prediction, our approach treats image matting as a sequential refinement learning process. This process begins with the addition of noise to trimaps and iteratively denoises them using a pre-trained diffusion model, which incrementally guides the prediction towards a clean alpha matte. The key innovation of our framework is a correction module that adjusts the output at each denoising step, ensuring that the final result is consistent with the input image's structures. We also introduce the Alpha Reliability Propagation, a novel technique designed to maximize the utility of available guidance by selectively enhancing the trimap regions with confident alpha information, thus simplifying the correction task. To train the correction module, we devise specialized loss functions that target the accuracy of the alpha matte's edges and the consistency of its opaque and transparent regions. We evaluate our model across several image matting benchmarks, and the results indicate that DiffusionMat consistently outperforms existing methods.
DiffusionMat: Alpha Matting as Sequential Refinement Learning
[ { "figure_caption": "Figure 1 .1Figure 1. Our DiffusionMat transforms alpha matting into a sequential refinement process, progressing from a noise-injected trimap to a precisely defined alpha matte (see bottom-right). The resulting composited foregrounds are showcased against a green screen background.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "where z ∼ N (0, I). Moreover, Song et al. propose the DDIM [46] denoising process with fewer steps. The DDIM sampling is a deterministic process, which allows us to fully invert the real data to its latent variables.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure2. Pipeline of SDEdit[36] for synthesizing alpha matte under the trimap guidance.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Pipeline of our DiffusionMat. Given a trimap guidance m, we first perturb it with random noise ϵ to a noised mask mT at time step T . In each denoising timestep t, we propose the deterministic denoising that corrects the immediate results mt with an image encoder E θ and correction module C θ . We further propose an Alpha Reliability Propagation that regulates the intermediate denoised results by the determined unknown regions. Then the output result m ARP tis aligned with the corresponding GT inverted results αt. We also predict a clean alpha matte α0 t in each step and minimize its distance with the GT α. Note that loss Lα and LComp are omitted for simplicity.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. The qualitative comparison results on P3M dataset, our method produces the most refined alpha mattes and is robustness against inaccurate trimaps. Best viewed by zooming in.Table 4. Ablation study on w/ Diffusion vs. w/o Diffusion model. Variants SAD↓ MSE↓ Grad↓ Conn↓ w/o Diffusion 17.34 0.0709 51.33 16.38 w/ Diffusion 4.04 0.0020 1.66 2.66", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 .Figure 7 .67Figure 6. We train DiffusionMat with 5 different random seeds and evaluate each one 10 times. They get the very similar mean values on MSE metric.", "figure_data": "", "figure_id": "fig_5", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. The qualitative comparison results on Composition-1k dataset. Best viewed by zooming in.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Quantitative evaluation on portrait matting with state-of-the-art methods on P3M-10K dataset. \"Trimap * \" denotes the GT trimap. The lower the better for all metrics, and the best results are marked in Bold. 
Now we define the final loss L Final for training the Dif-fusionMat:", "figure_data": "MethodsMetricsGuidanceSAD↓P3M-500-P MSE↓ Grad↓Conn↓SAD↓P3M-500-NP MSE↓ Grad↓Conn↓LF [58]None42.950.019142.1918.8032.590.013131.9319.50HATT [40]None25.990.005414.9125.2930.530.007219.8827.42SHM [8]None21.560.010021.2417.5320.770.009320.3017.09GFM [29]None13.200.005012.5817.7515.500.005614.8218.03P3M-Net [28]None8.730.00268.2213.8811.230.003510.3512.51MODNet [22]None13.310.003816.5010.8816.700.005115.2913.81DIM [52]Trimap *4.890.00094.489.685.320.00094.707.70AlphaGAN [35]Trimap5.270.0112--5.240.0112--GCA [30]Trimap4.360.008810.045.034.350.00898.275.26IndexNet [34]Trimap4.690.00078.944.205.360.00077.194.71MG [56]Mask5.600.001211.275.166.230.00129.545.59MatteFormer [38]Trimap4.540.00089.324.084.910.00077.194.27DiffusionMatTrimap4.580.00078.673.895.030.00076.294.15", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Quantitative evaluation on portrait matting with state-ofthe-art methods on Human-2K dataset. \"Trimap * \" denotes the GT trimap. The lower the better for all metrics. The best results are marked in Bold.", "figure_data": "MethodsGuidance SAD↓ MSE↓ Grad↓ Conn↓DIM [52]Trimap * 7.53 0.008 6.40 6.70IndexNet [34] Trimap 6.55 0.006 4.50 5.50GCA [30]Trimap 5.18 0.004 3.00 4.00TIMI-Net [33] Trimap 4.20 0.0026 2.06 2.95MODNet [22] None7.80 0.0080 7.20 7.40MG [56]Mask4.40 0.0040 2.50 3.20SPGM [51]Mask4.00 0.0020 2.00 2.80DiffusionMat Trimap 4.04 0.0020 1.66 2.66", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Quantitative evaluation on image matting with state-ofthe-art methods on Composition-1k dataset. \"Trimap * \" denotes the GT trimap. The lower the better for all metrics. The best results are marked in Bold.", "figure_data": "MethodsGuidance SAD↓ MSE↓ Grad↓ Conn↓Learning Based [60] None 113.9 0.048 91.6 122.2Closed-Form [26]None 168.1 0.091 126.9 167.9KNN [9]None 175.4 0.103 124.1 176.4DIM [52]Trimap * 50.4 0.014 31.0 50.8AlphaGAN [35]Trimap 52.4 0.030 38.0-IndexNet [34]Trimap 45.8 0.013 25.9 43.7HATT [40]Trimap 44.0 0.007 29.3 46.4AdaMatting [6]Trimap 41.7 0.010 16.8-SampleNet [48]Trimap 40.4 0.010--Fine-Grained [32] Trimap 37.6 0.009 18.3 35.4Context-Aware [20] Trimap 35.8 0.008 17.3 33.2GCA [30]Trimap 35.3 0.009 16.9 32.5HDMatt [55]", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation study of w/ vs. 
w/o Alpha Reliability Propagation module.", "figure_data": "VariantsSAD↓ MSE↓ Grad↓ Conn↓w/o ARP7.070.00623.825.57w/ ARP4.040.00201.662.66", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation study on corrupt timestep T with fixed 3 sampling steps.", "figure_data": "VariantsSAD↓ MSE↓ Grad↓ Conn↓T = 1005.520.00452.964.16T = 2505.340.00442.713.99T = 5005.330.00502.843.74T = 7505.460.00472.774.03T = 1, 0009.400.01267.158.04", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Ablation study on different sampling steps with the perturbation timestep T = 250.", "figure_data": "VariantsSAD↓ MSE↓ Grad↓ Conn↓ FLOPs↓step = 16.03 0.0050 3.414.78 517.96Gstep = 35.34 0.0044 2.713.99 1455.54Gstep = 54.04 0.0020 1.662.66 2393.12Gstep = 10 3.99 0.0020 1.592.57 4737.07G", "figure_id": "tab_5", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Ablation study on the modification of loss functions.", "figure_data": "VariantsSAD↓ MSE↓ Grad↓ Conn↓w/o L Inv4.960.00342.513.44w/o L α4.340.00241.842.91w/o L Comp4.770.00271.983.19DiffusionMat4.040.00201.662.66", "figure_id": "tab_6", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Quantitative evaluation on image matting with state-ofthe-art methods on video matting dataset. The lower the better for all metrics. The best results are marked in Bold.", "figure_data": "MethodsMSE↓ SAD↓ Grad↓ Conn↓DIM [52]13.3298.92129.188.56IndexNet [34]10.9195.07120.073.05LF [58]29.61141.4168.5131.7CAM [20]11.62101.0123.978.21CRGNN [50]9.22473.5112.158.49DiffusionMat7.78970.3101.052.13", "figure_id": "tab_7", "figure_label": "9", "figure_type": "table" } ]
Yangyang Xu; Shengfeng He; Wenqi Shao; Kwan-Yee K Wong; Yu Qiao; Ping Luo
[ { "authors": "Yagiz Aksoy; Tunc Ozan Aydin; Marc Pollefeys", "journal": "", "ref_id": "b0", "title": "Designing effective inter-pixel information flow for natural image matting", "year": "2017" }, { "authors": "Tomer Amit; Tal Shaharbany; Eliya Nachmani; Lior Wolf", "journal": "", "ref_id": "b1", "title": "Segdiff: Image segmentation with diffusion probabilistic models", "year": "2021" }, { "authors": "Jacob Austin; Jonathan Daniel D Johnson; Daniel Ho; Rianne Tarlow; Van Den; Berg", "journal": "NeurIPS", "ref_id": "b2", "title": "Structured denoising diffusion models in discrete state-spaces", "year": "2021" }, { "authors": "Dmitry Baranchuk; Ivan Rubachev; Andrey Voynov; Valentin Khrulkov; Artem Babenko", "journal": "", "ref_id": "b3", "title": "Label-efficient semantic segmentation with diffusion models", "year": "2022" }, { "authors": "Huanqia Cai; Fanglei Xue; Lele Xu; Lili Guo", "journal": "", "ref_id": "b4", "title": "Transmatting: Enhancing transparent objects matting with transformers", "year": "2022" }, { "authors": "Shaofan Cai; Xiaoshuai Zhang; Haoqiang Fan; Haibin Huang; Jiangyu Liu; Jiaming Liu; Jiaying Liu; Jue Wang; Jian Sun", "journal": "", "ref_id": "b5", "title": "Disentangled image matting", "year": "2019" }, { "authors": "Yueyue Hu Cao; Joy Wang; Dongsheng Chen; Xiaopeng Jiang; Qi Zhang; Manning Tian; Wang", "journal": "", "ref_id": "b6", "title": "Swin-unet: Unet-like pure transformer for medical image segmentation", "year": "2022" }, { "authors": "Quan Chen; Tiezheng Ge; Yanyu Xu; Zhiqiang Zhang; Xinxin Yang; Kun Gai", "journal": "", "ref_id": "b7", "title": "Semantic human matting", "year": "2018" }, { "authors": "Qifeng Chen; Dingzeyu Li; Chi-Keung Tang", "journal": "IEEE TPAMI", "ref_id": "b8", "title": "Knn matting", "year": "2013" }, { "authors": "Shoufa Chen; Peize Sun; Yibing Song; Ping Luo", "journal": "", "ref_id": "b9", "title": "Diffusiondet: Diffusion model for object detection", "year": "2023" }, { "authors": "Ting Chen; Lala Li; Saurabh Saxena; Geoffrey Hinton; David J ", "journal": "", "ref_id": "b10", "title": "Fleet. 
A generalist framework for panoptic segmentation of images and videos", "year": "2023" }, { "authors": "Ting Chen; Ruixiang Zhang; Geoffrey Hinton", "journal": "ICLR", "ref_id": "b11", "title": "Analog bits: Generating discrete data using diffusion models with self-conditioning", "year": "2023" }, { "authors": "Yung-Yu Chuang; Brian Curless; Richard David H Salesin; Szeliski", "journal": "", "ref_id": "b12", "title": "A bayesian approach to digital matting", "year": "2001" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "NeurIPS", "ref_id": "b13", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Mark Everingham; Luc Van Gool; K I Christopher; John Williams; Andrew Winn; Zisserman", "journal": "IJCV", "ref_id": "b14", "title": "The pascal visual object classes (voc) challenge", "year": "2010" }, { "authors": "Ben Fei; Zhaoyang Lyu; Liang Pan; Junzhe Zhang; Weidong Yang; Tianyue Luo; Bo Zhang; Bo Dai", "journal": "", "ref_id": "b15", "title": "Generative diffusion prior for unified image restoration and enhancement", "year": "2023" }, { "authors": "Shuyang Gu; Dong Chen; Jianmin Bao; Fang Wen; Bo Zhang; Dongdong Chen; Lu Yuan; Baining Guo", "journal": "", "ref_id": "b16", "title": "Vector quantized diffusion model for text-to-image synthesis", "year": "2022" }, { "authors": "Kaiming He; Christoph Rhemann; Carsten Rother; Xiaoou Tang; Jian Sun", "journal": "", "ref_id": "b17", "title": "A global sampling method for alpha matting", "year": "2011" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "NeurIPS", "ref_id": "b18", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Qiqi Hou; Feng Liu", "journal": "", "ref_id": "b19", "title": "Context-aware image matting for simultaneous foreground and alpha estimation", "year": "2019" }, { "authors": "Yuanfeng Ji; Zhe Chen; Enze Xie; Lanqing Hong; Xihui Liu; Zhaoqiang Liu; Tong Lu; Zhenguo Li; Ping Luo", "journal": "", "ref_id": "b20", "title": "Ddp: Diffusion model for dense visual prediction", "year": "2023" }, { "authors": "Zhanghan Ke; Jiayu Sun; Kaican Li; Qiong Yan; Rynson W H Lau", "journal": "AAAI", "ref_id": "b21", "title": "Modnet: Real-time trimap-free portrait matting via objective decomposition", "year": "2022" }, { "authors": "Boah Kim; Yujin Oh; Jong Chul; Ye ", "journal": "ICLR", "ref_id": "b22", "title": "Diffusion adversarial representation learning for self-supervised vessel segmentation", "year": "2023" }, { "authors": "Diederik Kingma; Tim Salimans; Ben Poole; Jonathan Ho", "journal": "NeurIPS", "ref_id": "b23", "title": "Variational diffusion models", "year": "2021" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "ICLR", "ref_id": "b24", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "Anat Levin; Dani Lischinski; Yair Weiss", "journal": "IEEE TPAMI", "ref_id": "b25", "title": "A closed-form solution to natural image matting", "year": "2007" }, { "authors": "A Levin; A Rav-Acha; D Lischinski", "journal": "IEEE TPAMI", "ref_id": "b26", "title": "Spectral matting", "year": "2008" }, { "authors": "Jizhizi Li; Sihan Ma; Jing Zhang; Dacheng Tao", "journal": "", "ref_id": "b27", "title": "Privacypreserving portrait matting", "year": "2021" }, { "authors": "Jizhizi Li; Jing Zhang; Stephen J Maybank; Dacheng Tao", "journal": "", "ref_id": "b28", "title": "End-to-end animal image matting", "year": "2020" }, { "authors": "Yaoyi Li; Hongtao Lu", "journal": "", "ref_id": "b29", 
"title": "Natural image matting via guided contextual attention", "year": "2020" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "", "ref_id": "b30", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Chang Liu; Henghui Ding; Xudong Jiang", "journal": "", "ref_id": "b31", "title": "Towards enhancing fine-grained details for image matting", "year": "2021" }, { "authors": "Yuhao Liu; Jiake Xie; Xiao Shi; Yu Qiao; Yujie Huang; Yong Tang; Xin Yang", "journal": "", "ref_id": "b32", "title": "Tripartite information mining and integration for image matting", "year": "2021" }, { "authors": "Hao Lu; Yutong Dai; Chunhua Shen; Songcen Xu", "journal": "", "ref_id": "b33", "title": "Indices matter: Learning to index for deep image matting", "year": "2019" }, { "authors": "Sebastian Lutz; Konstantinos Amplianitis; Aljosa Smolic", "journal": "BMVC", "ref_id": "b34", "title": "Alphagan: Generative adversarial networks for natural image matting", "year": "2018" }, { "authors": "Chenlin Meng; Yutong He; Yang Song; Jiaming Song; Jiajun Wu; Jun-Yan Zhu; Stefano Ermon", "journal": "ICLR", "ref_id": "b35", "title": "Sdedit: Guided image synthesis and editing with stochastic differential equations", "year": "2021" }, { "authors": "Ron Mokady; Amir Hertz; Kfir Aberman; Yael Pritch; Daniel Cohen-Or", "journal": "", "ref_id": "b36", "title": "Null-text inversion for editing real images using guided diffusion models", "year": "2023" }, { "authors": "Gyutae Park; Sungjoon Son; Jaeyoung Yoo; Seho Kim; Nojun Kwak", "journal": "", "ref_id": "b37", "title": "Matteformer: Transformer-based image matting via prior-tokens", "year": "2022" }, { "authors": "Chenyang Qi; Xiaodong Cun; Yong Zhang; Chenyang Lei; Xintao Wang; Ying Shan; Qifeng Chen", "journal": "", "ref_id": "b38", "title": "Fatezero: Fusing attentions for zero-shot text-based video editing", "year": "2023" }, { "authors": "Yu Qiao; Yuhao Liu; Xin Yang; Dongsheng Zhou; Mingliang Xu; Qiang Zhang; Xiaopeng Wei", "journal": "", "ref_id": "b39", "title": "Attention-guided hierarchical structure aggregation for image matting", "year": "2020" }, { "authors": "Christoph Rhemann; Carsten Rother; Jue Wang; Margrit Gelautz; Pushmeet Kohli; Pamela Rott", "journal": "", "ref_id": "b40", "title": "A perceptually motivated online benchmark for image matting", "year": "2009" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b41", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "", "ref_id": "b42", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Soumyadip Sengupta; Vivek Jayaram; Brian Curless; Steven M Seitz; Ira Kemelmacher-Shlizerman", "journal": "", "ref_id": "b43", "title": "Background matting: The world is your green screen", "year": "2020" }, { "authors": "Ehsan Shahrian; Deepu Rajan; Brian Price; Scott Cohen", "journal": "", "ref_id": "b44", "title": "Improving image matting using comprehensive sampling sets", "year": "2013" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "ICLR", "ref_id": "b45", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "Yanan Sun; Chi-Keung Tang; Yu-Wing Tai", "journal": "", "ref_id": "b46", "title": "Semantic 
image matting", "year": "2021" }, { "authors": "Jingwei Tang; Yagız Aksoy; Cengiz Öztireli; Markus Gross; Tunc ¸ozan Aydın", "journal": "", "ref_id": "b47", "title": "Learning-based sampling for natural image matting", "year": "2019" }, { "authors": "Jue Wang; Michael F Cohen", "journal": "", "ref_id": "b48", "title": "Optimized color sampling for robust matting", "year": "2007" }, { "authors": "Tiantian Wang; Sifei Liu; Yapeng Tian; Kai Li; Ming-Hsuan Yang", "journal": "", "ref_id": "b49", "title": "Video matting via consistency-regularized graph neural networks", "year": "2021" }, { "authors": "Bo Xu; Jiake Xie; Han Huang; Ziwen Li; Cheng Lu; Yong Tang; Yandong Guo", "journal": "", "ref_id": "b50", "title": "Situational perception guided image matting", "year": "2022" }, { "authors": "Ning Xu; Brian Price; Scott Cohen; Thomas Huang", "journal": "", "ref_id": "b51", "title": "Deep image matting", "year": "2009" }, { "authors": "Shuai Yang; Yifan Zhou; Ziwei Liu; Chen Change Loy", "journal": "", "ref_id": "b52", "title": "Rerender a video: Zero-shot text-guided video-to-video translation", "year": "2023" }, { "authors": "Xin Yang; Ke Xu; Shaozhe Chen; Shengfeng He; Rynson Baocai Yin Yin; Lau", "journal": "NeurIPS", "ref_id": "b53", "title": "Active matting", "year": "2018" }, { "authors": "Haichao Yu; Ning Xu; Zilong Huang; Yuqian Zhou; Humphrey Shi", "journal": "", "ref_id": "b54", "title": "High-resolution deep image matting", "year": "2021" }, { "authors": "Qihang Yu; Jianming Zhang; He Zhang; Yilin Wang; Zhe Lin; Ning Xu; Yutong Bai; Alan Yuille", "journal": "", "ref_id": "b55", "title": "Mask guided matting via progressive refinement network", "year": "2021" }, { "authors": "Lvmin Zhang; Anyi Rao; Maneesh Agrawala", "journal": "", "ref_id": "b56", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Yunke Zhang; Lixue Gong; Lubin Fan; Peiran Ren; Qixing Huang; Hujun Bao; Weiwei Xu", "journal": "", "ref_id": "b57", "title": "A late fusion cnn for digital matting", "year": "2019" }, { "authors": "Wenliang Zhao; Yongming Rao; Zuyan Liu; Benlin Liu; Jie Zhou; Jiwen Lu", "journal": "", "ref_id": "b58", "title": "Unleashing text-to-image diffusion models for visual perception", "year": "2023" }, { "authors": "Yuanjie Zheng; Chandra Kambhamettu", "journal": "", "ref_id": "b59", "title": "Learning based digital matting", "year": "2009" } ]
[ { "formula_coordinates": [ 1, 89.74, 673.6, 196.62, 9.65 ], "formula_id": "formula_0", "formula_text": "I i = α i F i + (1 -α i )B i , α i ∈ [0, 1],(1)" }, { "formula_coordinates": [ 3, 89.03, 459.19, 158.41, 9.68 ], "formula_id": "formula_1", "formula_text": "q(x t |x t-1 ) = N 1 -β t x t-1 , β t I ," }, { "formula_coordinates": [ 3, 87.63, 501.89, 161.22, 17.63 ], "formula_id": "formula_2", "formula_text": "x t = √ α t x 0 + √ 1 -α t ϵ, ϵ ∼ N (0, I)," }, { "formula_coordinates": [ 3, 76.9, 525.91, 62.06, 11 ], "formula_id": "formula_3", "formula_text": "α t := 1 -β t 1 ." }, { "formula_coordinates": [ 3, 117.85, 554.55, 100.77, 12.69 ], "formula_id": "formula_4", "formula_text": "L DM = ∥ϵ -ϵ θ (x t , t)∥ 2 2 ." }, { "formula_coordinates": [ 3, 62.88, 624.03, 210.72, 23.61 ], "formula_id": "formula_5", "formula_text": "x t-1 = 1 √ 1 -β t x t - β t √ 1 -α t ϵ θ (x t , t) + σ t z," }, { "formula_coordinates": [ 3, 341.68, 448.25, 203.43, 16.63 ], "formula_id": "formula_6", "formula_text": "m T = √ α T m + (1 -α T )ϵ, ϵ ∼ N (0, I),(2)" }, { "formula_coordinates": [ 3, 323.18, 465.96, 186.59, 30.72 ], "formula_id": "formula_7", "formula_text": "m T -1 = √ α T -1 ( m T - √ 1 -α t ϵ θ (m T ; T ) √ α T )" }, { "formula_coordinates": [ 3, 370.54, 480.2, 174.58, 63.04 ], "formula_id": "formula_8", "formula_text": "+ 1 -α T -1 • ϵ θ (m T ; T ) \"direction pointing to m T \" ,(3)" }, { "formula_coordinates": [ 4, 73.91, 65.95, 417.8, 153.53 ], "formula_id": "formula_9", "formula_text": "× m m T m T-1 m T-1 ϵ θ C θ P m T-1 inv ϵ C × + … × m T-1 m t m t corr ϵ θ C θ C × + DDIM Inversion m t inv L Inv α T-1 L Inv E θ DDIM Inversion" }, { "formula_coordinates": [ 4, 108.89, 573.33, 118.69, 31.41 ], "formula_id": "formula_10", "formula_text": "f I = E θ (I), m corr t = C θ (cat(f I ; m t ); t)," }, { "formula_coordinates": [ 4, 321.19, 377.9, 211.6, 12.69 ], "formula_id": "formula_11", "formula_text": "m ARP t = m corr t × (1 -m known ) + m inv t × m known ." }, { "formula_coordinates": [ 4, 377.64, 543.62, 167.47, 12.69 ], "formula_id": "formula_12", "formula_text": "L Inv = ∥α t -m ARP t ∥ 2 2 .(4)" }, { "formula_coordinates": [ 4, 390.74, 607.01, 154.37, 10.62 ], "formula_id": "formula_13", "formula_text": "L α = ∥α -α0 t ∥ 1 .(5)" }, { "formula_coordinates": [ 4, 376.93, 701.45, 168.19, 13.36 ], "formula_id": "formula_14", "formula_text": "L Comp = ∥I -I comp t ∥ 1 .(6)" }, { "formula_coordinates": [ 5, 90.29, 327.24, 196.07, 9.65 ], "formula_id": "formula_15", "formula_text": "L Final = λ 1 L Inv + λ 2 L α + λ 3 L Comp ,(7)" } ]
2023-11-22
[ { "figure_ref": [ "fig_1" ], "heading": "INTRODUCTION", "publication_ref": [ "b25", "b39", "b81", "b86", "b87", "b7", "b26", "b34", "b109", "b115", "b70", "b108", "b71", "b88", "b8", "b39", "b10", "b8", "b88", "b26", "b87", "b89", "b92", "b92", "b5", "b21", "b76", "b60", "b112", "b31", "b93", "b96", "b81", "b75", "b82", "b2", "b32", "b81", "b98", "b103", "b9", "b54", "b0", "b79", "b97", "b119", "b80" ], "table_ref": [ "tab_1" ], "text": "High-quality 2D and 3D content creation is of great importance for virtual reality, animated movies, gaming and robotics simulation. In the past years, deep generative models have demonstrated immense potential, enabling photorealistic image synthesis at high resolution (Goodfellow et al., 2014;Karras et al., 2020;2021;Rombach et al., 2021;Dhariwal & Nichol, 2021b;Sauer et al., 2022). Recently, 3D-aware generative models advanced image synthesis to view-consistent, 3D-aware image generation (Schwarz et al., 2020;Chan et al., 2021;Gu et al., 2022;Jo et al., 2021;Xu et al., 2022;Zhou et al., 2021b;Zhang et al., 2021;Or-El et al., 2022;Xu et al., 2021;Pan et al., 2021;Deng et al., 2022b;Xiang et al., 2023a;Schwarz et al., 2022;Chan et al., 2022). They generate images with explicit control over the camera viewpoint. Importantly, these approaches do not require 3D training data or multiview supervision, which is costly or impossible to obtain for large-scale real-world data.\nWhile existing 3D-aware generative models achieve high photorealism and 3D-consistent viewpoint control, the vast majority of approaches only consider single-class and aligned data like human faces (Karras et al., 2020) or cat faces (Choi et al., 2020). The reason for this is that existing methods assume a shared canonical coordinate system to represent 3D objects. As a consequence, they require either poses from an off-the-shelf pose estimator (Chan et al., 2022;Schwarz et al., 2022;Gu et al., 2022;Xiang et al., 2023a) or assume, and sometimes learn to refine, a given pose distribution (Schwarz et al., 2020;Niemeyer & Geiger, 2021b;Shi et al., 2023;Skorokhodov et al., 2023;2022). In contrast, in-the-wild images typically have no clearly defined canonical camera system and camera poses or pose distributions are not available or very challenging to obtain.\nWe propose to instead model instances in view space: Our coordinate system is viewer-centric, i.e., it parameterizes the space as seen from the camera's point of view. This removes the need for camera samples and geometry from our second-stage latent diffusion model and 3DGP (Skorokhodov et al., 2023) for the ImageNet classes \"macaw\" (top), \"king penguin\" (middle), \"kimono\" (bottom). Included videos for more results.\nposes and a priori camera pose distributions, unlocking 3D-aware image synthesis on unaligned, diverse datasets. We identify crucial challenges for training in view space: For complex datasets, without a shared canonical representation, existing techniques are prone to generating poor 3D representations and the GAN-based methods struggle with distribution coverage (see Fig. 2, Table 2).\nTo prevent generating flat 3D representations, we leverage cues from monocular depth prediction. While monocular depth estimators are typically trained with multi-view data, we leverage an off-the-shelf pretrained model, such that our approach does not require any direct multi-view supervision for training. 
Recent works (Bhat et al., 2023;Eftekhar et al., 2021;Ranftl et al., 2020;Miangoleh et al., 2021) demonstrate high prediction quality and generalization ability to in-the-wild data and have been successfully applied to improve 3D-reconstruction (Yu et al., 2022). To ensure distribution coverage on more diverse datasets, we build our approach upon denoising diffusion-based generative models (DDMs) (Ho et al., 2020;Sohl-Dickstein et al., 2015;Song et al., 2020). DDMs have shown state-of-the-art 2D image synthesis quality and offer a scalable and robust training objective (Nichol & Dhariwal, 2021;Nichol et al., 2022b;Rombach et al., 2021;Dhariwal & Nichol, 2021a;Ramesh et al., 2022;Saharia et al., 2022;Balaji et al., 2022;Ho et al., 2022). More specifically, we develop a 3D-aware generative model based on latent diffusion models (LDMs) (Rombach et al., 2021;Vahdat et al., 2021). By training an autoencoder first and then modeling encoded data in a compressed latent space, LDMs achieve an excellent trade-off between computational efficiency and quality. Further, their structured latent space can be learnt to capture a 3D representation of the modeled inputs, as we show in this work.\nOur 3D-aware LDM, called WildFusion, follows LDMs' two-stage approach: First, we train a powerful 3D-aware autoencoder from large collections of unposed images without multiview supervision that simultaneously performs both compression and enables novel-view synthesis. The autoencoder is trained with pixel-space reconstruction losses on the input views and uses adversarial training to supervise novel views. Note that by using adversarial supervision for the novel views, our autoencoder is trained for novel-view synthesis without the need for multiview supervision, in contrast to previous work (Watson et al., 2022;Chan et al., 2023;Liu et al., 2023). Adding monocular depth cues helps the model learn a faithful 3D representation and further improves novel-view synthesis. In the second stage, we train a diffusion model in the compressed and 3D-aware latent space, which enables us to synthesize novel samples and turns the novel-view synthesis system, i.e., our autoencoder, into a 3D-aware generative model. We validate WildFusion on multiple image generation benchmarks, including ImageNet, and find that it outperforms recent state-of-the-art 3D-aware GANs. Moreover, we show that our autoencoder is able to directly synthesize high-quality novel views for a given single image and performs superior compared to recent GAN-based methods, which usually require an inversion process to embed a given image into their latent space (Abdal et al., 2019;Richardson et al., 2021;Tov et al., 2021;Zhu et al., 2020;Roich et al., 2023). Further, in contrast to inversion methods, our autoencoder is trained in a single stage and does not require a pretrained 3D-aware GAN as well as elaborate and often slow techniques for latent optimization.\nMain contributions: (i) We remove the need for posed images and a priori camera pose distributions for 3D-aware image synthesis by modeling instances in view space instead of canonical space. (ii) We learn a powerful 3D-aware autoencoder from unposed images without multiview supervision that simultaneously performs compression, while inferring a 3D representation suitable for novel-view synthesis. 
(iii) We show that our novel 3D-aware LDM, WildFusion, enables high-quality 3D-aware image synthesis with reasonable geometry and strong distribution coverage, achieving state-of-the-art performance in the unposed image training setting, which corresponds to training on in-the-wild image data. Moreover, we can more efficiently perform novel view synthesis for given images than common GAN-based methods and explore promising 3d-aware image manipulation techniques. We hope that WildFusion paves the way towards scalable and robust in-the-wild 3D-aware image synthesis." }, { "figure_ref": [ "fig_1" ], "heading": "BACKGROUND", "publication_ref": [ "b93", "b31", "b96", "b33", "b100", "b95", "b30", "b113", "b4", "b90", "b63", "b81", "b98", "b61", "b87", "b8", "b73", "b35", "b61", "b8", "b25", "b89", "b92", "b8", "b57", "b105", "b85" ], "table_ref": [], "text": "We briefly provide the theoretical fundamentals and the most relevant related works in this section, and present a comprehensive discussion of the related literature, including concurrent works, in App. A.\nDiffusion Models (Sohl-Dickstein et al., 2015;Ho et al., 2020;Song et al., 2020) create diffused inputs x τ =α τ x+σ τ ϵ, ϵ∼N (0, I) from data x∼p data , where α τ and σ τ define a noise schedule, parameterized by a diffusion-time τ . A denoiser model F ω with parameters ω is trained to denoise the perturbed data via denoising score matching (Hyvärinen, 2005;Vincent, 2011;Song & Ermon, 2019),\narg min ω E x∼pdata,τ ∼pτ ,ϵ∼N (0,I) ∥v -F ω (x τ , τ )∥ 2 2 ,(1)\nwith the target v = α τ ϵ -σ τ x (this is known as v-prediction (Salimans & Ho, 2022)). Further, p τ is a uniform distribution over the diffusion time τ , such that the model is trained to denoise for all different times τ . The noise schedule is designed such that input data is entirely perturbed into Gaussian random noise after the maximum diffusion time. An iterative generative denoising process that employs the learned denoiser F ω can then be initialized from such Gaussian noise to synthesize novel data. Classifier-free guidance can be used to amplify conditioning strength when conditioning the diffusion model on data such as classes; see Ho & Salimans (2021) and App. B.2.\nDiffusion models have also been applied to 3D data (Zeng et al., 2022;Wang et al., 2022b;Bautista et al., 2022;Shue et al., 2022;Nam et al., 2022) but usually require explicit 3D or multiview supervision. In contrast,WildFusion learns from an unstructured image set without multiview supervision.\nLatent Diffusion Models (LDMs) (Rombach et al., 2021;Vahdat et al., 2021) first train a regularized autoencoder with encoder E and decoder D to transform input images I ∼ p data into a spatially lowerdimensional latent space Z of reduced complexity, from which the original data can be reconstructed, this is, Î = D(E(I)) ≈ I. A diffusion model is then trained in the compressed latent space, with x in Eq. ( 1) replaced by an image's latent representation Z = E(I). This latent space diffusion model can be typically smaller in terms of parameter count and memory consumption compared to corresponding pixel-space diffusion models of similar performance. More diffusion model details in Appendix.\n3D-Representations for 3D-Aware Image Synthesis. 3D-aware generative models typically generate neural radiance fields or feature fields, i.e., they represent a scene by generating a color or a feature value f and a density σ for each 3D point p ∈ R 3 (Mildenhall et al., 2020;Schwarz et al., 2020;Niemeyer & Geiger, 2021a). 
Features and densities can be efficiently computed from a triplane representation [T xy , T xz , T yz ] ( Chan et al., 2022;Peng et al., 2020). The triplane feature t is obtained by projecting p onto each of the three feature planes and averaging their feature vectors (t xy , t xz , t yz ).\nAn MLP then converts the triplane feature t to a feature and density value [f , σ] = M LP (t).\nGiven a camera pose, the feature field is rendered via volume rendering (Kajiya & Herzen, 1984;Mildenhall et al., 2020). For that, the feature field is evaluated at discrete points p i r along each camera ray r yielding features and densities {(f i r , σ i r )} N i=1 . For each ray r, these features are aggregated to a feature f r using alpha composition\nf r = N i=1 w i r f i r , w i r = T i r α i r , T i r = i-1 j=1 1 -α j r , α i r = 1 -exp -σ i r δ i r ,(2)\nwhere T i r and α i r denote the transmittance and alpha value of sample point p i r along ray r and δ i r = p i+1 r -p i r 2 is the distance between neighboring sample points. Similarly, depth can be rendered, see Appendix. For efficiency, a low-resolution feature map, and optionally a low-resolution image Îlow , can be rendered instead of an image at full resolution (Niemeyer & Geiger, 2021a;Chan et al., 2022). The feature map is then subsequently upsampled and decoded into a higher-resolution image Î.\nMost works on 3D-aware image synthesis rely on GANs (Goodfellow et al., 2014) and focus on aligned datasets with well-defined pose distributions. For instance, POF3D (Shi et al., 2023) infers camera poses and works in a canonical view space; it has been used only for datasets with simple pose distributions, such as cat and human faces. To enable training on more complex datasets, 3DGP (Skorokhodov et al., 2023) proposes an elaborate camera model and learns to refine an initial prior on the pose distribution. Specifically, 3DGP predicts the camera location in a canonical coordinate system per class and sample-specific camera rotation and intrinsics. This assumes that samples within a class share a canonical system, and we observe that learning this complex distribution can aggravate training instability. Further, the approach needs to be trained on heavily filtered training data. In contrast, WildFusion can generate high-quality and diverse samples even when trained on the entire ImageNet dataset without any filtering (see Sec. 4.2). Moreover, we use EG3D's triplanes and their dual discriminator to improve view consistency (Chan et al., 2022). Note that in contrast to POF3D, 3DGP, EG3D, and the vast majority of 3D-aware image generative models, WildFusion is not a GAN. GANs are notoriously hard to train (Mescheder et al., 2018) and often do not cover the data distribution well (see mode collapse in 3DGP, Fig. 2). Instead, we explore 3D-aware image synthesis with latent diffusion models for the first time. Concurrently with us, IVID (Xiang et al., 2023b) trained a 2D diffusion model that first synthesizes an initial image and subsequently generates novel views conditioned on it. However, the iterative generation is extremely slow because it requires running the full reverse diffusion process for every novel view. Further, an explicit 3D representation can only be constructed indirectly from a large collection of generated multi-view images, afterwards. 
Instead, WildFusion uses a fundamentally different approach and only runs the reverse diffusion process once to generate a (latent) 3D representation from which multi-view images can be rendered directly and geometry can be extracted easily. Another concurrent work, VQ3D (Sargent et al., 2023), also proposes an autoencoder architecture, but uses sequence-like latent variables and trains an autoregressive transformer in the latent space. Instead, WildFusion trains a diffusion model on latent feature maps. Another difference is that VQ3D applies two discriminators on the generated images, one that distinguishes between reconstruction and training image, and another one that discriminates between reconstruction and novel view. WildFusion only applies a single discriminator to supervise the novel views and instead has an additional discriminator on the depth. At time of submission neither work had code for experimental comparisons available." }, { "figure_ref": [], "heading": "WILDFUSION", "publication_ref": [ "b81", "b98", "b81", "b23", "b53" ], "table_ref": [], "text": "Our goal is to design a 3D-aware image synthesis framework that can be trained using unposed in-the-wild images. We base our framework, WildFusion, on LDMs (Rombach et al., 2021;Vahdat et al., 2021) for several reasons: (i) Compared to diffusion in the original data space, they offer excellent computational efficiency due to their compressed latent space. (ii) Diffusion models use a robust objective that offers sample diversity and does not suffer from problems that plague GANs such as mode collapse. (iii) Most importantly, one can construct a latent space that can be trained to not only perform compression but also to learn a powerful 3D representation for novel view synthesis, as we demonstrate in this work. Fig. 3 shows an overview over WildFusion. It consists of two training stages (Rombach et al., 2021;Esser et al., 2021). In the first stage, our new autoencoder Figure 3: WildFusion Overview: In the first stage, we train an autoencoder for both compression and novel-view synthesis. A Feature Pyramid Network (FPN) (Lin et al., 2017) encodes a given unposed image I into an 3D-aware latent representation Z, constructed as a 2D feature grid. A combination of transformer blocks and a CNN then decode Z into a triplane representation, which is rendered from both the input view P0 and a novel view Pnv. As we model instances in view space, P0 is a fixed, pre-defined camera pose. The input view is supervised with reconstruction losses. Adversarial training provides supervision for novel views. In the second stage, a latent diffusion model is trained on the learned latent space to obtain a 3D-aware generative model. learns a compressed and abstracted latent space suitable for reconstruction and novel view synthesis from single view training data. In the second stage, a latent diffusion model is trained on the latent representation from the first stage autoencoder to obtain a full generative model." }, { "figure_ref": [], "heading": "AUTOENCODER FOR COMPRESSION AND NOVEL-VIEW SYNTHESIS", "publication_ref": [ "b81", "b5", "b53", "b81", "b8", "b91", "b52", "b9", "b106", "b35", "b61", "b3", "b59", "b6", "b81", "b116", "b22", "b76", "b59", "b6", "b8", "b57" ], "table_ref": [ "tab_4", "tab_4", "tab_4", "tab_4" ], "text": "Following the LDM framework (Rombach et al., 2021), we first train an autoencoder that encodes training data into latent representations. 
Unlike LDMs, however, where the task of the autoencoder is simply to compress and reconstruct the inputs, our setting is more complex, as the autoencoding model must also learn a 3D representation of the data such that it can infer reasonable novel views from a single input image. The capability for novel-view synthesis will be used later by the diffusion model to perform 3D-aware image synthesis with 3D-consistent viewpoint changes. However, as no multiview or explicit 3D geometry supervision is available, this novel-view synthesis task is highly under-constrained and non-trivial to solve. To aid this process, we provide additional cues about the geometry in the form of monocular depth supervision from a pre-trained network (Bhat et al., 2023).\nSpecifically, we concatenate a given image I ∈ R 3×H×W with its estimated monocular depth D ∈ R 1×H×W channel-wise and encode them into a compressed latent representation Z ∈ R c×h×w via an encoder. As the encoder must infer a latent representation that encodes the underlying 3D object or scene of the input image, we found it beneficial to provide both I and D as input (see Table 4). We choose a Feature Pyramid Network (FPN) (Lin et al., 2017) architecture due to its large receptive field. For LDMs, latent space compression is crucial to train a diffusion model efficiently in the second stage. At input resolution 256 × 256 pixels, we use c=4, h=w=32 as in Rombach et al. (2021).\nThe decoder predicts a feature field from the compressed latent code Z, which can be rendered from arbitrary viewing directions. The feature field is represented with triplanes. In contrast to previous works (Chan et al., 2022;Skorokhodov et al., 2022), our triplane representation is constructed from the latent feature map Z instead of being generated from random noise such that it is possible to reconstruct the input image. Taking inspiration from (Lin et al., 2023;Chan et al., 2023), we process Z with a combination of transformer blocks to facilitate learning global features and a CNN to increase the resolution of the features. For the transformer blocks after the CNN we use efficient self-attention (Xie et al., 2021) to keep the computational cost manageable for the larger resolution feature maps. We find that this combination of transformer blocks and convolutional layers achieves better novel view synthesis than using a fully convolutional architecture (see Table 4).\nNext, the feature field is projected to the input view P 0 and a novel view P nv via volume rendering (Kajiya & Herzen, 1984;Mildenhall et al., 2020), as described in Sec. 2. For the input view, we use the same fixed pose P 0 for all instances. This means that we are modeling instances in view space where the coordinate system is defined from the input camera's point of view. Therefore, novel views can be sampled uniformly from a predefined range of angles around P 0 . In this work, we assume fixed camera intrinsics that we choose according to our camera settings. We find that using the same intrinsics for all datasets works well in practice (for details, see Appendix). To model unbounded scenes, we sample points along rays linearly in disparity (inverse depth) instead of depth. This effectively samples more points close to the camera and uses fewer samples at large depths. Recall that these points are projected onto triplanes for inferring the triplane features. To ensure that the full depth range is mapped onto the triplanes, we use a contraction function as in (Barron et al., 2022). 
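To make this unbounded-scene handling concrete, below is a minimal NumPy sketch of sampling ray points linearly in disparity and mapping them to a bounded domain with a sphere contraction in the spirit of Barron et al. (2022); the near/far bounds, the number of samples, and the contraction radius are assumptions rather than the paper's exact settings.

```python
import numpy as np

# Minimal sketch (assumed settings, not the exact implementation): sample ray
# points linearly in disparity, then contract them to a bounded domain before
# projecting onto the triplanes.

def sample_linear_in_disparity(near: float, far: float, n: int) -> np.ndarray:
    """Depths whose inverses (disparities) are linearly spaced: dense near the camera."""
    disparities = np.linspace(1.0 / near, 1.0 / far, n)
    return 1.0 / disparities

def contract(x: np.ndarray) -> np.ndarray:
    """Sphere contraction as in Barron et al. (2022): identity inside the unit ball,
    points outside are squashed into a shell of radius 2."""
    norm = np.linalg.norm(x, axis=-1, keepdims=True)
    safe_norm = np.maximum(norm, 1e-9)
    contracted = (2.0 - 1.0 / safe_norm) * (x / safe_norm)
    return np.where(norm <= 1.0, x, contracted)

# Points along one camera ray with origin o and unit direction d.
o = np.array([0.0, 0.0, 0.0])
d = np.array([0.0, 0.0, 1.0])
t = sample_linear_in_disparity(near=0.5, far=100.0, n=48)  # assumed near/far planes
points = o + t[:, None] * d                                # (48, 3), unbounded depths
points_bounded = contract(points)                          # all coordinates within radius 2
```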
The contraction function maps all coordinates to a bounded range, which ensures that sampling points are mapped to valid coordinates on the triplanes (Supp. Mat. for details). We find that representing unbounded scenes with a combination of disparity sampling and a contraction function improves novel view synthesis, see Table 4. We render low-resolution images Îlow , Îlow nv , depth maps Dlow , Dlow nv and feature maps F low , F low nv from the feature field using volume rendering at 64 × 64 resolution, see Eq. ( 2). The rendered low-resolution feature maps are then processed with a superresolution CNN module (SR CNN) that increases the spatial dimensions by 4× to yield the reconstructed image Î and a novel view image Înv (see Fig. 3).\nTraining Objective. We train the autoencoder with a reconstruction loss on the input view and use an adversarial objective to supervise novel views (Mi et al., 2022;Cai et al., 2022). Similar to Rombach et al. (2021), we add a small Kullback-Leibler (KL) divergence regularization term L KL on the latent space Z. The reconstruction loss L rec consists of a pixel-wise loss L px = | Î -I|, a perceptual loss L V GG (Zhang et al., 2018), and depth losses L depth .\nAs our monocular depth estimation D is defined only up to scale, we first compute a scale s and shift t for each image by solving a least-squares criterion for s and t, which has a closed-form solution (Eigen et al., 2014). Following Ranftl et al. (2020), we enforce consistency between rendered (2D) depth Dlow and the downsampled monocular depth D low that was estimated on the input images:\nL 2D depth = ||(s Dlow +t)-D low || 2 .\nWe further found it beneficial to directly supervise the (normalized) rendering weights w i r of the 3D sampling points (see Eq. ( 2)) with the depth. Let K r (sD low + t) denote the index set of the k sampling points closest to the rescaled monocular depth along ray r. Then,\nL 3D depth = r (1 - i∈Kr w i r ) 2 + ( i / ∈Kr w i r ) 2 .(3)\nIntuitively, the loss encourages large rendering weights for the points close to the re-scaled monocular depth and small rendering weights for points farther away. Note that we regularize the sum of the weights in the neighborhood K r instead of the individual weights to account for imperfections in the monocular depth. In our experiments, we use a neighborhood size of k = 5.\nIn addition to the reconstruction losses on the input view, we supervise the novel views of the input image. As per-pixel supervision is not available for novel views in our setting, we follow (Mi et al., 2022;Cai et al., 2022) and use adversarial training to supervise novel views. We use a dual discriminator (Chan et al., 2022), i.e., we upsample Îlow nv and concatenate it with Înv as input to the discriminator as a fake pair (see Fig. 3). Similarly, I is first downsampled to simulate a lower resolution image, and then is upsampled back to the original resolution and concatenated with the original I to be used as the real pair to the discriminator. Let E θ , G ψ and D ϕ denote encoder, decoder and discriminator with parameters θ, ψ and ϕ, respectively. 
For brevity, we omit the upsampling and concatenation of the discriminator inputs in the adversarial objective\nV (I, P nv , λ; θ, ψ, ϕ) = f (-D ϕ (G ψ (E θ (I, D), P nv ))) + f (D ϕ (I)) -λ∥∇D ϕ (I)∥ 2 , (4\n)\nwhere f (x) = -log(1 + exp(-x)) and λ controls the R1-regularizer (Mescheder et al., 2018).\nWe find that an additional discriminator D depth χ on the low-resolution depth maps further improves novel view synthesis. D depth χ helps to ensure the volume-rendered Dlow is realistic (see Table 4). The autoencoder and discriminators are trained with alternating gradient descent steps combining the adversarial objectives with the reconstruction and regularization terms. Implementation details in Appendix.\nIn conclusion, to learn a latent representation suitable not only for reconstruction but also novel view synthesis, we use both reconstruction and adversarial objectives. Note, however, that this is still fundamentally different from regular GAN-like training, which most existing works on 3D-aware image synthesis rely on. Our reconstruction losses on input views prevent mode collapse and ensure stable training. This makes our approach arguably more robust and scalable. Moreover, inputs to the decoder are not sampled from random noise, like in GANs, but correspond to image encodings." }, { "figure_ref": [], "heading": "LATENT DIFFUSION MODEL", "publication_ref": [ "b81", "b81", "b27", "b99", "b45", "b78", "b81", "b94" ], "table_ref": [], "text": "The autoencoder trained as described above learns a latent representation that is (i) compressed, and therefore suitable for efficient training of a latent diffusion model, and simultaneously also (ii) 3D-aware in the sense that it enables the prediction of a triplane representation from which consistent novel views can be synthesized corresponding to different viewing directions onto the modeled scene. Consequently, once the autoencoder is trained, we fit a latent diffusion model on its 3D-aware latent space in the second training stage (see Fig. 3). To this end, we encode our training images into the latent space and train the diffusion model on the encoded data. Importantly, although the autoencoder produces a compact 3D-aware latent representation, it is structured as a spatial 2D latent feature grid, as in standard LDMs for 2D image synthesis. Therefore, we can directly follow the training procedure of regular LDMs (Rombach et al., 2021) when training the diffusion model. Eventually, this allows us to train a powerful 3D-aware generative model that can be trained and sampled efficiently in 2D latent space. Our training objective is the standard denoising score matching objective as given in Eq (1), applied in latent space.\nWe adopt the architecture of regular 2D image LDMs (Rombach et al., 2021) to train our model. The denoiser F ω is implemented as a 2D U-Net with residual blocks (He et al., 2016) and self-attention layers (Vaswani et al., 2017). As discussed, the autoencoder's latent distribution is regularized with a KL divergence loss (Kingma & Welling, 2014;Rezende et al., 2014;Rombach et al., 2021) (see Appendix) to be roughly aligned with the standard normal distribution. However, as we enforce a very low-weighted KL loss, the distribution can have a larger variance. We estimate the standard deviation of the latent space using a batch of encoded training data and use the resulting value to normalize the latent distribution to yield a standard deviation close to 1 before fitting the diffusion model. 
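As a concrete, simplified illustration of this second stage, the sketch below estimates the latent scale factor from a batch of encodings and forms the v-prediction target of Eq. (1) for a diffused latent; the batch size and the schedule coefficients at the sampled diffusion time are placeholder assumptions, and no actual network is included.

```python
import numpy as np

# Simplified sketch of the second-stage setup (no networks, assumed values):
# estimate a global scale for the autoencoder latents, then build a diffused
# latent and its v-prediction target as in Eq. (1).

# Stand-in for a batch of encoder outputs Z = E(I) with c=4, h=w=32.
Z_batch = 1.7 * np.random.randn(64, 4, 32, 32)

# Normalize the latent distribution to roughly unit standard deviation.
scale = 1.0 / Z_batch.std()
x0 = scale * Z_batch[0]

# Noise-schedule coefficients at a sampled diffusion time tau (alpha^2 + sigma^2 = 1).
alpha_tau, sigma_tau = 0.8, 0.6
eps = np.random.randn(*x0.shape)

x_tau = alpha_tau * x0 + sigma_tau * eps      # diffused latent
v_target = alpha_tau * eps - sigma_tau * x0   # v-prediction target of Eq. (1)

# A denoiser F_omega(x_tau, tau) would be trained to regress v_target, e.g.
# loss = ||v_target - F_omega(x_tau, tau)||^2 averaged over the batch.
```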
We use the DDIM sampler (Song et al., 2021) with 200 steps. More implementation details in the Appendix." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [ "b62", "b111", "b12", "b8", "b92", "b89", "b26", "b92", "b114", "b29", "b7", "b92", "b47" ], "table_ref": [], "text": "The performance of LDMs is upper-bounded by the quality of the latent space they are trained on, i.e., we cannot expect to generate better novel images than what the autoencoder achieves in terms of reconstructions. Hence, a powerful autoencoder is key to training a good generative model in the second stage. We first analyze the reconstruction quality of WildFusion's autoencoder as well as its ability to synthesize novel views (Sec. 4.1). Next, we evaluate the full WildFusion model against the state-of-the-art approaches for 3D-aware image synthesis (Sec. 4.2). We provide ablation studies in Sec. 4.3. Our videos included on the project page (https://katjaschwarz.github.io/wildfusion/) show generated 3D-aware samples with camera motions. Further results are also shown in App. D.\nDatasets. While previous 3D-aware generative models mainly focus on aligned datasets like portrait images, we study a general setting in which a canonical camera system cannot be clearly defined. Hence, we use non-aligned datasets with complex geometry: SDIP Dogs, Elephants, Horses (Mokady et al., 2022;Yu et al., 2015) as well as class-conditional ImageNet (Deng et al., 2009). Baselines. We compare against the state-of-the-art generative models for 3D-aware image synthesis, EG3D (Chan et al., 2022), 3DGP (Skorokhodov et al., 2023) and POF3D (Shi et al., 2023) as well as StyleNeRF (Gu et al., 2022). 3DGP and POF3D learn a camera distribution in canonical space and can be trained on unposed images. Since we also aim to compare to other models working in the same setting as WildFusion, i.e., in view space, we adapt EG3D so that it can be trained in view space and without camera poses (indicated as EG3D* below); see Appendix for details. We also train another variant of EG3D* where we add a depth discriminator to incorporate monocular depth information. Note that the regular version of EG3D that relies on object poses is clearly outperformed by 3DGP (as shown in their paper (Skorokhodov et al., 2023)); hence, we do not explicitly compare to it. Evaluation Metrics. For the autoencoder, we measure reconstruction via learned perceptual image patch similarity (LPIPS) (Zhan et al., 2018) and quantify novel view quality with Fréchet Inception Distance (nvFID) (Heusel et al., 2017) on 1000 held-out dataset images. Following prior art, we sample camera poses around the input view P 0 from Gaussian distributions with σ = 0.3 and 0.15 radians for the yaw and pitch angles (Chan et al., 2021). We also report non-flatness-score (NFS) (Skorokhodov et al., 2023). It measures average entropy of the normalized depth maps' histograms and quantifies surface flatness, indicating geometry quality. For the full generative models, we measure NFS and evaluate FID between 10k generated images and the full dataset, sampling camera poses as for nvFID. As FID can be prone to distortions (Kynkäänniemi et al., 2023), we also show FID CLIP , which uses CLIP features. 
To quantify diversity, we report Recall, and we also show " }, { "figure_ref": [ "fig_2", "fig_0", "fig_3" ], "heading": "AUTOENCODER FOR RECONSTRUCTION AND NOVEL-VIEW SYNTHESIS", "publication_ref": [ "b8", "b8", "b80" ], "table_ref": [ "tab_0" ], "text": "As EG3D (Chan et al., 2022) is a GAN-based approach, we need to perform GAN-inversion (Chan et al., 2022;Roich et al., 2023) to reconstruct input images (we use the scripts provided by the authors).\nQuantitative results are in Table 1, qualitative comparisons in Fig. 4 (more in App. D). Compared to EG3D using GAN-inversion, WildFusion's autoencoder achieves superior performance on all metrics and is also more efficient, since we do not require a lengthy optimization process (which occasionally diverges) to embed input images into latent space. Despite its low latent space dimension of 32×32×4, our autoencoder achieves both good compression and novel view synthesis. In Fig. 1, Fig. 5 and in App. D, we also show novel view synthesis results on ImageNet, which performs equally well." }, { "figure_ref": [ "fig_1", "fig_3", "fig_1", "fig_4", "fig_5" ], "heading": "3D-AWARE IMAGE SYNTHESIS WITH LATENT DIFFUSION MODELS", "publication_ref": [ "b8", "b92", "b89", "b47", "b81", "b41", "b44" ], "table_ref": [ "tab_1", "tab_3" ], "text": "SDIP Datasets. We compare WildFusion against state-of-the-art 3D-aware generative models (Table 2) and provide model samples in the App. D. Our videos in the Supp. Mat. show more generated 3D-aware samples, including smooth viewpoint changes. Compared to EG3D* (Chan et al., 2022), which effectively also models instances in view-space, WildFusion achieves higher performance on all metrics, i.e., in terms of image quality (FID/FID CLIP ), 3D geometry (NFS), and diversity (Recall). This validates the effectiveness of our LDM framework in this setting. When also adding a depth discriminator to EG3D*, WildFusion still achieves better FID and NFS and Recall. Note that in particular on NFS and Recall, the performance gap is generally large. We also outperform StyleNeRF. We conclude that previous works that operate in view space can struggle with flat geometry and distribution coverage when training on reasonably complex, non-aligned images.\nIn contrast, WildFusion achieves state-of-the-art performance in this setting.\nThe baselines 3DGP (Skorokhodov et al., 2023) and POF3D (Shi et al., 2023) both rely on sophisticated camera pose estimation procedures and do not operate in view space, which makes a direct comparison with WildFusion difficult. Nevertheless, WildFusion performs on-par or even better on the different metrics, despite not relying on posed images or learned pose or camera distributions at all. These previous works' reliance on complex canonical camera systems represents a major limitation with regards to their scalability, which our 3D-aware LDM framework avoids. Note that in particular on Recall, WildFusion always performs superior compared to all other methods, demonstrating that diffusion-based frameworks are usually better than GAN-based ones in terms of sample diversity. ImageNet. We find that WildFusion outperforms all baselines on NFS, Precision and Recall by large margins, for varying classifier-free guidance scales (Table 3). The extremely low Recall scores of the GAN-based baselines indicate very low sample diversity (mode collapse). We visually validate this for 3DGP, the strongest of the three baselines, in Fig. 
2: 3DGP collapses and produces almost identical samples within classes, showing virtually no diversity. In contrast, our model produces diverse, high-quality samples. Note that this failure of 3DGP is pointed out by the authors (see \"Limitations and failure cases\" at https://snap-research.github.io/3dgp/). The FID metric does not accurately capture that mode collapse. While we outperform EG3D and StyleNeRF also on FID, WildFusion is slightly behind 3DGP. However, it is known that FID is a questionable metric (Kynkäänniemi et al., 2023) and the qualitative results in Fig. 5 and Fig. 2 show that WildFusion generates diverse and high-fidelity images and generally learns a reasonable geometry in this challenging setting. We believe that a mode-collapsed generative model like 3DGP, despite good FID scores, is not useful in practice. Finally, note that due to limited compute resources our ImageNet model was trained only with a total batch size of 256. However, non-3D-aware ImageNet diffusion models are typically trained on batch sizes >1000 to achieve strong performance (Rombach et al., 2021;Karras et al., 2022;Kingma & Gao, 2023). Hence, scaling our model and the compute resources would likely boost the results significantly.\nInterpolation and Generative Resampling. In Fig. 6, we use WildFusion to perform semantically meaningful 3D-aware image interpolation between two given (or generated) images. Moreover, in Fig. 7 we demonstrate how we can use our 3D-aware latent diffusion model to refine images and geometry by only partially diffusing their encodings and regenerating from those intermediate diffusion levels. These two applications highlight the versatility of WildFusion and have potential use cases in 3D-aware image editing. To the best of our knowledge, this is the first time such applications have been demonstrated for such 3D-aware image generative models. See the project page (https://katjaschwarz.github.io/wildfusion/) for animations with viewpoint changes for these 3D-aware image interpolation and generative image resampling results." }, { "figure_ref": [ "fig_10" ], "heading": "ABLATION STUDIES", "publication_ref": [ "b81", "b18" ], "table_ref": [ "tab_4" ], "text": "We provide a detailed ablation study in Table 4, starting from a base model, where the decoder architecture follows EG3D's generator. We gradually introduce changes and track their impact on novel view synthesis (nvFID) and geometry (NFS) (corresponding samples in App. D in Fig. 12). For computational efficiency, we perform the study at image resolution 128 2 and reduce the number of network parameters compared to our main models. Like LDMs (Rombach et al., 2021), the initial setting is trained only with reconstruction losses. Unsurprisingly, this results in planar geometry, indicated by low NFS. This changes when a discriminator on novel views is added. However, geometry becomes noisy and incomplete, indicated by high NFS and worse nvFID. We hypothesize the purely convolutional architecture is suboptimal. Hence, in the next step, we instead use a combination of convolutions and transformer blocks (ViT (Dosovitskiy et al., 2020)) in the decoder, which improves novel view quality and results in less noisy geometry. Adding the monocular depth discriminator D depth χ significantly improves nvFID, and we observe an even bigger improvement when tailoring the model to represent unbounded scenes with disparity sampling and a coordinate contraction function. 
Further supervision on the rendered depth (L 2D depth ) does not improve results but as it does not hurt performance either, we kept it in our pipeline. Lastly, we find that both adding depth as an input to the encoder and directly supervising the rendering weights with L 3D\ndepth result in slight improvements in NFS and qualitatively improve geometry." }, { "figure_ref": [], "heading": "CONCLUSIONS", "publication_ref": [ "b75", "b81", "b82", "b2" ], "table_ref": [], "text": "We introduce WildFusion, a 3D-aware LDM for 3D-aware image synthesis. WildFusion is trained without multiview or 3D geometry supervision and relies neither on posed images nor on learned pose or camera distributions. Key to our framework is an image autoencoder with a 3D-aware latent space that simultaneously enables not only novel view synthesis but also compression. This allows us to efficiently train a diffusion model in the autoencoder's latent space. WildFusion outperforms recent state-of-the-art GAN-based methods when training on diverse data without camera poses. Future work could scale up 3D-aware LDMs to the text-conditional setting, similar to how 2D diffusion models have been applied on extremely diverse datasets (Ramesh et al., 2022;Rombach et al., 2021;Saharia et al., 2022;Balaji et al., 2022)." }, { "figure_ref": [], "heading": "A RELATED WORK", "publication_ref": [ "b93", "b31", "b96", "b81", "b32", "b98", "b75", "b82", "b2" ], "table_ref": [], "text": "Here, we present an extended discussion about related work.\nDiffusion Models. Diffusion models (DMs) (Sohl-Dickstein et al., 2015;Ho et al., 2020;Song et al., 2020) have proven to be powerful image generators, yielding state-of-the art results in unconditional as well as class-and text-guided synthesis (Nichol & Dhariwal, 2021;Rombach et al., 2021;Dhariwal & Nichol, 2021a;Ho et al., 2022;Dockhorn et al., 2022a;b;Vahdat et al., 2021;Nichol et al., 2022b;Ramesh et al., 2022;Saharia et al., 2022;Balaji et al., 2022). However, none of these works tackles 3D-aware image synthesis." }, { "figure_ref": [], "heading": "3D Diffusion Models.", "publication_ref": [ "b56", "b113", "b36", "b4", "b63", "b42", "b20", "b20", "b20", "b1", "b74", "b51", "b58", "b50", "b87", "b7", "b25", "b87", "b7", "b26", "b34", "b109", "b115", "b70", "b108", "b71", "b26", "b61", "b88", "b8", "b64", "b28", "b26", "b38", "b57", "b55", "b38", "b89", "b92", "b77", "b59", "b49", "b6", "b52", "b107", "b110", "b48", "b72", "b103", "b9", "b54" ], "table_ref": [], "text": "There is also much literature on applying diffusion models to 3D data, e.g. 3D point clouds (Zhou et al., 2021a;Luo & Hu, 2021;Zeng et al., 2022) or tetrahedral meshes (Kalischek et al., 2022). Shue et al. ( 2022) learn a diffusion model on a triplane representation parameterizing a neural occupancy field. GAUDI (Bautista et al., 2022) and 3D-LDM (Nam et al., 2022) train diffusion models in latent spaces learnt using an autodecoder framework and generate 3D scenes and 3D shapes, respectively. RODIN (Wang et al., 2022b) proposes a hierarchical latent diffusion model framework to learn 3D human avatars and NF-LDM (Kim et al., 2023) trains a hierarchical diffusion model for outdoor scene generation. Dupont et al. (2022) andDu et al. (2021) treat data as functions and also explore encoding 3D signals into latent spaces, but using more inefficient meta-learning (Dupont et al., 2022) or auto-decoder (Du et al., 2021) methods. Dupont et al. (2022) also trains a diffusion model on the encoded 3D data. 
However, all aforementioned works rely on explicit 3D or multiview supervision. In contrast, our approach learns from an unstructured image collection without multiview supervision.
RenderDiffusion (Anciukevicius et al., 2023) trains a diffusion model directly on images, using a triplanar 3D feature representation inside the denoiser network architecture, thereby enabling 3D-aware image generation during synthesis. However, scaling RenderDiffusion to high-resolution outputs is challenging, as it operates directly on images. In fact, it considers only small image resolutions of 32×32 or 64×64, likely due to computational limitations. When trained on single-view real-world data, the paper only considers data with little pose variation (human and cat faces), and it is unclear whether the approach is scalable to diverse or high-resolution image data (moreover, no perceptual quality evaluations on metrics such as FID are presented and there is no code available). In contrast, our diffusion model is trained efficiently in a low-resolution, compressed and 3D-aware latent space, while simultaneously predicting high-resolution triplanes and enabling high-resolution rendering. Hence, WildFusion generates significantly higher-quality 3D-aware images than RenderDiffusion, and it is scalable to diverse datasets such as ImageNet, as we demonstrate.
Optimization from Text-to-Image Diffusion Models. Another line of work distills 3D objects from large-scale 2D text-to-image diffusion models (Poole et al., 2022;Lin et al., 2022;Nichol et al., 2022a;Metzer et al., 2022;Wang et al., 2022a;Deng et al., 2022a). However, these methods follow an entirely different approach compared to 3D and 3D-aware diffusion models and require a slow optimization process that needs to be run per instance.
3D-Aware Image Synthesis. 3D-aware generative models consider image synthesis with control over the camera viewpoint (Liao et al., 2020;Schwarz et al., 2020;Chan et al., 2021). Most existing works rely on generative adversarial networks (GANs) (Goodfellow et al., 2014) and use coordinate-based MLPs as 3D generators (Schwarz et al., 2020;Chan et al., 2021;Gu et al., 2022;Jo et al., 2021;Xu et al., 2022;Zhou et al., 2021b;Zhang et al., 2021;Or-El et al., 2022;Xu et al., 2021;Pan et al., 2021;Deng et al., 2022b;Xiang et al., 2023a;Gu et al., 2022), building on Neural Radiance Fields (Mildenhall et al., 2020) as the 3D representation. VoxGRAF (Schwarz et al., 2022) and EG3D (Chan et al., 2022) proposed efficient convolutional 3D generators that require only a single forward pass for generating a 3D scene. Our autoencoder uses EG3D's triplane representation and their dual discriminator to improve view consistency. Early 3D-aware generative models that do not require camera poses during training and operate in view space include HoloGAN (Nguyen-Phuoc et al., 2019) and PlatonicGAN (Henzler et al., 2019). They are outperformed, for instance, by the more recent StyleNeRF (Gu et al., 2022), which uses a style-based architecture (Karras et al., 2019;2020) and proposes a novel path regularization loss to achieve 3D consistency. In contrast to the aforementioned approaches, however, our generative model is not a GAN. GANs are notoriously hard to train (Mescheder et al., 2018) and often do not cover the data distribution well.
Instead, we explore 3D-aware image synthesis with latent diffusion models for the first time.
Until recently, 3D-aware image synthesis focused on aligned datasets with well-defined pose distributions, such as portrait images (Liu et al., 2015;Karras et al., 2019). For instance, POF3D (Shi et al., 2023) is a recent 3D-aware GAN that infers camera poses and works in a canonical view space; it has been used only for datasets with simple pose distributions, such as cat and human faces. Advancing to more complex datasets, GINA-3D learns to generate assets from driving data, assuming known camera and LIDAR sensors. The two-stage approach first trains a vision transformer encoder yielding a latent triplane representation. Next, a discrete token generative model is trained in the latent space. We consider a setting where camera information and ground truth depth are not available and train a latent diffusion model on 2D feature maps. To scale 3D-aware image synthesis to more complex datasets, i.e. ImageNet, 3DGP (Skorokhodov et al., 2023) proposes an elaborate camera model and learns to refine an initial prior on the pose distribution. Specifically, 3DGP predicts the camera location in a canonical coordinate system per class and sample-specific camera rotation and intrinsics. This assumes that samples within a class share a canonical system, and we observe that learning this complex distribution can aggravate training instability. Further, the approach needs to be trained on heavily filtered training data. In contrast, WildFusion can generate high-quality and diverse samples even when trained on the entire ImageNet dataset without any filtering (see Sec. 4.2).
Novel View Synthesis. Our autoencoder is related to methods that synthesize novel views given a single input image: LoLNeRF (Rebain et al., 2021) trains an autodecoder with a per-pixel reconstruction loss and mask supervision. In contrast, we add an adversarial objective to supervise novel views. Mi et al. (2022) propose a similar approach, but it is not investigated in the context of generative modeling. Another line of recent works leverages GAN inversion to generate novel views from single input images (Li et al., 2022;Cai et al., 2022;Lin et al., 2023;Xie et al., 2022;Yin et al., 2022;Lan et al., 2022;Pavllo et al., 2022); these methods rely on pretrained 3D-aware GANs and thereby inherit their aforementioned limitations. Several recent works (Watson et al., 2022;Chan et al., 2023;Liu et al., 2023) tackle novel view synthesis with view-conditioned 2D diffusion models but are trained with explicit 3D or multiview supervision. Unlike these approaches, we use our autoencoder to also train a 3D-aware generative model." }, { "figure_ref": [], "heading": "B IMPLEMENTATION DETAILS B.1 CAMERA SYSTEM", "publication_ref": [ "b3" ], "table_ref": [], "text": "In the following, we describe the camera system we use to learn a 3D representation in view space. An overview is shown in Fig. 8. The input view P_0 is defined by the camera intrinsics K_0 and extrinsics [R_0, T_0], where R_0 and T_0 denote the rotation and translation of the camera in the world coordinate system. We fix R_0 and T_0 such that the camera is located at a fixed radius and looks at the origin of the world coordinate system. For the intrinsics K_0, we choose a small, fixed field of view since most images in the datasets are cropped and perspective distortion is generally small. For the experiments on all datasets, we set
R_0 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}, \quad T_0 = \begin{pmatrix} 0 \\ 0 \\ 2.7 \end{pmatrix}, \quad K_0 = \begin{pmatrix} 5.4 & 0 & 0.5 \\ 0 & 5.4 & 0.5 \\ 0 & 0 & 1.0 \end{pmatrix}. \tag{5}
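For concreteness, the fixed camera defined in Eq. (5) can be assembled as in the following NumPy sketch. All function and variable names are our own illustration (not the released code), and the novel-view offset ranges are left as arguments since they are only partially stated in the surrounding text.
import numpy as np

# Fixed input-view camera from Eq. (5): the camera sits on the z-axis at radius 2.7
# and looks at the world origin; K_0 uses normalized image coordinates (focal length
# and principal point in units of the image size).
R0 = np.array([[1.0, 0.0, 0.0],
               [0.0, -1.0, 0.0],
               [0.0, 0.0, -1.0]])
T0 = np.array([0.0, 0.0, 2.7])
K0 = np.array([[5.4, 0.0, 0.5],
               [0.0, 5.4, 0.5],
               [0.0, 0.0, 1.0]])

def pose_matrix(R, T):
    # 4x4 pose from the rotation and translation of the camera in world coordinates.
    P = np.eye(4)
    P[:3, :3], P[:3, 3] = R, T
    return P

def lookat_pose(azimuth, polar, radius=2.7):
    # Place the camera on a sphere around the origin (angles in radians) and
    # orient it towards the origin, with the y-axis flipped as in R_0.
    cam = radius * np.array([np.cos(polar) * np.sin(azimuth),
                             np.sin(polar),
                             np.cos(polar) * np.cos(azimuth)])
    forward = -cam / np.linalg.norm(cam)
    right = np.cross(forward, np.array([0.0, 1.0, 0.0]))
    right /= np.linalg.norm(right)
    down = -np.cross(right, forward)
    R = np.stack([right, down, forward], axis=1)
    return pose_matrix(R, cam)

P0 = pose_matrix(R0, T0)                        # fixed input view
assert np.allclose(lookat_pose(0.0, 0.0), P0)   # zero offsets recover P_0
# Novel views are obtained by sampling azimuth/polar offsets (the exact ranges are
# an assumption here), e.g. lookat_pose(np.deg2rad(20.0), np.deg2rad(5.0)).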
During training, we sample novel views around P_0 by drawing offsets for azimuth and polar angles uniformly from [-35
As we consider unbounded scenes, we sample points along the camera rays linearly in disparity (inverse depth). Thereby, we effectively sample more points close to the camera and use fewer samples at large depths, where fewer details need to be modeled. In practice, we set the near and far planes to t_n = 2.25 and t_f = 5.0, respectively. During rendering, the sampled points are mapped to 3D coordinates in [-1, 1] and subsequently projected to 2D points on the triplanes (see Fig. 8).
Reflecting the disparity sampling, we choose a non-linear mapping function for the 3D coordinates that places sampling points with a large depth closer together. This assigns a larger area on the triplanes to points close to the camera, while points far away are projected onto a smaller area.
Figure 8: Camera System: Our autoencoder models objects in view space, i.e., it uses a fixed camera pose P_0 to render the reconstructed images. The orange line represents a camera ray, and t_n and t_f denote the near and the far plane, respectively, between which the 3D object is located. We evaluate the camera ray at the red sample coordinates, which are spaced linearly in disparity (inverse depth) to better model large depth ranges. However, the triplanes that carry the features for encoding the 3D object are only defined within a normalized range ([-1, 1] in all directions); hence, we need to normalize the samples on the camera ray accordingly to ensure that all coordinates are projected onto the triplanes. Specifically, we use the contraction function in Eq. (6). Sample coordinates within a fixed radius r are mapped linearly to a sphere of radius r_in < 1, i.e., into the domain where the triplanes are defined (linear domain). Sample coordinates with norm > r are contracted such that they have norm ≤ 1 in the domain of the triplanes (contracted domain).
Specifically, we use a contraction function inspired by Barron et al. (2022) that maps points within a sphere of radius r_s to a sphere of radius r_in < 1 and all points outside of that sphere to a sphere of radius 1. Let x ∈ R³ denote a sampled point; then, the contracted coordinate x_c is calculated as
x_c = \begin{cases} \frac{r_{in}}{r}\, x, & \text{if } \|x\| \leq r \\ \left( (1 - r_{in}) \left( 1 - \frac{1}{\|x\| - r + 1} \right) + r_{in} \right) \frac{x}{\|x\|}, & \text{otherwise} \end{cases} \tag{6}
We set r_s = 1.3 and r_in = 0.8 for all experiments.
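As a concrete illustration, the contraction in Eq. (6) takes only a few lines. The following NumPy sketch uses the values above (r = r_s = 1.3, r_in = 0.8) and illustrative names of our own choosing rather than the released code.
import numpy as np

def contract(x, r=1.3, r_in=0.8):
    # Eq. (6): points with norm <= r are scaled linearly into a sphere of radius
    # r_in; points outside are squashed so that their norm stays below 1.
    # x: (..., 3) array of sample coordinates along the camera rays.
    norm = np.linalg.norm(x, axis=-1, keepdims=True)
    safe_norm = np.maximum(norm, 1e-9)
    linear = x * (r_in / r)
    contracted = ((1.0 - r_in) * (1.0 - 1.0 / (safe_norm - r + 1.0)) + r_in) * x / safe_norm
    return np.where(norm <= r, linear, contracted)

# Sanity checks: the mapping is continuous at the boundary and bounded by 1.
assert np.allclose(contract(np.array([0.0, 0.0, 1.3])), [0.0, 0.0, 0.8])
assert np.linalg.norm(contract(np.array([0.0, 0.0, 100.0]))) < 1.0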
" }, { "figure_ref": [], "heading": "B.2 NETWORK ARCHITECTURE, OBJECTIVES, AND TRAINING", "publication_ref": [ "b53", "b45", "b78", "b106", "b8", "b23", "b43", "b59", "b6", "b81", "b116", "b38", "b8", "b81", "b37", "b38", "b57", "b81", "b30", "b81", "b41", "b44" ], "table_ref": [ "tab_3", "tab_6", "tab_6", "tab_7" ], "text": "First Stage: 3D-aware Autoencoder. The encoder network is a feature pyramid network (FPN) (Lin et al., 2017). In practice, we use the setup from variational autoencoders (VAEs) (Kingma & Welling, 2014;Rezende et al., 2014) and predict means µ and variances σ² of a normal distribution from which we sample the latent representation Z:
[\mu, \sigma^2] = \mathrm{FPN}(I), \quad Z \sim \mathcal{N}(\mu, \sigma^2), \tag{7}
where Z ∈ R^{c×h×w} (formally, we assume a diagonal covariance matrix and predict means and variances for all latent dimensions independently). We regularize the latent space by minimizing a low-weighted Kullback-Leibler divergence loss L_KL between q_E(Z|I_P) = N(µ, σ²) and a standard normal distribution N(0, I).
The decoder consists of transformer blocks at the resolution of the feature map Z, followed by a CNN that increases the resolution to 128² pixels. The CNN applies grouped convolutions with three groups to prevent undesired spatial correlation between the triplanes; see Wang et al. (2022b). At resolution 128² pixels, we then add two further transformer blocks with efficient self-attention (Xie et al., 2021) to facilitate learning the correct spatial correlations for the triplanes. The triplanes have a resolution of 128² pixels. We use the superresolution modules and dual discriminator from EG3D (Chan et al., 2022). The main task of the discriminator is to provide supervision for novel views, but it can also be used to improve the details in the reconstructed views (Esser et al., 2021). We hence use 95% novel views and 5% input views when training the discriminator. For the main models in the paper, the encoder has ∼32M parameters. We use 8 transformer blocks in the decoder, accumulating to ∼26M parameters for the full decoder. The discriminator has ∼29M parameters. For computational efficiency, ablation studies (Table 3 of the main paper) are performed with downscaled models and at an image resolution of 128² pixels instead of 256² pixels. The downscaled models have a reduced channel dimension; specifically, the triplane resolution is reduced to 64² pixels and the decoder uses 4 instead of 8 transformer blocks. The resulting models have 1.6M, 2.5M, and 1.8M parameters for the encoder, decoder, and discriminator, respectively.
The autoencoder uses Adam (Kingma & Ba, 2015) with a learning rate of 1.4 × 10⁻⁴. However, for the superresolution modules and dual discriminator, we found it important to use the original learning rates from EG3D, which are 2 × 10⁻³ and 1.9 × 10⁻³, respectively. We train all autoencoders with a batch size of 32 on 8 NVIDIA A100-PCIE-40GB GPUs until the discriminator has seen around 5.5M training images. Training our autoencoder in this setting takes around 2.5 days.
Implementation and Training. Our autoencoder is trained with a reconstruction loss on the input view and an adversarial objective to supervise novel views (Mi et al., 2022;Cai et al., 2022). Similar to Rombach et al. (2021), we add a small Kullback-Leibler (KL) divergence regularization term L_KL on the latent space Z, as discussed above. The reconstruction loss L_rec consists of a pixel-wise loss L_px = |Î − I|, a perceptual loss L_VGG (Zhang et al., 2018), and depth losses L^2D_depth and L^3D_depth. The full training objective is as follows:
L_{rec} = \lambda_{px} L_{px} + \lambda_{VGG} L_{VGG} + \lambda^{2D}_{depth} L^{2D}_{depth} + \lambda^{3D}_{depth} L^{3D}_{depth} \tag{8}
\min_{\theta, \psi} \max_{\phi, \chi} \; \big[ V(I, P_{nv}, \lambda; \theta, \psi, \phi) + V_{depth}(I, P_{nv}, \lambda_d; \theta, \psi, \chi) + L_{rec}(I_P, P; \theta, \psi) + \lambda_{KL} L_{KL}(I_P; \theta) \big] \tag{9}
where the λ_{(·)} weigh the individual loss terms (λ without subscript denotes the R1 regularization coefficient for the regular discriminator and λ_d is the R1 regularization coefficient for the depth discriminator). The values for the weights are summarized in Table 5. Note that V and V_depth denote the adversarial objectives of the regular and depth discriminator, respectively (see Eq. (5) in the main paper).
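To make the weighting concrete, the reconstruction loss of Eq. (8) and the KL regularizer can be assembled as in the following PyTorch-style sketch, using the weights from Table 5. The individual perceptual and depth loss terms are passed in as precomputed scalars, and all names are our own illustration rather than the released code.
import torch

# Loss weights following Table 5: lambda_px, lambda_VGG, lambda_2D, lambda_3D, lambda_KL.
WEIGHTS = dict(px=10.0, vgg=10.0, depth_2d=1.0, depth_3d=1.0, kl=1e-4)

def reconstruction_loss(pred_img, target_img, loss_vgg, loss_depth_2d, loss_depth_3d):
    # Weighted sum of Eq. (8); loss_vgg / loss_depth_2d / loss_depth_3d are scalar
    # tensors produced by the respective loss modules.
    loss_px = (pred_img - target_img).abs().mean()   # pixel-wise L1 term |I_hat - I|
    return (WEIGHTS['px'] * loss_px
            + WEIGHTS['vgg'] * loss_vgg
            + WEIGHTS['depth_2d'] * loss_depth_2d
            + WEIGHTS['depth_3d'] * loss_depth_3d)

def kl_regularizer(mu, logvar):
    # Low-weighted KL divergence between N(mu, sigma^2) and N(0, I) for the latent
    # Z of shape (batch, c, h, w), cf. the latent regularization described above.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=[1, 2, 3]).mean()
    return WEIGHTS['kl'] * kl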
Our code base builds on the official PyTorch implementation of StyleGAN (Karras et al., 2019) available at https://github.com/NVlabs/stylegan3, EG3D (Chan et al., 2022) available at https://github.com/NVlabs/eg3d and LDM (Rombach et al., 2021) available at https://github.com/CompVis/latent-diffusion. Similar to StyleGAN, we use a minibatch standard deviation layer at the end of the discriminator (Karras et al., 2018) and apply an exponential moving average of the autoencoder weights. Unlike Karras et al. (2019), we do not train with path regularization or style mixing. To reduce computational cost and overall memory usage, R1 regularization (Mescheder et al., 2018) is applied only once every 16 minibatches (also see the R1 regularization coefficients λ and λ_d in Table 5).
Second Stage: Latent Diffusion Model. We provide detailed model and training hyperparameter choices in Table 6. We follow the naming convention from LDM (Rombach et al., 2021) and train the models for ∼200 epochs for each dataset. Our LDMs are trained on 4 NVIDIA A100-PCIE-40GB GPUs for 8 hours on SDIP Elephants and for 1 day on SDIP Horses and Dogs. On ImageNet, we train a class-conditional model for ∼5 days, doubling the batch size from 128 to 256. Otherwise, we use the same hyperparameters and model size due to computational constraints.
Guidance. Class-conditioning is implemented through cross attention with learned embeddings, following Rombach et al. (2021). We also drop the class conditioning 10% of the time to enable sampling with classifier-free guidance (Ho & Salimans, 2021). We use a guidance scale of s = 2 in all our quantitative and qualitative results, unless indicated otherwise. The guidance scale s is defined according to the equation
\hat{\epsilon}^s_\omega(x_\tau, c) = \epsilon_\omega(x_\tau) + s \left( \epsilon_\omega(x_\tau, c) - \epsilon_\omega(x_\tau) \right), \tag{10}
using the noise (ϵ) prediction formulation (we can easily obtain the noise prediction ϵ from the v-prediction, which we use for training; see Salimans & Ho (2022)). In the above equation, ϵ_ω(x_τ) denotes the unconditional score function, ϵ_ω(x_τ, c) the conditional score function when conditioning on class c, and \hat{\epsilon}^s_\omega(x_\tau, c) is the resulting guided score function with guidance scale s.
Compute Limitations and Further Scaling. Note that due to their noisy training objective, diffusion models have been shown to scale well with more compute and larger batch sizes (Rombach et al., 2021;Karras et al., 2022;Kingma & Gao, 2023). State-of-the-art models for regular, non-3D-aware image synthesis usually use significantly larger batch sizes (>1000) than we do. This suggests that our models, in particular on the highly diverse ImageNet dataset, could probably be improved considerably with more computational resources." }, { "figure_ref": [], "heading": "B.3 MONOCULAR DEPTH", "publication_ref": [ "b5", "b22", "b76", "b8", "b5", "b8", "b92", "b8", "b80", "b89", "b92", "b26" ], "table_ref": [], "text": "In our pipeline, we leverage geometric cues from a pretrained monocular depth estimation network (Bhat et al., 2023) to supervise the predicted depth D from the autoencoder. Note that the predicted depth is obtained using volume rendering, similar to Eq. (2) in the main paper:
D(r) = \frac{1}{\sum_{j=1}^{N} w^j_r} \sum_{i=1}^{N} w^i_r \, d^i_r \tag{11}
where d^i_r denotes the depth of sampling point i along camera ray r and w^i_r is its corresponding rendering weight as defined in Eq. (2) of the main paper. The monocular depth used for supervision, however, is only defined up to scale. Let D̃ denote the depth predicted by the monocular depth estimator. We first downsample it to match the resolution of the predicted, i.e., rendered, depth, which we refer to as D_low. Next, a scale s and a shift t are computed for each image by solving a least-squares criterion (Eigen et al., 2014;Ranftl et al., 2020):
(s, t) = \arg\min_{s, t} \sum_{r \in \mathcal{R}} \left( s \, \tilde{D}(r) + t - D_{low}(r) \right)^2. \tag{12}
Defining h = (s, t)^T and d_r = (\tilde{D}(r), 1)^T, the closed-form solution is given by
h = \left( \sum_r d_r d_r^T \right)^{-1} \sum_r d_r D_{low}(r). \tag{13}
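The scale-and-shift alignment in Eqs. (12) and (13) is a tiny linear least-squares problem per image. A minimal NumPy sketch, following the equations as reconstructed above and using our own variable names, could look as follows.
import numpy as np

def align_scale_shift(mono_depth_low, rendered_depth):
    # Solve Eq. (12)/(13): find (s, t) such that s * mono_depth_low + t matches the
    # rendered depth in the least-squares sense. Both inputs are 2D arrays at the
    # rendering resolution.
    d = mono_depth_low.reshape(-1)
    target = rendered_depth.reshape(-1)
    A = np.stack([d, np.ones_like(d)], axis=1)   # rows are d_r = (D_tilde(r), 1)
    # Ordinary least squares on A h = target is equivalent to the closed form
    # h = (sum_r d_r d_r^T)^{-1} sum_r d_r D_low(r) of Eq. (13).
    h, *_ = np.linalg.lstsq(A, target, rcond=None)
    s, t = h
    return s, t

# Usage: align the downsampled monocular depth before comparing it to the rendered depth.
# s, t = align_scale_shift(mono_low, rendered)
# aligned = s * mono_low + t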
C BASELINES
EG3D*. EG3D (Chan et al., 2022) relies on estimated camera poses, which are not available for the datasets we consider in this work. Hence, we adapt it to work in view space and remove the need for camera poses. Both the generator and discriminator are originally conditioned on the camera poses.
For our version, EG3D*, we remove the camera pose conditioning and model objects in view space by sampling novel views around the input view P_0 as described in Sec. B.1. For a fair comparison, we additionally train a variant of EG3D* that leverages monocular depth. Specifically, we equip EG3D* with the depth discriminator from our pipeline, D^depth_χ, which compares the rescaled rendered depth with the predictions from a pretrained monocular depth estimator (Bhat et al., 2023); see Sec. B.3 for more details on the rescaling of the depth.
We follow the training procedure from EG3D (Chan et al., 2022) and ensure that the models train stably by tuning the regularization strength λ in the adversarial objective. We use λ = 1 and λ = 2.5 for both variants on SDIP Elephants and Horses, respectively. For SDIP Dogs, we found it best to use λ = 2.5 for EG3D* and λ = 1.0 for EG3D* + D_depth. For the depth discriminator, we set λ_d = 10λ for all experiments. The models are trained until the discriminator has seen 10M images, as we find that FID improvements are marginal afterwards. For evaluation, we select the best models in terms of FID.
Note that Skorokhodov et al. (2023) include a detailed study on training EG3D without camera poses. As 3DGP clearly outperforms EG3D in this setting, we did not train the original EG3D in canonical space but instead directly compare to 3DGP.
For inversion, we use code kindly provided by the authors of EG3D (Chan et al., 2022). The inversion is performed using PTI (Roich et al., 2023) and consists of two optimization stages. In the first stage, the style code w is optimized to best match the input image. In the second stage, the network parameters are finetuned to further improve reconstruction quality. We observed that the inversion occasionally diverges. For the divergent cases, we reduce the learning rate in the optimization from 10⁻³ to 10⁻⁶, finding that this stabilizes the optimization.
POF3D, 3DGP. For POF3D (Shi et al., 2023), we use the unpublished code that was kindly provided by the authors to train their model. We follow their training procedure and hyperparameter choices. For 3DGP (Skorokhodov et al., 2023), we trained the models using the publicly available code https://github.com/snap-research/3dgp, which was released shortly before the submission deadline. We found that the training diverges on SDIP Elephants but, as suggested by the authors, were able to restart the model from the last stable checkpoint, which then converged. For SDIP Horses, training diverges after around 2.5M images, even when restarting from the last stable checkpoint, so we report results on the last stable checkpoint.
Both POF3D and 3DGP are trained until the discriminator has seen 10M images, as we observed no or only marginal improvements in FID with longer training.
StyleNeRF. We train StyleNeRF (Gu et al., 2022) using the official implementation of the authors, https://github.com/facebookresearch/StyleNeRF. On the SDIP datasets, we train until the discriminator has seen 20M images; on ImageNet, we stop training after 35M images. In both cases, we only observed marginal changes in FID with longer training.
SceneScape*. As an additional baseline, we analyzed a combination of a 2D generative model and a 2D inpainting model. We base our implementation on the publicly available code of SceneScape (Fridman et al., 2023) (https://github.com/RafailFridman/SceneScape.git). Specifically, we generate images using the ImageNet checkpoint from LDM (https://github.com/CompVis/latent-diffusion.git). Next, we predict the corresponding depth using ZoeDepth, i.e. using the same pretrained depth estimator as in our approach, and warp the image to a novel view. Lastly, an inpainting model fills the holes that result from warping. We use an inpainting variant of Stable Diffusion (https://huggingface.co/docs/diffusers/using-diffusers/inpaint#stable-diffusion-inpainting) and provide the warped image, its mask, and the text prompt \"a photo of a <class name>\" as input." }, { "figure_ref": [ "fig_8" ], "heading": "D ADDITIONAL RESULTS", "publication_ref": [], "table_ref": [], "text": "Autoencoder for Compression and Novel-View Synthesis. Fig. 9 shows further examples of WildFusion and baselines for novel view synthesis on images unseen during training (using GAN inversion to embed the given images in latent space for EG3D* and EG3D* + D_depth). We see that our model generally correctly reconstructs the input object and is able to synthesize a high-quality novel view. Moreover, although there are small artifacts, WildFusion also produces plausible geometry. In comparison, the baselines cannot accurately reconstruct the correct object, and the geometry is often flat or incorrect. " }, { "figure_ref": [ "fig_0", "fig_0", "fig_9", "fig_15", "fig_16", "fig_17", "fig_1" ], "heading": "3D-Aware Image Synthesis with Latent Diffusion Models", "publication_ref": [], "table_ref": [], "text": "We first compare WildFusion against the additional baseline SceneScape*. For quantitative analysis, we sample novel views similar to our evaluation by sampling yaw and pitch angles from Gaussian distributions with σ = 0.3 and 0.15 radians, using the depth at the center of the image to define the rotation center. With this approach, we get an FID of 12.3 on ImageNet, compared to 35.4 for WildFusion. However, as discussed in the main paper, FID only measures image quality and does not consider all important aspects of 3D-aware image synthesis, e.g. 3D consistency. In fact, we observe that the inpainting often adds texture-like artifacts or more instances of the given ImageNet class. We provide some examples in Fig. 10. To enforce consistency between novel views, we run the full pipeline of SceneScape to fuse the mesh for multiple generated images. For this setting, we sample 9 camera poses by combining yaw angles of [-0.3, 0., 0.3] and pitch angles of [-0.15, 0., 0.15] and iteratively update the mesh by finetuning the depth estimator. We show the final meshes in Fig. 10 (bottom two rows). For all samples we evaluated, we observe severe intersections in the mesh and generally inferior geometry compared to our approach.
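For reference, the random evaluation poses described above (Gaussian yaw and pitch with σ = 0.3 and 0.15 radians, rotating about a pivot at the center-pixel depth) can be sketched roughly as follows. The rotate-about-pivot construction and all names are our own assumptions, not the exact evaluation code.
import numpy as np

def sample_eval_pose(center_depth, sigma_yaw=0.3, sigma_pitch=0.15, rng=np.random):
    # Sample a relative camera pose by rotating the input camera around a pivot
    # located at the center-pixel depth along the optical axis.
    yaw = rng.normal(0.0, sigma_yaw)       # radians
    pitch = rng.normal(0.0, sigma_pitch)   # radians
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    R = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]]) @ \
        np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    pivot = np.array([0.0, 0.0, center_depth])
    pose = np.eye(4)
    pose[:3, :3] = R
    pose[:3, 3] = pivot - R @ pivot   # rotation about the pivot keeps the pivot fixed
    return pose   # relative pose with respect to the input view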
We remark that SceneScape's test-time optimization takes multiple minutes per sample, and a large-scale quantitative evaluation was out of the scope of this work. Our rotating camera movements around a single object are much more challenging, e.g. due to larger occlusions, than the backward motion shown in SceneScape. We hypothesize that this causes SceneScape to struggle more in our setting.
We also include more samples from WildFusion and 3DGP on ImageNet in Fig. 11. While samples from 3DGP look very similar within a class, WildFusion generates diverse samples. Fig. 17, Fig. 18, Fig. 19 and Fig. 20 show further WildFusion results for 3D-aware image synthesis leveraging our latent diffusion model that synthesizes 3D-aware latent space encodings. We can see that all novel samples are of high quality and that the camera angle changes result in realistic viewpoint changes of the scenes. Note that our model only trains on unposed, single-view images and does not need to learn a complex pose distribution because it models objects in view space.
More generated results can be found in the supplementary video, including baseline comparisons and extracted geometries." }, { "figure_ref": [], "heading": "Ablation Studies.", "publication_ref": [ "b30" ], "table_ref": [ "tab_8" ], "text": "Fig. 12 visualizes the impact of different configurations on the geometry. For the base config, the model learns a planar geometry, which is reflected in a low NFS, cf. Tab. 3 in the main paper. Adding a discriminator and the ViT backbone improves geometry significantly, but the geometry remains noisy. Incorporating geometry cues in the form of monocular depth and a depth discriminator D^depth_χ helps to remove artifacts from the geometry, resulting in a lower NFS. Modeling unbounded scenes with contracted coordinates does not significantly change geometry but improves nvFID, cf. Tab. 3 in the main paper. Further supervision on the rendered depth (L^2D_depth) does not improve results, but as it does not hurt performance either, we kept it in our pipeline. Lastly, both adding depth as an input to the encoder and directly supervising the rendering weights with L^3D_depth result in slight improvements in NFS and qualitatively improve geometry. Note how the geometry is less planar when supervising the rendering weights with L^3D_depth. We remark that the model used in this ablation is much smaller than our main models and has limited expressivity.
Figure 10: We compare WildFusion against a variant of SceneScape that combines a 2D generative model with a pre-trained inpainting model. First four rows: The leftmost image is the input view, and the next two images are novel views at ±17° yaw angles for the two methods. We observe severe inpainting inconsistencies for the SceneScape baseline, especially for the higher-angle novel views. Last two rows: The leftmost image is again the input image, the next image is a novel view at a -17° yaw angle, and the last image shows the extracted geometry/mesh for the two methods. We find that due to the inconsistencies of the inpainting model across views, the fused meshes for SceneScape have severe intersections and overall inferior geometry compared to WildFusion.
We further ablate inference with different classifier-free guidance scales (Ho & Salimans, 2021) in Table 7. For better compatibility with previous works, we also evaluated FID on 50K generated images for s = 3, which drops from 28.5 on 10K images to 25.3 on 50K images. For computational efficiency, we report FID on 10K images in all other instances throughout the paper.
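As a small illustration of Eq. (10) and of how the scales in Table 7 enter at sampling time, classifier-free guidance can be applied per denoising step roughly as follows; this is a generic sketch with assumed model interfaces, not the released sampler.
import torch

def guided_eps(model, x_t, t, class_emb, scale=2.0):
    # Classifier-free guidance, Eq. (10): mix conditional and unconditional noise
    # predictions with guidance scale s (s = 2 is the default used in the paper).
    # The model is assumed to return an epsilon prediction; a v-prediction network
    # would first be converted to epsilon (Salimans & Ho, 2022).
    eps_uncond = model(x_t, t, cond=None)        # conditioning dropped
    eps_cond = model(x_t, t, cond=class_emb)     # class-conditional prediction
    return eps_uncond + scale * (eps_cond - eps_uncond)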
Limitations. Modeling instances in view space alleviates the need for posed images and learned camera distributions. However, it is a very challenging task. This can be seen, for instance, in Fig. 9: it becomes difficult to produce sharp 3D geometry for both the baseline models and WildFusion, although WildFusion produces high-quality 3D-aware novel views and still the most realistic geometry.
Furthermore, as WildFusion is trained with fixed azimuth and polar angle ranges, it is currently not possible to perform novel view synthesis across the full 360°. Increasing the ranges would be an interesting direction for future work. The challenge would lie in producing realistic content when largely unobserved regions become visible after large viewpoint changes. Note, however, that to the best of our knowledge there currently exist no methods that can produce 360° views when trained on the kind of datasets we are training on, which only show a single view per instance, often from a similar front direction.
We observed that the synthesized samples occasionally exhibited plane-like geometry artifacts. Our adversarial loss in the autoencoder should, in principle, avoid this, as it enforces realistic renderings from different viewpoints. We hypothesize that this is due to the autoencoder, in rare cases, favoring the simple solution of copying the image across the triplanes to reduce the reconstruction loss.
Moreover, WildFusion relies on fixed camera intrinsics (see Sec. B.1), which we need to pick ourselves. However, we found that our choice worked well for all three datasets without further tuning. Hence, this is a minor limitation, in particular compared to methods that work in canonical coordinate systems and need to estimate a pose for each instance (as discussed, this is not possible for many complex, non-aligned datasets). In future work, the camera intrinsics could potentially be learned. " }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "Katja Schwarz and Andreas Geiger were supported by the ERC Starting Grant LEGO-3D (850533) and the DFG EXC number 2064/1 - project number 390727645. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Katja Schwarz. Lastly, we would like to thank Nicolas Guenther for his general support." } ]
Modern learning-based approaches to 3D-aware image synthesis achieve high photorealism and 3D-consistent viewpoint changes for the generated images. Existing approaches represent instances in a shared canonical space. However, for in-the-wild datasets a shared canonical system can be difficult to define or might not even exist. In this work, we instead model instances in view space, alleviating the need for posed images and learned camera distributions. We find that in this setting, existing GAN-based methods are prone to generating flat geometry and struggle with distribution coverage. We hence propose WildFusion, a new approach to 3D-aware image synthesis based on latent diffusion models (LDMs). We first train an autoencoder that infers a compressed latent representation, which additionally captures the images' underlying 3D structure and enables not only reconstruction but also novel view synthesis. To learn a faithful 3D representation, we leverage cues from monocular depth prediction. Then, we train a diffusion model in the 3D-aware latent space, thereby enabling synthesis of high-quality 3D-consistent image samples, outperforming recent state-of-the-art GAN-based methods. Importantly, our 3D-aware LDM is trained without any direct supervision from multiview images or 3D geometry and does not require posed images or learned pose or camera distributions. It directly learns a 3D representation without relying on canonical camera coordinates. This opens up promising research avenues for scalable 3D-aware image synthesis and 3D content creation from in-the-wild image data. See https://katjaschwarz.github.io/wildfusion/ for videos of our 3D results.
WILDFUSION: LEARNING 3D-AWARE LATENT DIFFUSION MODELS IN VIEW SPACE
[ { "figure_caption": "Figure 1 :1Figure 1: WildFusion: Left: Input images, novel views and geometry from first-stage autoencoder. Right: Novel", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Sample Diversity: Generated samples on ImageNet. Rows indicate class; columns show uncurated random samples. While WildFusion generates diverse samples due to its diffusion model-based framework (left), the GAN-based 3DGP (Skorokhodov et al., 2023) has very low intra-class diversity (mode collapse, right).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Baseline comparisons for novel view synthesis on images unseen during training. Shown are the input image and two novel views per method. Viewpoints across methods are the same. Included video for more results.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Generated 3D-aware image samples and geometry by WildFusion. Included videos for more results.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: 3D-Aware Image Interpolation. We encode two images into latent space (far left and far right), further encode into the diffusion model's Gaussian prior space (inverse DDIM), interpolate the resulting encodings, and generate the corresponding 3D images along the interpolation path.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: 3D-Aware Generative Image Resampling. Given an image (far left), we forward diffuse its latent encoding for varying numbers of steps and re-generate from the partially noised encodings. Depending on how far we diffuse, we obtain varying levels of generative image resampling.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "I, P nv , λ; θ, ψ, ϕ) + V depth (I, P nv , λ d ; θ, ψ, χ) + L rec (I P , P; θ, ψ) + λ KL L KL (I P ; θ)]", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 13 ,13Fig. 13, Fig. 14, Fig. 15 and Fig. 16 show further novel view synthesis results from WildFusion's 3D-aware autoencoder. The results demonstrate our model's ability to correctly generate novel views, given the encoded input view. All viewpoint changes result in high-quality, realistic outputs.", "figure_data": "", "figure_id": "fig_7", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Comparison with baselines for novel view synthesis on images unseen during training. Shown are the input image, a novel view and the geometry extracted with marching cubes. The viewpoints across methods are the same. See the included video for more results.", "figure_data": "", "figure_id": "fig_8", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Sample Diversity: Generated samples on ImageNet. Rows indicate class; columns show uncurated random samples. 
While WildFusion generates diverse samples due to its diffusion model-based framework (left), the GAN-based 3DGP (Skorokhodov et al., 2023) has very low intra-class diversity (mode collapse, right).", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Reconstructions (small) and geometry for different settings in the ablation study. The geometry was extracted by applying marching cubes to the density values of the feature field. We can see an improvement in geometry, as more components are added to the model. Note that the underlying model for this experiment is very small and was used only for the ablation study. It has limited expressivity.", "figure_data": "", "figure_id": "fig_10", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Input images (left column) and novel views from WildFusion's 3D-aware autoencoder for SDIP Dogs. The results span a yaw angle of 40 • .", "figure_data": "", "figure_id": "fig_11", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: Input images (left column) and novel views from WildFusion's 3D-aware autoencoder for SDIP Elephants. The results span a yaw angle of 40 • .", "figure_data": "", "figure_id": "fig_12", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: Input images (left column) and novel views from WildFusion's 3D-aware autoencoder for SDIP Horses. The results span a yaw angle of 40 • .", "figure_data": "", "figure_id": "fig_13", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16 :16Figure 16: Input images (left column) and novel views from WildFusion's 3D-aware autoencoder for ImageNet. The results span a yaw angle of 40 • .", "figure_data": "", "figure_id": "fig_14", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Figure 17 :17Figure 17: Generated images and novel views from WildFusion for SDIP Dogs. The results span a yaw angle of 40 • .", "figure_data": "", "figure_id": "fig_15", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 18 :18Figure 18: Generated images and novel views from WildFusion for SDIP Elephants. The results span a yaw angle of 40 • .", "figure_data": "", "figure_id": "fig_16", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Figure 19 :19Figure 19: Generated images and novel views from WildFusion for SDIP Horses. The results span a yaw angle of 40 • .", "figure_data": "", "figure_id": "fig_17", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Reconstruction and novel-view synthesis on SDIP Dogs, Elephants and Horses at resolution 256 2 . All evaluations on held-out test set. We report LPIPS, novel-view FID (nvFID) and non-flatness-score (NFS).", "figure_data": "SDIP DogsSDIP ElephantsSDIP HorsesRec.TimeEG3D* (Chan et al., 2022)0.4471.2312.010.4327.9912.890.4068.2512.90> 100EG3D* + D depth0.3836.6514.430.4556.8615.740.3436.2712.98> 100WildFusion (ours)0.2117.431.80.289.032.00.2213.428.70.04", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "3D-aware image synthesis results on unimodal datasets. 
Baselines above double line require camera pose estimation; methods below work in view space.", "figure_data": "SDIP DogsSDIP ElephantsSDIP Horses↓FID ↓FIDCLIP ↑NFS ↑Precision ↑Recall ↓FID ↓FIDCLIP ↑NFS ↑Precision ↑Recall ↓FID ↓FIDCLIP ↑NFS ↑Precision ↑RecallPOF3D (Shi et al., 2023)17.45.428.90.570.366.49.230.20.590.30 16.415.132.60.560.253DGP (Skorokhodov et al., 2023) 5.96.236.30.730.383.75.932.10.670.229.013.029.20.600.28EG3D* (Chan et al., 2022)16.35.811.80.600.293.06.813.30.590.316.710.214.10.570.23EG3D* + Ddepth18.78.813.90.710.154.58.518.30.560.246.58.713.70.590.30StyleNeRF (Gu et al., 2022)12.37.930.00.650.34 10.09.120.10.530.174.58.127.20.650.35WildFusion (ours)12.25.231.70.660.382.96.532.20.700.344.38.828.80.700.37", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "3D", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation study on SDIP Dogs.", "figure_data": "Model configuration↓ nvFID ↑NFSBase config53.210.3+ D ϕ61.137.0+ ViT backbone48.333.9+ D depth χ40.534.0+ modeling unbounded scenes32.832.4+ L 2D depth34.632.0+ encode depth33.333.3+ L 3D depth34.033.7", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Weights of all losses used in the training objective.", "figure_data": "λ px λ V GG λ 2D depthλ 3D depthλ KL λ λ dWeight 1010111e-4 1 10", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Hyperparameters for our latent diffusion models.", "figure_data": "ArchitectureTrainingDiffusion SetupImage shape256 × 256 × 3 ParameterizationvDiffusion steps 1000z-shape32 × 32 × 4Learning rate10 -4 Noise schedule CosineChannels224Batch size per GPU 64Offset s0.008Depth2#GPUs4Scale factor z 0.5Channel multiplier1,2,4,4p drop0.1SamplerDDIMAttention resolutions 32,16,8Steps200Head channels32η1.0", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Evaluation on ImageNet with different classifier-free guidance scales s.", "figure_data": "↓FID↓FID CLIP↑NFS↑Precision↑Recalls = 165.115.333.60.580.16s = 1.545.212.933.70.600.19s = 235.411.733.80.590.20s = 2.530.211.032.70.590.19s = 328.510.932.90.580.18s = 525.511.633.60.530.13s = 1029.713.733.10.440.08", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" } ]
Katja Schwarz; Seung Wook Kim; Jun Gao; Sanja Fidler; Andreas Geiger; Karsten Kreis
[ { "authors": "Rameen Abdal; Yipeng Qin; Peter Wonka", "journal": "", "ref_id": "b0", "title": "Image2stylegan: How to embed images into the stylegan latent space", "year": "2019" }, { "authors": "Titas Anciukevicius; Zexiang Xu; Matthew Fisher; Paul Henderson; Hakan Bilen; Niloy J Mitra; Paul Guerrero", "journal": "", "ref_id": "b1", "title": "RenderDiffusion: Image diffusion for 3D reconstruction, inpainting and generation", "year": "2023" }, { "authors": "Yogesh Balaji; Seungjun Nah; Xun Huang; Arash Vahdat; Jiaming Song; Karsten Kreis; Miika Aittala; Timo Aila; Samuli Laine; Bryan Catanzaro; Tero Karras; Ming-Yu Liu", "journal": "", "ref_id": "b2", "title": "ediff-i: Text-to-image diffusion models with ensemble of expert denoisers", "year": "2022" }, { "authors": "Jonathan T Barron; Ben Mildenhall; Dor Verbin; P Pratul; Peter Srinivasan; Hedman", "journal": "CVPR", "ref_id": "b3", "title": "Mip-nerf 360: Unbounded anti-aliased neural radiance fields", "year": "2022" }, { "authors": "Miguel Ángel Bautista; Pengsheng Guo; Samira Abnar; Walter Talbott; Alexander Toshev; Zhuoyuan Chen; Laurent Dinh; Shuangfei Zhai; Hanlin Goh; Daniel Ulbricht; Afshin Dehghan; Josh M Susskind", "journal": "", "ref_id": "b4", "title": "GAUDI: A neural architect for immersive 3d scene generation", "year": "2022" }, { "authors": "Farooq Shariq; Reiner Bhat; Diana Birkl; Peter Wofk; Matthias Wonka; Müller", "journal": "", "ref_id": "b5", "title": "Zoedepth: Zero-shot transfer by combining relative and metric depth", "year": "2023" }, { "authors": "Shengqu Cai; Anton Obukhov; Dengxin Dai; Luc Van Gool", "journal": "", "ref_id": "b6", "title": "Pix2nerf: Unsupervised conditional p-gan for single image to neural radiance fields translation", "year": "2022" }, { "authors": "Eric R Chan; Marco Monteiro; Petr Kellnhofer; Jiajun Wu; Gordon Wetzstein", "journal": "", "ref_id": "b7", "title": "Pi-gan: Periodic implicit generative adversarial networks for 3d-aware image synthesis", "year": "2021" }, { "authors": "Eric R Chan; Connor Z Lin; Matthew A Chan; Koki Nagano; Boxiao Pan; Shalini De Mello; Orazio Gallo; Leonidas Guibas; Jonathan Tremblay; Sameh Khamis; Tero Karras; Gordon Wetzstein", "journal": "CVPR", "ref_id": "b8", "title": "Efficient geometry-aware 3D generative adversarial networks", "year": "2022" }, { "authors": "Eric R Chan; Koki Nagano; Matthew A Chan; Alexander W Bergman; Jeong Joon Park; Axel Levy; Miika Aittala; Shalini De Mello; Tero Karras; Gordon Wetzstein", "journal": "", "ref_id": "b9", "title": "GeNVS: Generative novel view synthesis with 3D-aware diffusion models", "year": "2023" }, { "authors": "Yunjey Choi; Youngjung Uh; Jaejun Yoo; Jung-Woo Ha", "journal": "", "ref_id": "b10", "title": "Stargan v2: Diverse image synthesis for multiple domains", "year": "2020" }, { "authors": "Congyue Deng; \" Chiyu; \" Max; Charles R Jiang; Xinchen Qi; Yin Yan; Leonidas Zhou; Dragomir Guibas; Anguelov", "journal": "", "ref_id": "b11", "title": "Nerdi: Single-view nerf synthesis with language-guided diffusion as general image priors", "year": "2022" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Kai Li Jia Li; Li Li; Fei-Fei", "journal": "", "ref_id": "b12", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Yu Deng; Jiaolong Yang; Jianfeng Xiang; Xin Tong", "journal": "", "ref_id": "b13", "title": "Gram: Generative radiance manifolds for 3d-aware image generation", "year": "2022" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "Advances in 
Neural Information Processing Systems", "ref_id": "b14", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Prafulla Dhariwal; Alexander Quinn; Nichol ", "journal": "NeurIPS", "ref_id": "b15", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Tim Dockhorn; Arash Vahdat; Karsten Kreis", "journal": "ICLR", "ref_id": "b16", "title": "Score-based generative modeling with criticallydamped langevin diffusion", "year": "2022" }, { "authors": "Tim Dockhorn; Arash Vahdat; Karsten Kreis", "journal": "", "ref_id": "b17", "title": "GENIE: Higher-Order Denoising Diffusion Solvers", "year": "2022" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "", "ref_id": "b18", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "M Yilun Du; B Joshua Collins; Vincent Tenenbaum; Sitzmann", "journal": "", "ref_id": "b19", "title": "Learning signalagnostic manifolds of neural fields", "year": "2021" }, { "authors": "Emilien Dupont; Hyunjik Kim; S M Ali Eslami; Danilo Jimenez Rezende; Dan Rosenbaum", "journal": "", "ref_id": "b20", "title": "From data to functa: Your data point is a function and you can treat it like one", "year": "2022" }, { "authors": "Ainaz Eftekhar; Alexander Sax; Jitendra Malik; Amir Zamir", "journal": "", "ref_id": "b21", "title": "Omnidata: A scalable pipeline for making multi-task mid-level vision datasets from 3d scans", "year": "2021" }, { "authors": "David Eigen; Christian Puhrsch; Rob Fergus", "journal": "NeurIPS", "ref_id": "b22", "title": "Depth map prediction from a single image using a multi-scale deep network", "year": "2014" }, { "authors": "Patrick Esser; Robin Rombach; Björn Ommer", "journal": "", "ref_id": "b23", "title": "Taming transformers for high-resolution image synthesis", "year": "2021" }, { "authors": "Rafail Fridman; Amit Abecasis; Yoni Kasten; Tali Dekel", "journal": "", "ref_id": "b24", "title": "Scenescape: Text-driven consistent scene generation", "year": "2023" }, { "authors": "Ian J Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron C Courville; Yoshua Bengio", "journal": "NeurIPS", "ref_id": "b25", "title": "Generative adversarial nets", "year": "2014" }, { "authors": "Jiatao Gu; Lingjie Liu; Peng Wang; Christian Theobalt", "journal": "ICLR", "ref_id": "b26", "title": "Stylenerf: A style-based 3d-aware generator for high-resolution image synthesis", "year": "2022" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b27", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Philipp Henzler; Tobias Niloy J Mitra; Ritschel", "journal": "", "ref_id": "b28", "title": "Escaping plato's cave: 3d shape from adversarial rendering", "year": "2019" }, { "authors": "Martin Heusel; Hubert Ramsauer; Thomas Unterthiner; Bernhard Nessler; Sepp Hochreiter", "journal": "", "ref_id": "b29", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "Jonathan Ho; Tim Salimans", "journal": "", "ref_id": "b30", "title": "Classifier-free diffusion guidance", "year": "2021" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "", "ref_id": "b31", "title": 
"Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Jonathan Ho; Chitwan Saharia; William Chan; David J Fleet; Mohammad Norouzi; Tim Salimans", "journal": "J. Mach. Learn. Res", "ref_id": "b32", "title": "Cascaded diffusion models for high fidelity image generation", "year": "2022" }, { "authors": "Aapo Hyvärinen", "journal": "Journal of Machine Learning Research", "ref_id": "b33", "title": "Estimation of non-normalized statistical models by score matching", "year": "2005" }, { "authors": "Kyungmin Jo; Gyumin Shim; Sanghun Jung; Soyoung Yang; Jaegul Choo", "journal": "", "ref_id": "b34", "title": "Cg-nerf: Conditional generative neural radiance fields", "year": "2021" }, { "authors": "James T Kajiya; Brian Von Herzen", "journal": "ACM Trans. on Graphics", "ref_id": "b35", "title": "Ray tracing volume densities", "year": "1984" }, { "authors": "Nikolai Kalischek; Torben Peters; Jan D Wegner; Konrad Schindler", "journal": "", "ref_id": "b36", "title": "Tetrahedral diffusion models for 3d shape generation", "year": "2022" }, { "authors": "Tero Karras; Timo Aila; Samuli Laine; Jaakko Lehtinen", "journal": "ICLR", "ref_id": "b37", "title": "Progressive growing of GANs for improved quality, stability, and variation", "year": "2018" }, { "authors": "Tero Karras; Samuli Laine; Timo Aila", "journal": "", "ref_id": "b38", "title": "A style-based generator architecture for generative adversarial networks", "year": "2019" }, { "authors": "Tero Karras; Samuli Laine; Miika Aittala; Janne Hellsten; Jaakko Lehtinen; Timo Aila", "journal": "", "ref_id": "b39", "title": "Analyzing and improving the image quality of StyleGAN", "year": "2020" }, { "authors": "Tero Karras; Miika Aittala; Samuli Laine; Erik Härkönen; Janne Hellsten; Jaakko Lehtinen; Timo Aila", "journal": "", "ref_id": "b40", "title": "Alias-free generative adversarial networks", "year": "2021" }, { "authors": "Tero Karras; Miika Aittala; Timo Aila; Samuli Laine", "journal": "", "ref_id": "b41", "title": "Elucidating the design space of diffusionbased generative models", "year": "2022" }, { "authors": "Seung Wook Kim; Bradley Brown; Kangxue Yin; Karsten Kreis; Katja Schwarz; Daiqing Li; Robin Rombach; Antonio Torralba; Sanja Fidler", "journal": "", "ref_id": "b42", "title": "Neuralfield-ldm: Scene generation with hierarchical latent diffusion models", "year": "2023" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b43", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "P Diederik; Ruiqi Kingma; Gao", "journal": "", "ref_id": "b44", "title": "Understanding diffusion objectives as the elbo with simple data augmentation", "year": "2023" }, { "authors": "P Diederik; Max Kingma; Welling", "journal": "ICLR", "ref_id": "b45", "title": "Auto-encoding variational bayes", "year": "2014" }, { "authors": "Tuomas Kynkäänniemi; Tero Karras; Samuli Laine; Jaakko Lehtinen; Timo Aila", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b46", "title": "Improved precision and recall metric for assessing generative models", "year": "2019" }, { "authors": "Tuomas Kynkäänniemi; Tero Karras; Miika Aittala; Timo Aila; Jaakko Lehtinen", "journal": "ICLR", "ref_id": "b47", "title": "The role of imagenet classes in fréchet inception distance", "year": "2023" }, { "authors": "Yushi Lan; Xuyi Meng; Shuai Yang; Chen Change Loy; Bo Dai", "journal": "", "ref_id": "b48", "title": "Self-supervised geometry-aware encoder for style-based 3d GAN inversion", "year": "2022" 
}, { "authors": "Yu-Jhe Li; Tao Xu; Bichen Wu; Ningyuan Zheng; Xiaoliang Dai; Albert Pumarola; Peizhao Zhang; Peter Vajda; Kris Kitani", "journal": "", "ref_id": "b49", "title": "3d-aware encoding for style-based neural radiance fields", "year": "2022" }, { "authors": "Yiyi Liao; Katja Schwarz; Lars M Mescheder; Andreas Geiger", "journal": "CVPR", "ref_id": "b50", "title": "Towards unsupervised learning of generative models for 3d controllable image synthesis", "year": "2020" }, { "authors": "Chen-Hsuan Lin; Jun Gao; Luming Tang; Towaki Takikawa; Xiaohui Zeng; Xun Huang; Karsten Kreis; Sanja Fidler; Ming-Yu Liu; Tsung-Yi Lin", "journal": "", "ref_id": "b51", "title": "Magic3d: High-resolution text-to-3d content creation", "year": "2022" }, { "authors": "Kai-En Lin; Lin Yen-Chen; Wei-Sheng Lai; Tsung-Yi Lin; Yi-Chang Shih; Ravi Ramamoorthi", "journal": "", "ref_id": "b52", "title": "Vision transformer for nerf-based view synthesis from a single input image", "year": "2023" }, { "authors": "Tsung-Yi Lin; Piotr Dollár; Ross B Girshick; Kaiming He; Bharath Hariharan; Serge J Belongie", "journal": "", "ref_id": "b53", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "Ruoshi Liu; Rundi Wu; Basile Van Hoorick; Pavel Tokmakov; Sergey Zakharov; Carl Vondrick", "journal": "", "ref_id": "b54", "title": "Zero-1-to-3: Zero-shot one image to 3d object", "year": "2023" }, { "authors": "Ziwei Liu; Ping Luo; Xiaogang Wang; Xiaoou Tang", "journal": "", "ref_id": "b55", "title": "Deep learning face attributes in the wild", "year": "2015-12" }, { "authors": "Shitong Luo; Wei Hu", "journal": "", "ref_id": "b56", "title": "Diffusion probabilistic models for 3d point cloud generation", "year": "2021" }, { "authors": "Lars Mescheder; Andreas Geiger; Sebastian Nowozin", "journal": "", "ref_id": "b57", "title": "Which training methods for gans do actually converge?", "year": "2018" }, { "authors": "Gal Metzer; Elad Richardson; Or Patashnik; Raja Giryes; Daniel Cohen-Or", "journal": "", "ref_id": "b58", "title": "Latent-nerf for shape-guided generation of 3d shapes and textures", "year": "2022" }, { "authors": "Lu Mi; Abhijit Kundu; David A Ross; Frank Dellaert; Noah Snavely; Alireza Fathi", "journal": "", "ref_id": "b59", "title": "im2nerf: Image to neural radiance field in the wild", "year": "2022" }, { "authors": "S Mahdi; H Miangoleh; Sebastian Dille; Long Mai; Sylvain Paris; Yagiz Aksoy", "journal": "", "ref_id": "b60", "title": "Boosting monocular depth estimation models to high-resolution via content-adaptive multi-resolution merging", "year": "2021" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "", "ref_id": "b61", "title": "NeRF: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Ron Mokady; Michal Yarom; Omer Tov; Oran Lang; Daniel Cohen-Or; Tali Dekel; Michal Irani; Inbar Mosseri", "journal": "", "ref_id": "b62", "title": "Self-distilled stylegan: Towards generation from internet photos", "year": "2022" }, { "authors": "Gimin Nam; Mariem Khlifi; Andrew Rodriguez; Alberto Tono; Linqi Zhou; Paul Guerrero", "journal": "", "ref_id": "b63", "title": "3d-ldm: Neural implicit 3d shape generation with latent diffusion models", "year": "2022" }, { "authors": "Thu Nguyen-Phuoc; Chuan Li; Lucas Theis; Christian Richardt; Yong-Liang Yang", "journal": "", "ref_id": "b64", "title": "Hologan: Unsupervised learning of 3d representations from natural 
images", "year": "2019" }, { "authors": "Alex Nichol; Heewoo Jun; Prafulla Dhariwal; Pamela Mishkin; Mark Chen", "journal": "", "ref_id": "b65", "title": "Point-e: A system for generating 3d point clouds from complex prompts", "year": "2022" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal", "journal": "", "ref_id": "b66", "title": "Improved denoising diffusion probabilistic models", "year": "2021-07" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b67", "title": "GLIDE: towards photorealistic image generation and editing with text-guided diffusion models", "year": "2022" }, { "authors": "Michael Niemeyer; Andreas Geiger", "journal": "", "ref_id": "b68", "title": "Giraffe: Representing scenes as compositional generative neural feature fields", "year": "2021" }, { "authors": "Michael Niemeyer; Andreas Geiger", "journal": "", "ref_id": "b69", "title": "Campari: Camera-aware decomposed generative neural radiance fields", "year": "2021" }, { "authors": "Roy Or-El; Xuan Luo; Mengyi Shan; Eli Shechtman; Jeong Park; Ira Kemelmacher", "journal": "", "ref_id": "b70", "title": "Stylesdf: High-resolution 3d-consistent image and geometry generation", "year": "2022" }, { "authors": "Xingang Pan; Xudong Xu; Chen Change Loy; Christian Theobalt; Bo Dai", "journal": "NeurIPS", "ref_id": "b71", "title": "A shading-guided generative implicit model for shape-accurate 3d-aware image synthesis", "year": "2021" }, { "authors": "Dario Pavllo; David Joseph Tan; Marie-Julie Rakotosaona; Federico Tombari", "journal": "", "ref_id": "b72", "title": "Shape, pose, and appearance from a single image via bootstrapped radiance field inversion", "year": "2022" }, { "authors": "Songyou Peng; Michael Niemeyer; Lars Mescheder; Marc Pollefeys; Andreas Geiger", "journal": "", "ref_id": "b73", "title": "Convolutional occupancy networks", "year": "2020" }, { "authors": "Ben Poole; Ajay Jain; Jonathan T Barron; Ben Mildenhall", "journal": "", "ref_id": "b74", "title": "Dreamfusion: Text-to-3d using 2d diffusion", "year": "2022" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b75", "title": "Hierarchical textconditional image generation with CLIP latents", "year": "2022" }, { "authors": "René Ranftl; Katrin Lasinger; David Hafner; Konrad Schindler; Vladlen Koltun", "journal": "IEEE TPAMI", "ref_id": "b76", "title": "Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer", "year": "2020" }, { "authors": "Mark Daniel Rebain; Kwang Moo Matthews; Dmitry Yi; Andrea Lagun; Tagliasacchi", "journal": "", "ref_id": "b77", "title": "Lolnerf: Learn from one look", "year": "2021" }, { "authors": "Danilo Jimenez Rezende; Shakir Mohamed; Daan Wierstra", "journal": "", "ref_id": "b78", "title": "Stochastic backpropagation and approximate inference in deep generative models", "year": "2014" }, { "authors": "Elad Richardson; Yuval Alaluf; Or Patashnik; Yotam Nitzan; Yaniv Azar; Stav Shapiro; Daniel Cohen-Or", "journal": "", "ref_id": "b79", "title": "Encoding in style: a stylegan encoder for image-to-image translation", "year": "2021" }, { "authors": "Daniel Roich; Ron Mokady; H Amit; Daniel Bermano; Cohen-Or", "journal": "ACM TOG", "ref_id": "b80", "title": "Pivotal tuning for latent-based editing of real images", "year": "2023" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; 
Björn Ommer", "journal": "", "ref_id": "b81", "title": "Highresolution image synthesis with latent diffusion models", "year": "2021" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily Denton; Seyed Kamyar; Seyed Ghasemipour; Burcu Karagol Ayan; Sara Mahdavi; Rapha Gontijo Lopes", "journal": "", "ref_id": "b82", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "S M Mehdi; Olivier Sajjadi; Mario Bachem; Olivier Lucic; Sylvain Bousquet; Gelly", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b83", "title": "Assessing generative models via precision and recall", "year": "2018" }, { "authors": "Tim Salimans; Jonathan Ho", "journal": "", "ref_id": "b84", "title": "Progressive distillation for fast sampling of diffusion models", "year": "2022" }, { "authors": "Kyle Sargent; Jing Yu Koh; Han Zhang; Huiwen Chang; Charles Herrmann; Pratul Srinivasan; Jiajun Wu; Deqing Sun", "journal": "", "ref_id": "b85", "title": "Vq3d: Learning a 3d-aware generative model on imagenet", "year": "2023-10" }, { "authors": "Axel Sauer; Katja Schwarz; Andreas Geiger", "journal": "ACM Trans. on Graphics", "ref_id": "b86", "title": "Stylegan-xl: Scaling stylegan to large diverse datasets", "year": "2022" }, { "authors": "Katja Schwarz; Yiyi Liao; Michael Niemeyer; Andreas Geiger", "journal": "NeurIPS", "ref_id": "b87", "title": "GRAF: generative radiance fields for 3d-aware image synthesis", "year": "2020" }, { "authors": "Katja Schwarz; Axel Sauer; Michael Niemeyer; Yiyi Liao; Andreas Geiger", "journal": "NeurIPS", "ref_id": "b88", "title": "Voxgraf: Fast 3d-aware image synthesis with sparse voxel grids", "year": "2022" }, { "authors": "Zifan Shi; Yujun Shen; Yinghao Xu; Sida Peng; Yiyi Liao; Sheng Guo; Qifeng Chen; Dit-Yan Yeung", "journal": "CVPR", "ref_id": "b89", "title": "Learning 3d-aware image synthesis with unknown pose distribution", "year": "2023" }, { "authors": "J Ryan Shue; Eric Ryan Chan; Ryan Po; Zachary Ankner; Jiajun Wu; Gordon Wetzstein", "journal": "", "ref_id": "b90", "title": "3d neural field generation using triplane diffusion", "year": "2022" }, { "authors": "Ivan Skorokhodov; Sergey Tulyakov; Yiqun Wang; Peter Wonka", "journal": "", "ref_id": "b91", "title": "Epigraf: Rethinking training of 3d gans", "year": "2022" }, { "authors": "Ivan Skorokhodov; Aliaksandr Siarohin; Yinghao Xu; Jian Ren; Hsin-Ying Lee; Peter Wonka; Sergey Tulyakov", "journal": "", "ref_id": "b92", "title": "3d generation on imagenet", "year": "2023" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "PMLR", "ref_id": "b93", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b94", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "Yang Song; Stefano Ermon", "journal": "", "ref_id": "b95", "title": "Generative modeling by estimating gradients of the data distribution", "year": "2019" }, { "authors": "Yang Song; Jascha Sohl-Dickstein; P Diederik; Abhishek Kingma; Stefano Kumar; Ben Ermon; Poole", "journal": "", "ref_id": "b96", "title": "Score-based generative modeling through stochastic differential equations", "year": "2020" }, { "authors": "Omer Tov; Yuval Alaluf; Yotam Nitzan; Or Patashnik; Daniel Cohen-Or", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b97", "title": 
"Designing an encoder for stylegan image manipulation", "year": "2021" }, { "authors": "Arash Vahdat; Karsten Kreis; Jan Kautz", "journal": "NeurIPS", "ref_id": "b98", "title": "Score-based generative modeling in latent space", "year": "2021" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b99", "title": "Attention is all you need", "year": "2017" }, { "authors": "Pascal Vincent", "journal": "Neural Computation", "ref_id": "b100", "title": "A connection between score matching and denoising autoencoders", "year": "2011" }, { "authors": "Haochen Wang; Xiaodan Du; Jiahao Li; Raymond A Yeh; Greg Shakhnarovich", "journal": "", "ref_id": "b101", "title": "Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation", "year": "2022" }, { "authors": "Tengfei Wang; Bo Zhang; Ting Zhang; Shuyang Gu; Jianmin Bao; Tadas Baltrusaitis; Jingjing Shen; Dong Chen; Fang Wen; Qifeng Chen; Baining Guo", "journal": "", "ref_id": "b102", "title": "Rodin: A generative model for sculpting 3d digital avatars using diffusion", "year": "2022" }, { "authors": "Daniel Watson; William Chan; Ricardo Martin-Brualla; Jonathan Ho; Andrea Tagliasacchi; Mohammad Norouzi", "journal": "", "ref_id": "b103", "title": "Novel view synthesis with diffusion models", "year": "2022" }, { "authors": "Jianfeng Xiang; Jiaolong Yang; Yu Deng; Xin Tong", "journal": "", "ref_id": "b104", "title": "Gram-hd: 3d-consistent image generation at high resolution with generative radiance manifolds", "year": "2023-10" }, { "authors": "Jianfeng Xiang; Jiaolong Yang; Binbin Huang; Xin Tong", "journal": "", "ref_id": "b105", "title": "3d-aware image generation using 2d diffusion models", "year": "2023-10" }, { "authors": "Enze Xie; Wenhai Wang; Zhiding Yu; Anima Anandkumar; Jose M Alvarez; Ping Luo", "journal": "NeurIPS", "ref_id": "b106", "title": "Segformer: Simple and efficient design for semantic segmentation with transformers", "year": "2021" }, { "authors": "Jiaxin Xie; Hao Ouyang; Jingtan Piao; Chenyang Lei; Qifeng Chen", "journal": "", "ref_id": "b107", "title": "High-fidelity 3d GAN inversion by pseudo-multi-view optimization", "year": "2022" }, { "authors": "Xudong Xu; Xingang Pan; Dahua Lin; Bo Dai", "journal": "NeurIPS", "ref_id": "b108", "title": "Generative occupancy fields for 3d surface-aware image synthesis", "year": "2021" }, { "authors": "Yinghao Xu; Sida Peng; Ceyuan Yang; Yujun Shen; Bolei Zhou", "journal": "CVPR", "ref_id": "b109", "title": "3d-aware image synthesis via learning structural and textural representations", "year": "2022" }, { "authors": "Fei Yin; Yong Zhang; Xuan Wang; Tengfei Wang; Xiaoyu Li; Yuan Gong; Yanbo Fan; Xiaodong Cun; Ying Shan; Cengiz Öztireli; Yujiu Yang", "journal": "", "ref_id": "b110", "title": "3d GAN inversion with facial symmetry prior", "year": "2022" }, { "authors": "Fisher Yu; Jianxiong Xiao; Thomas A Funkhouser", "journal": "", "ref_id": "b111", "title": "Semantic alignment of lidar data at city scale", "year": "2015" }, { "authors": "Zehao Yu; Songyou Peng; Michael Niemeyer; Torsten Sattler; Andreas Geiger", "journal": "NeurIPS", "ref_id": "b112", "title": "Monosdf: Exploring monocular geometric cues for neural implicit surface reconstruction", "year": "2022" }, { "authors": "Xiaohui Zeng; Arash Vahdat; Francis Williams; Zan Gojcic; Or Litany; Sanja Fidler; Karsten Kreis", "journal": "NeurIPS", "ref_id": "b113", "title": "LION: latent point diffusion models 
for 3d shape generation", "year": "2022" }, { "authors": "Huangying Zhan; Ravi Garg; Chamara Saroj Weerasekera; Kejie Li; Harsh Agarwal; Ian Reid", "journal": "", "ref_id": "b114", "title": "Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction", "year": "2018" }, { "authors": "Jichao Zhang; Enver Sangineto; Hao Tang; Aliaksandr Siarohin; Zhun Zhong; Nicu Sebe; Wei Wang", "journal": "", "ref_id": "b115", "title": "3d-aware semantic-guided generative model for human synthesis", "year": "2021" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b116", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "Linqi Zhou; Yilun Du; Jiajun Wu", "journal": "", "ref_id": "b117", "title": "3d shape generation and completion through point-voxel diffusion", "year": "2021" }, { "authors": "Peng Zhou; Lingxi Xie; Bingbing Ni; Qi Tian", "journal": "", "ref_id": "b118", "title": "CIPS-3D: A 3D-Aware Generator of GANs Based on Conditionally-Independent Pixel Synthesis", "year": "2021" }, { "authors": "Jiapeng Zhu; Yujun Shen; Deli Zhao; Bolei Zhou", "journal": "Springer", "ref_id": "b119", "title": "In-domain gan inversion for real image editing", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 198.69, 448.52, 305.98, 16.21 ], "formula_id": "formula_0", "formula_text": "arg min ω E x∼pdata,τ ∼pτ ,ϵ∼N (0,I) ∥v -F ω (x τ , τ )∥ 2 2 ,(1)" }, { "formula_coordinates": [ 4, 131.76, 161.31, 372.91, 30.32 ], "formula_id": "formula_1", "formula_text": "f r = N i=1 w i r f i r , w i r = T i r α i r , T i r = i-1 j=1 1 -α j r , α i r = 1 -exp -σ i r δ i r ,(2)" }, { "formula_coordinates": [ 6, 108, 333.45, 396, 25.03 ], "formula_id": "formula_2", "formula_text": "L 2D depth = ||(s Dlow +t)-D low || 2 ." }, { "formula_coordinates": [ 6, 213.61, 393.27, 291.06, 22.6 ], "formula_id": "formula_3", "formula_text": "L 3D depth = r (1 - i∈Kr w i r ) 2 + ( i / ∈Kr w i r ) 2 .(3)" }, { "formula_coordinates": [ 6, 132.71, 578.77, 368.09, 12.62 ], "formula_id": "formula_4", "formula_text": "V (I, P nv , λ; θ, ψ, ϕ) = f (-D ϕ (G ψ (E θ (I, D), P nv ))) + f (D ϕ (I)) -λ∥∇D ϕ (I)∥ 2 , (4" }, { "formula_coordinates": [ 6, 500.8, 582.06, 3.87, 8.64 ], "formula_id": "formula_5", "formula_text": ")" }, { "formula_coordinates": [ 19, 167.67, 578.12, 333.13, 30.66 ], "formula_id": "formula_6", "formula_text": "R 0 = 1 0 0 0 -1 0 0 0 -1 T 0 = 0 0 2.7 K 0 = 5.4 0 0.5 0 5.4 0.5 0 0 1.0 . (5" }, { "formula_coordinates": [ 19, 500.8, 589.7, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 20, 181.31, 293.32, 323.36, 29.04 ], "formula_id": "formula_8", "formula_text": "x c = x rin r , if ||x|| ≤ r (1 -r in ) 1 - 1 ||x||-r+1 + r in x ||x|| , otherwise(6)" }, { "formula_coordinates": [ 20, 222.66, 428.06, 282, 11.03 ], "formula_id": "formula_9", "formula_text": "[µ, σ 2 ] = F P N (I), Z ∼ N (µ, σ 2 ),(7)" }, { "formula_coordinates": [ 21, 108, 320.67, 157.19, 12.09 ], "formula_id": "formula_10", "formula_text": "L px = | Î -I|, a perceptual loss L V GG" }, { "formula_coordinates": [ 21, 168.47, 347.92, 336.2, 32.21 ], "formula_id": "formula_11", "formula_text": "L rec = λ px L px + λ V GG L V GG + λ 2D depth L 2D depth + λ 3D depth L 3D depth(8) min θ" }, { "formula_coordinates": [ 21, 209.61, 697.11, 290.91, 10.76 ], "formula_id": "formula_13", "formula_text": "εs ω (x τ , c) = ϵ ω (x τ ) + s (ϵ ω (x τ , c) -ϵ ω (x τ )) ,(10" }, { "formula_coordinates": [ 22, 250.98, 261.91, 249.54, 30.32 ], "formula_id": "formula_14", "formula_text": "D(r) = 1 N j=1 w j r N i=1 w i r d i r (11" }, { "formula_coordinates": [ 22, 500.52, 272.64, 4.15, 8.64 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 22, 263.38, 376.55, 241.29, 22.61 ], "formula_id": "formula_16", "formula_text": "s,t r∈R s D(r) + t -D low (r) .(12)" }, { "formula_coordinates": [ 22, 220.93, 434.09, 283.73, 32.54 ], "formula_id": "formula_17", "formula_text": "h = r d r d T r -1 r d r D low (r) .(13)" } ]
2023-11-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b9", "b21", "b49", "b0", "b51", "b2", "b34", "b64", "b69", "b55", "b32", "b10", "b59", "b3", "b19", "b22", "b68", "b42", "b4", "b25", "b15", "b37", "b42", "b4", "b25", "b6", "b43", "b46", "b17", "b67", "b13", "b35", "b0", "b1" ], "table_ref": [], "text": "3D avatars present an opportunity to create experiences that are exceptionally authentic and immersive in telepresence [10], augmented reality (AR) [22], and virtual reality (VR) [50]. These applications [1,52,3,35] require the capture of human expressiveness, including poses, gestures, expressions, and others, to enable photo-realistic generation [65,70], animation [56], and interaction [33] in virtual environments.\nTraditional methods [11,60,4,20,23] typically create virtual avatars based on template registration or expensive multi-camera light stages in well-controlled environments. Recent efforts [69,43,5,26,16] have explored the use of generative models to produce 3D human bodies and clothing based on input parameters, such as SMPL [38], without the need of 3D supervision. Despite these advancements, current approaches are limited in their ability to handle expressive attributes of the human body, such Figure 1: XAGen can synthesize realistic 3D avatars with detailed geometry, while providing disentangled control over expressive attributes, i.e., facial expressions, jaw, body, and hand poses.\nas facial expressions and hand poses, as they primarily focus on body pose and shape conditions. Yet, there exist scenarios where fine-grained control ability is strongly desired, e.g., performing social interactions with non-verbal body languages in Metaverse, or driving digital characters to talk with various expressions and gestures, etc. Due to the lack of comprehensive modeling of the full human body, existing approaches [43,5,26] fail to provide control ability beyond the sparse joints of major body skeleton, leading to simple and unnatural animation.\nIn this work, our objective is to enhance the fine-grained control capabilities of GAN-based human avatar generation model. To achieve this, we introduce the first eXpressive 3D human Avatar Generation model (XAGen) that can (1) synthesize high-quality 3D human avatars with diverse realistic appearances and detailed geometries; (2) provide independent control capabilities for finegrained attributes, including body poses, hand poses, jaw poses, shapes, and facial expressions.\nXAGen is built upon recent unconditional 3D-aware generation models for static images [7,44]. One straightforward approach to implement fully animatable avatar generation is extending 3D GAN models to condition on expressive control signals, such as SMPL-X [47]. Though conceptually simple, such a direct modification of conditioning signal cannot guarantee promising appearance quality and control ability, particularly for two crucial yet challenging regions, i.e., the face and hands. This is because (1) Compared with body, face and hands contain similar or even more articulations. In addition, their scales are much smaller than arms, torso, and legs in a human body image, which hinders the gradient propagation from supervision. 
(2) Face and hands are entangled with the articulated human body and thus will be severely affected by large body pose deformation, leading to optimization difficulty when training solely on full-body image collections.\nTo address the above challenges, we decompose the learning process of body, face, and hands by adopting a multi-scale and multi-part 3D representation and rendering multiple parts independently using their respective observation viewpoints and control parameters. The rendered images are passed to multi-part discriminators, which provide multi-scale supervision during the training process. With these careful designs, XAGen can synthesize photo-realistic 3D human avatars that can be animated effectively by manipulating the corresponding control parameters for expressions and poses, as depicted in Figure 1. We conduct extensive experiments on a variety of benchmarks [18,68,14,36], demonstrating the superiority of XAGen over state-of-the-arts in terms of appearance, geometry, and controllability. Moreover, XAGen supports various downstream applications such as text-guided avatar creation and audio-driven animation, expanding its potential for practical scenarios.\nOur contributions are three-fold: (1) To the best of our knowledge, XAGen is the first 3D GAN model for fully animatable human avatar generation. (2) We propose a novel framework that incorporates multi-scale and multi-part 3D representation together with multi-part rendering technique to enhance the quality and control ability, particularly for the face and hands. (3) Experiments demonstrate XAGen surpasses state-of-the-art methods in terms of both quality and controllability, which enables various downstream applications, including text-guided avatar synthesis and audio-driven animation." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b26", "b27", "b50", "b6", "b43", "b54", "b58", "b62", "b41", "b28", "b63", "b25", "b42", "b68", "b56", "b15", "b70", "b31", "b37", "b30", "b8", "b68", "b70", "b42", "b6", "b25", "b37", "b61", "b44", "b1", "b18", "b11", "b46", "b45", "b16", "b29", "b57", "b40", "b41", "b52", "b60", "b47", "b7", "b14", "b53" ], "table_ref": [], "text": "Generative models for avatar creation. Generative models [27,28,51] have demonstrated unprecedented capability for synthesizing high-resolution photo-realistic images. Building upon these generative models, follow-up works [7,44,55,59,63] have focused on extending 2D image generation to the 3D domain by incorporating neural radiance field [42] or differentiable rasterization [29]. Although enabling 3D-aware generation, these works fail to provide control ability to manipulate the synthesized portrait images. To address this limitation, recent research efforts [64,26,43,69,57,16,71] have explored animatable 3D avatar generation leveraging parametric models for face [32] and body [38]. These works employ inverse [31] or forward [9] skinning techniques to control the facial attributes or body poses of the generated canonical avatars [69,71]. For human body avatars, additional challenges arise due to their articulation properties. Consequently, generative models for human avatars have explored effective 3D representation designs. Among them, ENARF [43] divides an efficient 3D representation [7] into multiple parts, with each part representing one bone. EVA3D [26] employs a similar multi-part design by developing a compositional neural radiance field. 
Despite enabling body control, such representation fails to generate the details of human faces or hands since these parts only occupy small regions in the human body images.\nOur method differs in two aspects. First, existing works can either control face or body, whereas ours is the first 3D avatar generation model with simultaneous fine-grained control over the face, body, and hands. Second, we devise a multi-scale and multi-part 3D representation, allowing for generating human body with high fidelity even for small regions like face and hands.\nExpressive 3D human modeling. Existing 3D human reconstruction approaches can be categorized into two main categories depending on whether explicit or implicit representations are used. Explicit representations mainly utilize the pre-defined mesh topology, such as statistical parametric models [38,62,45,2] or personalized mesh registrations [19,12], to model naked human bodies with various poses and shapes. To enhance the expressiveness, recent works have developed expressive statistical models capable of representing details beyond major human body [47,46,17] or introduced the surface deformation to capture fine-grained features [30,58]. On the other hand, leveraging the remarkable advances in implicit neural representations [41,42], another line of research has proposed to either rely purely on implicit representations [53] or combine it with statistical models [61,48,8] to reconstruct expressive 3D human bodies. The most recent work [15,54] proposed to learn a single full-body avatar from multi-part portrait videos or 3D scans. In contrast, our approach focuses on developing 3D generative model for fully animatable human avatars, which is trainable on only unstructured 2D image collections." }, { "figure_ref": [ "fig_1" ], "heading": "Method", "publication_ref": [ "b33", "b6", "b46", "b38" ], "table_ref": [], "text": "In this section, we introduce XAGen, a 3D generative model for synthesizing photo-realistic human avatars with expressive and disentangled controllability over facial expression, shape, jaw pose, body pose, and hand pose. Figure 2 depicts the pipeline of our method.\nGiven a random noise z sampled from Gaussian distribution, XAGen first synthesizes a human avatar with canonical body, face, and hand configurations. In this work, we use X-pose [34] and neutral shape, face, and hand as canonical configurations. We leverage Tri-plane [7] as the fundamental building block of 3D representation in our canonical generator. To increase the capability of 3D representation for the smaller-scale face and hands, we introduce multi-part and multi-scale designs into the canonical Tri-plane (Sec. 3.1). A mapping network first encodes z and the camera viewpoint of body c b into latent code w. The canonical generator then synthesizes three Tri-planes F k conditioned on w, where k ∈ {b, f, h} which stands for {body, face, hand}.\nBased on the generated canonical avatar, we deform it from canonical space to observation space under the guidance of control signal p b parameterized by an expressive statistical full body model, i.e., SMPL-X [47]. We adopt volumetric rendering [39] to synthesize the full body image. However, due to the scale imbalance between the face/hands and body, rendering only the full body image cannot guarantee quality for these detailed regions. To address this issue, we propose a multi-part rendering technique (Sec. 3.2). 
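To make the data flow of the pipeline described above concrete, the sketch below mirrors the text: a mapping network encodes the noise z and body camera c_b into a latent w, a canonical generator produces a compact feature map, and that map is separated into the body, face, and hand Tri-planes with the face/hand planes at half the body resolution. This is an illustrative reconstruction, not the authors' code: the channel count C, the 25-dimensional camera encoding, the toy linear backbone, and the pixel_unshuffle-based reshape are all assumptions.

```python
# Schematic sketch of the canonical generator (assumption-based, for illustration only).
import torch
import torch.nn as nn
import torch.nn.functional as F

C, W_b = 32, 64                 # feature channels and body Tri-plane resolution (assumed)
W_f = W_h = W_b // 2            # face/hand planes at half the body resolution (Sec. 3.1)

class CanonicalGenerator(nn.Module):
    def __init__(self, z_dim=512, cam_dim=25, w_dim=512):
        super().__init__()
        # mapping network: noise z and body camera c_b -> latent w
        self.mapping = nn.Sequential(nn.Linear(z_dim + cam_dim, w_dim), nn.ReLU(),
                                     nn.Linear(w_dim, w_dim))
        # toy stand-in for the synthesis backbone: w -> compact feature map with 9C/2 channels
        self.backbone = nn.Linear(w_dim, (9 * C // 2) * W_b * W_b)

    def forward(self, z, cam_body):
        w = self.mapping(torch.cat([z, cam_body], dim=-1))
        feat = self.backbone(w).view(-1, 9 * C // 2, W_b, W_b)
        # separate and reshape the compact map into body / face / hand Tri-planes
        F_b = feat[:, :3 * C]                                              # (B, 3C, W_b, W_b)
        F_f = F.pixel_unshuffle(feat[:, 3 * C:3 * C + 3 * C // 4], 2)      # (B, 3C, W_b/2, W_b/2)
        F_h = F.pixel_unshuffle(feat[:, 3 * C + 3 * C // 4:], 2)           # shared by both hands via a flip
        return w, F_b, F_f, F_h

gen = CanonicalGenerator()
w, F_b, F_f, F_h = gen(torch.randn(1, 512), torch.randn(1, 25))
```

The downstream SMPL-X-guided deformation, multi-part rendering, and super-resolution steps described in the text would consume these three Tri-planes.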
Specifically, we employ part-aware deformation and rendering based on the control parameters (p f and p h ) and cameras (c f and c h ). Accordingly, to ensure the plausibility and controllability of the generated avatars, we develop multi-part discriminators to critique the rendered images (Sec. 3.3).\n{p b , p f , p h } F b F f F h F b F f F h F b F f F h" }, { "figure_ref": [ "fig_1" ], "heading": "Multi-scale and Multi-part Representation", "publication_ref": [ "b6" ], "table_ref": [], "text": "XAGen is designed for expressive human avatars with an emphasis on the high-quality face and hands. However, the scale imbalance between face/hands and body may hamper the fidelity of the corresponding regions. To address this issue, we propose a simple yet effective multi-scale and multi-part representation for expressive human avatar generation. Our multi-scale representation builds upon the efficient 3D representation, i.e., Tri-plane [7], which stores the generated features on three orthogonal planes. Specifically, we design three Tri-planes for body, face, and hands, denoted as\nF b ∈ R Wb×Wb×3C , F f ∈ R Wf×Wf×3C\n, and F h ∈ R Wh×Wh×3C , respectively. The size of the face and hand Tri-planes is set to half of the body Tri-plane, with\nW f = W h = W b /2.\nAs depicted in Figure 2, our canonical generator first synthesizes a compact feature map F ∈ R Wb×Wb×9C/2 , where C represents the number of channels. We then separate and reshape\nF into F k ,\nwhere k ∈ {b, f, h}, representing the canonical space of the generated human avatar. Furthermore, to save computation cost, we exploit the symmetry property of hands to represent both left and right hands using one single F h through a horizontal flip operation (refer to Appendix for details)." }, { "figure_ref": [], "heading": "Multi-part Rendering", "publication_ref": [ "b16", "b30", "b5", "b43", "b25", "b68", "b25", "b68", "b36" ], "table_ref": [], "text": "Our method is trainable on unstructured 2D human images. Although this largely reduces the difficulty and cost to obtain data, the training is highly under-constrained due to the presence of diverse poses, faces, and clothes. To facilitate the training process and improve the appearance quality, we propose a multi-part rendering strategy. This strategy allows XAGen to learn each part based on the independent camera poses, which further enhances the geometry quality of the face and hands.\nSpecifically, for each training image, we utilize a pretrained model [17] to estimate SMPL-X parameters {p b , p f , p h } and camera poses {c b , c f , c h } for body, face, and hands, respectively. In the rendering stage, we shoot rays using {c b , c f , c h } and sample points {x b o , x f o , x h o } along the rays in the observation space. To compute the feature for each point, we employ inverse linear-blend skinning [31], which finds the transformation of each point from observation space to canonical space produced by the canonical generator. Based on the parameter p k , where k ∈ {b, f, h}, SMPL-X yields an expressive human body model (v, w), where v ∈ R N ×3 represents N vertices, and w ∈ R N ×J represents the skinning weights of each vertex with respect to joint J. For each point x k,i o , where i = 1 • • • M k and M k is the number of sampled points, we find its nearest neighbour n from vertices v. 
We then compute the corresponding transformation from observation space to canonical space\nT k,i = ( j w n j R j t j 0 1 I ∆ n 0 1 ) -1 ,(1)\nwhere j = 1 • • • J, R j and t j are derived from p k with Rodrigues formula [6], and ∆ n represents the offset caused by pose and shape for vertex n, which is calculated by SMPL-X. Based on this inverse transformation, we can calculate the coordinates for each point in canonical space x k,i c as\nx k,i c = T k,i x k,i o ,(2)\nwhere we apply homogeneous coordinates for the calculation.\nFor the face and hands rendering, i.e., k ∈ {f, h}, we directly interpolate their corresponding Triplane F f and F h to compute the feature f f,i c and f h,i c . Regarding the body rendering, we first define three bounding boxes B f , B lh , B rh for face, left and right hands in canonical body space. Then, we query canonical body points that are outside these bounding boxes from body Tri-plane F b , while the canonical points inside these boxes from F f and F h . The query process for body point x b,i c is mathematically formulated as\nf b,i c =    Q(x b,i c , F f ), if x b,i c ∈ B f , Q(x b,i c , F h ), if x b,i c ∈ {B rh , B lh }, Q(x b,i c , F b ), if x b,i c / ∈ {B f , B lh , B rh },(3)\nwhere Q denotes querying the feature for the given point from the corresponding Tri-planes.\nOnce the features f k,i c are obtained, they are encoded into color c and geometry d via two lightweight multi-layer perceptrons (MLP), where c = MLP c (f k,i c ). Inspired by prior works [44,26,69], we employ signed distance field (SDF) as a proxy to model geometry. Additionally, following [26,69], we also query a base SDF d c in the canonical space, and predict delta SDF, such that\nd = d c + MLP d (f k,i c , d c ).\nWe then convert the SDF value into density σ = 1 α Sigmoid( -d α ) for volume rendering, where α is a learnable parameter.\nTo handle the body features queried from multiple Tri-planes, we apply feature composition on RGB and density using a window function [37] for smoothness transition. Specifically, if point x k,i c,b is located in the overlapping region between the body and other parts (face, right hand, and left hand), their features are sampled from both Tri-planes and linearly blended together. More details on the feature composition can be found in the Appendix. Finally, volume rendering is applied to synthesize raw images for body, face, and hands, denoted as {I raw b , I raw f , I raw h }. These raw images are then upsampled into high-resolution images {I b , I f , I h } by a super-resolution module." }, { "figure_ref": [ "fig_1" ], "heading": "Multi-part Discriminators", "publication_ref": [], "table_ref": [], "text": "Based on the images synthesized by XAGen generator, we design a discriminator module to critique the generation results. To ensure both the fine-grained fidelity of appearance and geometry as well as disentangled control over the full body, including face and hands, we introduce multi-part discriminators to encode images {I b , I f , I h } into real-fake scores for adversarial training. As depicted in Figure 2, these discriminators are conditioned on the respective camera poses to encode 3D priors, resulting in improved geometries as demonstrated in our experiments. To enhance the control ability of the face and hands, we further condition face discriminator on expression and shape parameters [p ψ f , p β f ], and condition hand discriminator on hand pose p θ h . 
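To ground Eqs. (1)-(3) above, here is a minimal sketch of the inverse linear-blend skinning lookup, the canonical Tri-plane query, and the SDF-to-density conversion. It assumes the posed SMPL-X vertices, per-vertex skinning weights, per-joint rigid transforms, and pose/shape offsets are already available from a body-model layer; tensor shapes, function names, and the coordinate normalization are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as Fnn

def inverse_lbs(x_obs, verts, skin_w, joint_T, delta):
    """Map observation-space points to canonical space (Eqs. 1-2).

    x_obs: (M, 3) sampled points, verts: (N, 3) posed SMPL-X vertices,
    skin_w: (N, J) skinning weights, joint_T: (J, 4, 4) rigid transforms [R | t],
    delta: (N, 3) per-vertex pose/shape offsets from SMPL-X.
    """
    nn_idx = torch.cdist(x_obs, verts).argmin(dim=1)                 # nearest posed vertex
    T_skin = torch.einsum('mj,jab->mab', skin_w[nn_idx], joint_T)    # blended joint transforms
    T_off = torch.eye(4).expand(len(x_obs), 4, 4).clone()            # offset transform [I | delta_n]
    T_off[:, :3, 3] = delta[nn_idx]
    T_inv = torch.linalg.inv(torch.bmm(T_skin, T_off))               # observation -> canonical
    x_h = torch.cat([x_obs, torch.ones(len(x_obs), 1)], dim=1)       # homogeneous coordinates
    return torch.bmm(T_inv, x_h.unsqueeze(-1)).squeeze(-1)[:, :3]

def query_triplane(x_can, planes):
    """Bilinearly sample one Tri-plane; planes is a list of three (C, W, W) tensors."""
    feats = 0.0
    for axes, plane in zip([[0, 1], [0, 2], [1, 2]], planes):        # xy, xz, yz projections
        uv = x_can[:, axes].view(1, 1, -1, 2)                        # coords assumed in [-1, 1]
        feats = feats + Fnn.grid_sample(plane.unsqueeze(0), uv,
                                        align_corners=False)[0, :, 0].T
    return feats                                                     # (M, C)

def sdf_to_density(d, alpha):
    """sigma = (1 / alpha) * sigmoid(-d / alpha), as in Sec. 3.2."""
    return torch.sigmoid(-d / alpha) / alpha
```

A full renderer would additionally route canonical body points that fall inside the predefined face and hand bounding boxes to F_f and F_h as in Eq. (3), and blend overlapping regions with the window function mentioned in the text.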
We encode the camera pose and condition parameters into intermediate embeddings by two separate MLPs and pass them to the discriminators. The multi-part discriminator is formulated as\ns k = D k (I k , MLP c k (c k ) + MLP p k (p ′ k )), where p ′ k =    ∅, if k = b [p ψ f , p β f ], if k = f p θ h , if k = h . (4\n)\nHere s k denotes the probability of each image I k being sampled from real data, and D k refers to the discriminator corresponding to the specific body part k. For body part, no conditioning parameters are used because we empirically find that the condition for body hinders the learning of appearance." }, { "figure_ref": [], "heading": "Training Losses", "publication_ref": [ "b20", "b39", "b43", "b68" ], "table_ref": [], "text": "The non-saturating GAN loss [21] is computed for each discriminator, resulting in L b , L f , and L h . We also regularize these discriminators using R1 regularization loss [40] L R1 . To improve the plausibility and smoothness of geometry, we compute minimal surface loss L Minsurf , Eikonal loss L Eik , and human prior regularization loss L Prior as suggested in previous works [44,69].\nDue to the occlusion in the full body images, some training samples may not contain visible faces or hands. Thus, we balance the loss terms for both generator and discriminator based on the visibility of face M f and hands M h , which denote whether face and hands are detected or not. The overall loss term of XAGen is formulated as\nL G = L G b + λ f M f ⊙ L G f + λ h M G h ⊙ L h + λ Minsurf L Minsurf + λ Eik L Eik + λ Prior L Prior , L D = L D b + L b R1 + λ f M f ⊙ (L D f + L f R1 ) + λ h M h ⊙ (L D h + L h R1 ),(5)\nwhere ⊙ means instance-wise multiplication, and λ * are the weighting factors for each term." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b35", "b67", "b13", "b17" ], "table_ref": [], "text": "We evaluate the performance of XAGen on four datasets, i.e., DeepFashion [36], MPV [68], UBC [14], and SHHQ [18]. These datasets contain diverse full body images of clothed individuals. For each image in the dataset, we process it to obtain aligned body, face and hand crops, and their corresponding camera poses and SMPL-X parameters. Please refer to Appendix for more details." }, { "figure_ref": [ "fig_2", "fig_4" ], "heading": "Comparisons", "publication_ref": [ "b42", "b25", "b68", "b15", "b37", "b23", "b12", "b63", "b16", "b35", "b13", "b42", "b29" ], "table_ref": [ "tab_0", "tab_2" ], "text": "Baselines. We compare XAGen with four state-of-the-art 3D GAN models for animatable human image generation: ENARF [43], EVA3D [26], AvatarGen [69], and AG3D [16]. All these methods utilize 3D human priors to enable the controllability of body pose. ENARF conditions on sparse skeletons, while others condition on SMPL [38] model. Additionally, AvatarGen and AG3D incorporate an extra face discriminator to enhance face quality. We adopt the official implementations of ENARF and EVA3D, and cite results from AG3D directly. As for AvatarGen, it is reproduced and conditioned on SMPL-X to align with the setup of our model.\nQuantitative comparisons. The fidelity of synthesized image is measured by Frechet Inception Distance (FID) [24] computed between 50K generated images and all the available real images in each dataset. To study the appearance quality for face and hands, we further crop face (resolution 64 2 ) and hands (resolution 48 2 ) regions from the generated and real images to compute FID f and FID h . 
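The part-level FID protocol described above (cropping 64x64 face and 48x48 hand regions before computing the Frechet distance) can be sketched as follows. The Inception feature extraction is abstracted away and assumed to have been run already; the crop boxes would come from projected SMPL-X keypoints in practice, and the function names and nearest-neighbour resize are assumptions of this sketch.

```python
import numpy as np
from scipy import linalg

def crop_part(img, box, out_size):
    """Crop a face/hand region from an (H, W, 3) uint8 image given a pixel box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    crop = img[y0:y1, x0:x1]
    # dependency-free nearest-neighbour resize to out_size x out_size
    ys = np.linspace(0, crop.shape[0] - 1, out_size).astype(int)
    xs = np.linspace(0, crop.shape[1] - 1, out_size).astype(int)
    return crop[ys][:, xs]

def frechet_distance(feat_real, feat_fake):
    """FID between two sets of (N, D) Inception features."""
    mu_r, mu_f = feat_real.mean(0), feat_fake.mean(0)
    cov_r = np.cov(feat_real, rowvar=False)
    cov_f = np.cov(feat_fake, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```

Under this sketch, FID_f would be frechet_distance over features of real versus generated 64x64 face crops, and FID_h the same for 48x48 hand crops.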
To evaluate pose control ability, we compute Percentage of Correct Keypoints (PCK) between 5K real images and images generated using the same pose condition parameters of real images under a distance threshold of 0.1. To evaluate this ability in face and hand regions, we also report PCK f and PCK h . Another critical evaluation for a fully controllable generative model is the disentangled control of fine-grained attributes. Inspired by previous works [13,64], we select one attribute from {expression, shape, jaw pose, body pose, hand pose}, and modify the selected attribute while keeping others fixed for each synthesis. We then estimate the SMPL-X parameters for 1K generated images using a pre-trained 3D human reconstruction model [17] and compute the Mean Square Error (MSE) for the selected attribute between the input and estimated parameters.\nTable 1 summarizes the results for appearance quality and pose control ability for body, face, and hands. It demonstrates that XAGen outperforms existing methods w.r.t. all the evaluation metrics, indicating its superior performance in generating controllable photo-realistic human images with high-quality face and hands. Notably, XAGen shows significant improvements over the most recent method AG3D, achieving more than 20% improvement in FID and FID f on both DeepFashion and UBC datasets. Additionally, XAGen achieves state-of-the-art pose control ability, with substantial performance boost in PCK f , e.g., a relative improvement of 40.90% on MPV dataset against baseline.\nTable 2 presents the results for the disentangled control ability of XAGen compared to the baseline methods. It is worth noting that ENARF and EVA3D are not fully controllable, but we still report all the evaluation metrics for these two methods to show the controllability lower bound. Notably, the generated images of ENARF are blurry. Thus, our pose estimator cannot estimate precise jaw poses, which leads to an outlier on UBC jaw pose. In general, XAGen demonstrates state-of-the-art performance for fine-grained controls, particularly in expression, jaw, and hand pose, improving upon baseline by 38.29%, 25.93%, and 33.87% respectively on SHHQ dataset which contains diverse facial expressions and hand gestures. These results highlight the effectiveness of XAGen in enabling disentangled control over specific attributes of the generated human avatar images.\nTable 2: Quantitative comparisons with baselines in terms of disentangled control ability measured by MSE. We report Jaw×10 -4 and others ×10 -2 for simplicity, with best results in bold. * We implement AvatarGen by conditioning it on SMPL-X.\nDeepFashion [36] MPV [14] Exp↓ Shape↓ Jaw↓ Body↓ Hand↓ Exp↓ Shape↓ Jaw↓ Body↓ Hand↓ ENARF [43] 13.47 6. 30 Qualitative comparisons. Figure 3 provides qualitative comparisons between XAGen and baselines. From the results, we observe that ENARF struggles to produce reasonable geometry or realistic images due to the limitations of low training resolution. While EVA3D and AvatarGen achieve higher quality, they still fail to synthesize high-fidelity appearance and geometry for the face and hands. In contrast, XAGen demonstrates superior performance with detailed geometries for face and hands regions, resulting in more visually appealing human avatar images. The improvement of XAGen against baseline models is also confirmed by the perceptual user study, which is summarized in Table 3. 
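The disentanglement metric behind Table 2 follows the protocol described above: vary one attribute while keeping the others fixed, regress SMPL-X parameters from the generated images with a pretrained estimator, and report the MSE on the varied attribute. A minimal sketch of that loop is given below; the generator and estimator callables, the latent dimension, and the perturbation scale are stand-ins, not the actual evaluation code.

```python
import numpy as np

ATTRS = ["expression", "shape", "jaw_pose", "body_pose", "hand_pose"]

def disentangled_mse(generator, estimator, base_params, attr, n=1000, rng=None):
    """MSE between input and re-estimated values of a single controlled attribute.

    generator(z, params) -> image and estimator(image) -> params are assumed
    stand-ins for XAGen and a pretrained SMPL-X regressor.
    """
    assert attr in ATTRS
    rng = rng or np.random.default_rng(0)
    errs = []
    for _ in range(n):
        params = {k: v.copy() for k, v in base_params.items()}
        # vary only the selected attribute, keep every other attribute fixed
        params[attr] = params[attr] + rng.normal(scale=0.3, size=params[attr].shape)
        z = rng.normal(size=512)
        img = generator(z, params)
        est = estimator(img)
        errs.append(np.mean((est[attr] - params[attr]) ** 2))
    return float(np.mean(errs))
```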
Notably, XAGen achieves the best perceptual preference scores for both image appearance (≥ 57.2%) and geometry (≥ 48.3%) on all the benchmark datasets.\nFigure 4 showcases qualitative results for fine-grained control ability. We first observe that ENARF fails to generate a correct arm for the given body pose. Although EVA3D demonstrates a better pose condition ability, its shape conditioning ability is limited and the generated face suffers from unrealistic scaling. On the other hand, AvatarGen shows comparable results for pose and shape control. However, when it comes to expression, jaw pose, and hand pose controls, ours significantly outperforms AvatarGen, e.g., AvatarGen produces distortion in mouth region and blurred fingers while XAGen demonstrates natural faces and correct hand poses." }, { "figure_ref": [ "fig_6" ], "heading": "Ablation studies", "publication_ref": [], "table_ref": [ "tab_4", "tab_4", "tab_4" ], "text": "To verify the design choices in our method, we conduct ablation studies on SHHQ dataset, which contains diverse appearances, i.e., various human body, face, and hand poses as well as clothes.\nRepresentation. XAGen adopts a multi-scale and multi-part representation to improve the quality for face and hands regions. We study the necessity of this design by removing Tri-planes for face and hands. Table 4a provides the results, indicating that using only a single full-body Tri-plane (without any specific Tri-planes for face or hands) results in a significant degradation in appearance quality. Adding either face or hand Tri-plane can alleviate this issue and all the FID metrics drop slightly. The best results are achieved when both face and hand Tri-planes are enabled, demonstrating the importance of our multi-scale and multi-part representation.\nMulti-part rendering. In our model, we render multiple parts independently in the forward process to disentangle the learning of body, face, and hands. Table 4b demonstrates that independent rendering for face is crucial, as it significantly improves both fidelity (FID f : 20.63 vs. 10.06) and control ability (Exp: 6.58 vs. 5.56, Jaw: 7.26 vs. 6.57) for face. Similarly, without rendering for hand, FID h increases from 18.85 to 25.94, and MSE increases from 3.28 to 4.55 (Table 4c). The effectiveness of multi-part rendering is further supported by the qualitative results shown in Figure 5. Without independent rendering, the geometry quality degrades. For example, the eyes and mouth are collapsed without face rendering, and the model also fails to synthesize geometric details for hand when hand rendering is disabled. These highlight the importance of multi-part rendering in facilitating the learning of 3D geometries for different body parts." }, { "figure_ref": [ "fig_6" ], "heading": "Discriminators.", "publication_ref": [], "table_ref": [ "tab_4", "tab_4" ], "text": "To study the effect of multi-part discriminators, we disable each of them during training. As shown in Table 4b, without face discriminator, the overall appearance quality deteriorates. Despite the slight improvement in face appearance, there is a drop in the control ability, as evidenced by the increase in the MSE values for expression and jaw pose. A similar observation can be made for hand discriminator in Table 4c. Furthermore, the qualitative results shown in Figure 5 provide visual evidence of the impact of the face and hand discriminators on the 3D geometries. When they are removed, the geometries for face and hand collapse." 
}, { "figure_ref": [ "fig_7", "fig_7" ], "heading": "Applications", "publication_ref": [ "b24", "b68", "b66", "b48", "b65", "b65" ], "table_ref": [], "text": "Text-guided avatar synthesis. Inspired by recent works [25,69,67] on text-guided avatar generation, we leverage a pretrained vision-language encoder CLIP [49] to guide the generation process using the given text prompt. The text-guided avatar generation process involves randomly sampling a latent code z and a control parameter p b from the dataset, and optimizing z by maximizing the CLIP similarities between the synthesized image and text prompt. As shown in Figure 6a, the generated human avatars exhibit the text-specified attributes, i.e., hair and clothes adhere to the given text prompt (e.g., brown hair and red T-shirt). The generated avatar can be re-targeted by novel SMPL-X parameters, allowing for additional control and customization of the synthesis.\nAudio-driven animation. The ability of XAGen to generate fully animatable human avatars with fine-grained control (Figure 1) opens up possibilities for audio-driven animation. The 3D avatars can be driven by arbitrary SMPL-X motion sequences generated by recent works such as [66] given audio inputs. Specifically, we sample an audio stream and SMPL-X sequence from TalkSHOW [66] and use it to animate the generated avatars. As shown in Figure 6b, XAGen is able to synthesize temporally consistent video animations where the jaw poses of the avatars are synchronized with the audio stream (highlighted in red box). Additionally, the generated avatars are generalizable given novel body poses and hand gestures, allowing diverse and expressive animations." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b8" ], "table_ref": [], "text": "Although XAGen is able to synthesize photo-realistic and fully animatable human avatars, there are still areas where improvements can be made: (1) XAGen relies on pre-estimated SMPL-X parameters, the inaccurate SMPL-X may introduce potential errors into our model, which can lead to artifacts and degraded body images. Please refer to Sup. Mat. for the experimental analysis of this issue. We believe our method can benefit from a more accurate SMPL-X estimation method or corrective operations. (2) SMPL-X only represents naked body. Thus, methods built upon SMPL-X could struggle with modeling loose clothing, which is a long-standing challenge for 3D human modeling.\nWe believe an advanced human body prior or independent clothing modeling approach is helpful to alleviate this issue. (3) Face and hand images in existing human body datasets lack diversity and sharpness, which affects the fidelity of our generation results, particularly for the novel hand poses that are out-of-distribution. A more diverse dataset with high-quality face and hand images could help tackle this problem. (4) XAGen utilizes inverse blend skinning to deform the points from canonical space to the observation space. However, this process could introduce errors, particularly when computing nearest neighbors for query points located in the connection or interaction regions. Thus, exploring more robust and accurate techniques, such as forward skinning [9], could open up new directions for future work." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This work introduces XAGen, a novel 3D avatar generation framework that offers expressive control over facial expression, shape, body pose, jaw pose, and hand pose. 
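As a concrete illustration of the text-guided synthesis application described above, the sketch below optimizes a latent code so that the rendered avatar maximizes CLIP similarity with a text prompt. The generator and the CLIP image encoder are passed in as callables and the prompt embedding is assumed to be precomputed; the step count, learning rate, and latent dimension are assumptions, and only the optimization loop itself follows the description in the text.

```python
import torch
import torch.nn.functional as Fnn

def text_guided_latent_search(generator, clip_image_encoder, prompt_embedding, p_body,
                              steps=200, lr=0.01):
    """Optimize a latent z so the rendered avatar matches a text prompt (CLIP-guided)."""
    z = torch.randn(1, 512, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        img = generator(z, p_body)                    # (1, 3, H, W) rendered avatar
        img_emb = clip_image_encoder(img)             # CLIP image embedding
        loss = 1.0 - Fnn.cosine_similarity(img_emb, prompt_embedding).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()                                 # reusable with novel SMPL-X parameters
```

Because only z is optimized, the resulting avatar can afterwards be re-targeted with novel SMPL-X parameters, which is exactly what enables the audio-driven animation use case described above.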
Through the use of multi-scale and multi-part representation, XAGen can model details for small-scale regions like faces and hands. By adopting multi-part rendering, XAGen disentangles the learning process and produces realistic details for appearance and geometry. With multi-part discriminators, our model is capable of synthesizing high-quality human avatars with disentangled fine-grained control ability. The capabilities of XAGen open up a range of possibilities for downstream applications, such as text-guided avatar synthesis and audio-driven animation." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This project is supported by the National Research Foundation, Singapore under its NRFF Award NRF-NRFF13-2021-0008, and the Ministry of Education, Singapore, under the Academic Research Fund Tier 1 (FY2022)." } ]
Recent advances in 3D-aware GAN models have enabled the generation of realistic and controllable human body images. However, existing methods focus on the control of major body joints, neglecting the manipulation of expressive attributes, such as facial expressions, jaw poses, hand poses, and so on. In this work, we present XAGen, the first 3D generative model for human avatars capable of expressive control over body, face, and hands. To enhance the fidelity of small-scale regions like face and hands, we devise a multi-scale and multi-part 3D representation that models fine details. Based on this representation, we propose a multi-part rendering technique that disentangles the synthesis of body, face, and hands to ease model training and enhance geometric quality. Furthermore, we design multi-part discriminators that evaluate the quality of the generated avatars with respect to their appearance and fine-grained control capabilities. Experiments show that XAGen surpasses state-of-the-art methods in terms of realism, diversity, and expressive control abilities. Code and data will be made available at https://showlab.github.io/xagen.
XAGen: 3D Expressive Human Avatars Generation
[ { "figure_caption": "c h p h c f p f c h p h c f p f c b p b", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Pipeline of XAGen. Given a random noise z, the canonical generator synthesizes the avatar in the format of canonical multi-part and multi-scale Tri-planes given the corresponding camera pose c b . We then deform the canonical avatar under the guidance of control parameters p * to render multi-part images using respective camera poses c * and upsample the images using a super-resolution module. Discriminators encode the output images, camera poses, and control parameters into real or fake probabilities to critique the rendered images. IS represents inverse skinning.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Comparisons against baselines in terms of appearance and 3D geometry. Our method produces photo-realistic human images with superior detailed geometries.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": ") 67.3 48.3 67.8 63.9 57.2 61.7 60.5 55.9", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Qualitative comparisons in terms of disentangled control ability. Our method exhibits state-of-the-art control abilities for body pose, shape, expression, jaw pose, and hand pose.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Qualitative results for the ablations on multi-part rendering and discriminators.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Downstream applications of our method.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Quantitative comparisons with baselines in terms of appearance and overall control ability, with best results in bold. F.Ctl. indicates whether the approach generates fully controllable human body or not. * We implement AvatarGen by conditioning it on SMPL-X. ✓ 9.75 13.23 18.09 65.31 77.09 55.09 10.52 12.57 28.21 59.18 78.71 36.29 XAGen (Ours) ✓ 8.80 9.82 16.72 69.18 84.18 55.17 5.88 10.06 19.23 65.14 91.44 38.53", "figure_data": "DeepFashion [36]MPV [14]F.Ctl. FID↓ FIDf↓ FIDh↓ PCK↑ PCKf↑ PCKh↑ FID↓ FIDf↓ FIDh↓ PCK↑ PCKf↑ PCKh↑ENARF [43]✗ 68.62 52.17 46.86 3.54 3.79 1.34 65.97 47.71 37.08 3.06 3.55 0.67EVA3D [26]✗ 15.91 14.63 48.10 56.36 75.43 23.14 14.98 27.48 32.54 33.00 42.47 19.24AG3D [16] AvatarGen [69] * ✓ 9.53 13.96 27.68 60.12 73.38 46.50 10.06 13.08 19.75 38.32 45.26 30.75 ---------✗ 10.93 14.79 -XAGen (Ours)✓ 8.55 10.69 24.26 66.04 87.06 47.56 7.94 12.07 17.35 48.84 63.77 32.01UBC [68]SHHQ [18]F.Ctl. 
FID↓ FIDf↓ FIDh↓ PCK↑ PCKf↑ PCKh↑ FID↓ FIDf↓ FIDh↓ PCK↑ PCKf↑ PCKh↑ENARF [43]✗ 36.39 34.27 32.72 6.90 7.44 6.37 79.29 50.19 46.97 4.43 4.62 2.71EVA3D [26]✗ 12.61 36.87 45.66 36.31 55.31 8.38 11.99 20.04 39.83 31.24 37.60 18.38AG3D [16] AvatarGen [69]✗ 11.04 15.83 ----------", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "9.59 4.50 9.34 1.22 3.01 9.01 3.99 8.87 1.52 4.99 XAGen (Ours) 5.35 2.57 4.76 0.73 1.63 5.56 3.66 6.57 1.24 3.30", "figure_data": "5.79 3.14 9.87 11.21 4.91 8.36 2.75 12.90EVA3D [26]6.03 2.87 5.11 1.78 3.68 9.97 4.14 13.83 1.80 4.65AvatarGen [69] * 4.92 3.06 5.05 1.23 3.17 8.98 3.88 15.22 1.11 3.47XAGen (Ours) 4.46 2.77 3.67 1.26 2.95 6.31 3.88 7.43 0.94 2.23UBC [68]SHHQ [18]Exp↓ Shape↓ Jaw↓ Body↓ Hand↓ Exp↓ Shape↓ Jaw↓ Body↓ Hand↓ENARF [43]10.70 6.11 3.62 1.07 8.19 14.51 6.43 8.16 3.27 9.83EVA3D [26]7.00 2.98 5.36 1.00 2.78 7.43 4.15 9.26 1.93 5.15AvatarGen [69]", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "We conduct a perceptual human study and report participants' preferences on images and geometries generated by our method and baselines. It is measured by preference rate (%), with best results in bold. RGB represents image, and Geo represents geometry.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablations of our method on SHHQ dataset. We vary our representation, rendering method, and discriminators to investigate their effectiveness.", "figure_data": "", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Zhongcong Xu; Jianfeng Zhang (ByteDance); Jun Hao Liew; Jiashi Feng; Mike Zheng Shou
[ { "authors": "O Alexander; M Rogers; W Lambeth; J.-Y Chiang; W.-C Ma; C.-C Wang; P Debevec", "journal": "IEEE Computer Graphics and Applications", "ref_id": "b0", "title": "The digital emily project: Achieving a photorealistic digital actor", "year": "2010" }, { "authors": "D Anguelov; P Srinivasan; D Koller; S Thrun; J Rodgers; J Davis", "journal": "", "ref_id": "b1", "title": "Scape: shape completion and animation of people", "year": "2005" }, { "authors": "T Bagautdinov; C Wu; T Simon; F Prada; T Shiratori; S.-E Wei; W Xu; Y Sheikh; J Saragih", "journal": "ACM Trans. on Graphics", "ref_id": "b2", "title": "Driving-signal aware full-body avatars", "year": "2021" }, { "authors": "T Beeler; F Hahn; D Bradley; B Bickel; P Beardsley; C Gotsman; R W Sumner; M Gross", "journal": "", "ref_id": "b3", "title": "High-quality passive facial performance capture using anchor frames", "year": "2011" }, { "authors": "A Bergman; P Kellnhofer; W Yifan; E Chan; D Lindell; G Wetzstein", "journal": "", "ref_id": "b4", "title": "Generative neural articulated radiance fields", "year": "2022" }, { "authors": "C Bregler; J Malik; K Pullen", "journal": "Int'l. J. Computer Vision", "ref_id": "b5", "title": "Twist based acquisition and tracking of animal and human kinematics", "year": "2004" }, { "authors": "E R Chan; C Z Lin; M A Chan; K Nagano; B Pan; S De Mello; O Gallo; L J Guibas; J Tremblay; S Khamis", "journal": "", "ref_id": "b6", "title": "Efficient geometry-aware 3d generative adversarial networks", "year": "2022" }, { "authors": "M Chen; J Zhang; X Xu; L Liu; Y Cai; J Feng; S Yan", "journal": "", "ref_id": "b7", "title": "Geometry-guided progressive nerf for generalizable and efficient neural human rendering", "year": "2022" }, { "authors": "X Chen; Y Zheng; M J Black; O Hilliges; A Geiger", "journal": "", "ref_id": "b8", "title": "Snarf: Differentiable forward skinning for animating non-rigid neural implicit shapes", "year": "2021" }, { "authors": "H Chu; S Ma; F De La Torre; S Fidler; Y Sheikh", "journal": "", "ref_id": "b9", "title": "Expressive telepresence via modular codec avatars", "year": "2020" }, { "authors": "A Collet; M Chuang; P Sweeney; D Gillett; D Evseev; D Calabrese; H Hoppe; A Kirk; S Sullivan", "journal": "ACM Trans. 
on Graphics", "ref_id": "b10", "title": "High-quality streamable free-viewpoint video", "year": "2015" }, { "authors": "E De Aguiar; C Stoll; C Theobalt; N Ahmed; H.-P Seidel; S Thrun", "journal": "", "ref_id": "b11", "title": "Performance capture from sparse multi-view video", "year": "2008" }, { "authors": "Y Deng; J Yang; D Chen; F Wen; X Tong", "journal": "", "ref_id": "b12", "title": "Disentangled and controllable face image generation via 3d imitative-contrastive learning", "year": "2020" }, { "authors": "H Dong; X Liang; X Shen; B Wang; H Lai; J Zhu; Z Hu; J Yin", "journal": "", "ref_id": "b13", "title": "Towards multi-pose guided virtual try-on network", "year": "2019" }, { "authors": "J Dong; Q Fang; Y Guo; S Peng; Q Shuai; X Zhou; H Bao", "journal": "", "ref_id": "b14", "title": "Totalselfscan: Learning full-body avatars from self-portrait videos of faces, hands, and bodies", "year": "2022" }, { "authors": "Z Dong; X Chen; J Yang; M J Black; O Hilliges; A Geiger", "journal": "", "ref_id": "b15", "title": "Ag3d: Learning to generate 3d avatars from 2d image collections", "year": "2023" }, { "authors": "Y Feng; V Choutas; T Bolkart; D Tzionas; M J Black", "journal": "", "ref_id": "b16", "title": "Collaborative regression of expressive bodies using moderation", "year": "2021" }, { "authors": "J Fu; S Li; Y Jiang; K.-Y Lin; C Qian; C C Loy; W Wu; Z Liu", "journal": "", "ref_id": "b17", "title": "Stylegan-human: A data-centric odyssey of human generation", "year": "2022" }, { "authors": "J Gall; C Stoll; E De Aguiar; C Theobalt; B Rosenhahn; H.-P Seidel", "journal": "", "ref_id": "b18", "title": "Motion capture using joint skeleton tracking and surface estimation", "year": "2009" }, { "authors": "A Ghosh; G Fyffe; B Tunwattanapong; J Busch; X Yu; P Debevec", "journal": "ACM Trans. on Graphics", "ref_id": "b19", "title": "Multiview face capture using polarized spherical gradient illumination", "year": "2011" }, { "authors": "I Goodfellow; J Pouget-Abadie; M Mirza; B Xu; D Warde-Farley; S Ozair; A Courville; Y Bengio", "journal": "Communications of the ACM", "ref_id": "b20", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "K Grauman; A Westbury; E Byrne; Z Chavis; A Furnari; R Girdhar; J Hamburger; H Jiang; M Liu; X Liu", "journal": "", "ref_id": "b21", "title": "Ego4d: Around the world in 3,000 hours of egocentric video", "year": "2022" }, { "authors": "K Guo; P Lincoln; P Davidson; J Busch; X Yu; M Whalen; G Harvey; S Orts-Escolano; R Pandey; J Dourgarian", "journal": "ACM Trans. on Graphics", "ref_id": "b22", "title": "The relightables: Volumetric performance capture of humans with realistic relighting", "year": "2019" }, { "authors": "M Heusel; H Ramsauer; T Unterthiner; B Nessler; S Hochreiter", "journal": "NeurIPS", "ref_id": "b23", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "F Hong; M Zhang; L Pan; Z Cai; L Yang; Z Liu", "journal": "ACM Trans. 
on Graphics", "ref_id": "b24", "title": "Avatarclip: Zero-shot text-driven generation and animation of 3d avatars", "year": "2022" }, { "authors": "F Hong; Z Chen; Y Lan; L Pan; Z Liu", "journal": "", "ref_id": "b25", "title": "EVA3d: Compositional 3d human generation from 2d image collections", "year": "2023" }, { "authors": "T Karras; S Laine; T Aila", "journal": "", "ref_id": "b26", "title": "A style-based generator architecture for generative adversarial networks", "year": "2019" }, { "authors": "T Karras; M Aittala; S Laine; E Härkönen; J Hellsten; J Lehtinen; T Aila", "journal": "", "ref_id": "b27", "title": "Alias-free generative adversarial networks", "year": "2021" }, { "authors": "H Kato; Y Ushiku; T Harada", "journal": "", "ref_id": "b28", "title": "Neural 3d mesh renderer", "year": "2018" }, { "authors": "N Kolotouros; G Pavlakos; K Daniilidis", "journal": "", "ref_id": "b29", "title": "Convolutional mesh regression for single-image human shape reconstruction", "year": "2019" }, { "authors": "J P Lewis; M Cordner; N Fong", "journal": "", "ref_id": "b30", "title": "Pose space deformation: a unified approach to shape interpolation and skeleton-driven deformation", "year": "2000" }, { "authors": "T Li; T Bolkart; M J Black; H Li; J Romero", "journal": "ACM Trans. on Graphics", "ref_id": "b31", "title": "Learning a model of facial shape and expression from 4D scans", "year": "2017" }, { "authors": "J.-W Liu; Y.-P Cao; T Yang; Z Xu; J Keppo; Y Shan; X Qie; M Z Shou", "journal": "", "ref_id": "b32", "title": "Hosnerf: Dynamic human-object-scene neural radiance fields from a single video", "year": "2023" }, { "authors": "L Liu; M Habermann; V Rudnev; K Sarkar; J Gu; C Theobalt", "journal": "ACM Trans. on Graphics", "ref_id": "b33", "title": "Neural actor: Neural free-view synthesis of human actors with pose control", "year": "2021" }, { "authors": "T Liu; J Zhang; X Nie; Y Wei; S Wei; Y Zhao; J Feng", "journal": "IEEE Transactions on Image Processing", "ref_id": "b34", "title": "Spatial-aware texture transformer for high-fidelity garment transfer", "year": "2021" }, { "authors": "Z Liu; P Luo; S Qiu; X Wang; X Tang", "journal": "", "ref_id": "b35", "title": "Deepfashion: Powering robust clothes recognition and retrieval with rich annotations", "year": "2016" }, { "authors": "S Lombardi; T Simon; G Schwartz; M Zollhoefer; Y Sheikh; J Saragih", "journal": "ACM Trans. on Graphics", "ref_id": "b36", "title": "Mixture of volumetric primitives for efficient neural rendering", "year": "2021" }, { "authors": "M Loper; N Mahmood; J Romero; G Pons-Moll; M J Black", "journal": "ACM Trans. on Graphics", "ref_id": "b37", "title": "Smpl: A skinned multiperson linear model", "year": "2015" }, { "authors": "N Max", "journal": "IEEE Trans. 
on Visualization and Computer Graphics", "ref_id": "b38", "title": "Optical models for direct volume rendering", "year": "1995" }, { "authors": "L Mescheder; A Geiger; S Nowozin", "journal": "ICLR", "ref_id": "b39", "title": "Which training methods for gans do actually converge?", "year": "2018" }, { "authors": "L Mescheder; M Oechsle; M Niemeyer; S Nowozin; A Geiger", "journal": "", "ref_id": "b40", "title": "Occupancy networks: Learning 3d reconstruction in function space", "year": "2019" }, { "authors": "B Mildenhall; P P Srinivasan; M Tancik; J T Barron; R Ramamoorthi; R Ng", "journal": "", "ref_id": "b41", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "A Noguchi; X Sun; S Lin; T Harada", "journal": "", "ref_id": "b42", "title": "Unsupervised learning of efficient geometry-aware neural articulated representations", "year": "2022" }, { "authors": "R Or-El; X Luo; M Shan; E Shechtman; J J Park; I Kemelmacher-Shlizerman", "journal": "", "ref_id": "b43", "title": "Stylesdf: High-resolution 3d-consistent image and geometry generation", "year": "2022" }, { "authors": "A A Osman; T Bolkart; M J Black", "journal": "", "ref_id": "b44", "title": "Star: Sparse trained articulated human body regressor", "year": "2020" }, { "authors": "A A Osman; T Bolkart; D Tzionas; M J Black", "journal": "", "ref_id": "b45", "title": "Supr: A sparse unified part-based human representation", "year": "2022" }, { "authors": "G Pavlakos; V Choutas; N Ghorbani; T Bolkart; A A A Osman; D Tzionas; M J Black", "journal": "", "ref_id": "b46", "title": "Expressive body capture: 3d hands, face, and body from a single image", "year": "2019" }, { "authors": "S Peng; Y Zhang; Y Xu; Q Wang; Q Shuai; H Bao; X Zhou", "journal": "", "ref_id": "b47", "title": "Neural body: Implicit neural representations with structured latent codes for novel view synthesis of dynamic humans", "year": "2021" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "", "ref_id": "b48", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "E Remelli; T Bagautdinov; S Saito; C Wu; T Simon; S.-E Wei; K Guo; Z Cao; F Prada; J Saragih", "journal": "", "ref_id": "b49", "title": "Drivable volumetric avatars using texel-aligned features", "year": "2022" }, { "authors": "R Rombach; A Blattmann; D Lorenz; P Esser; B Ommer", "journal": "", "ref_id": "b50", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "J Romero; D Tzionas; M J Black", "journal": "ACM Trans. 
on Graphics", "ref_id": "b51", "title": "Embodied hands: modeling and capturing hands and bodies together", "year": "2017" }, { "authors": "S Saito; Z Huang; R Natsume; S Morishima; A Kanazawa; H Li", "journal": "", "ref_id": "b52", "title": "Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization", "year": "2019" }, { "authors": "K Shen; C Guo; M Kaufmann; J J Zarate; J Valentin; J Song; O Hilliges", "journal": "", "ref_id": "b53", "title": "X-avatar: Expressive human avatars", "year": "2023" }, { "authors": "Y Shi; D Aggarwal; A K Jain", "journal": "", "ref_id": "b54", "title": "Lifting 2d stylegan for 3d-aware face generation", "year": "2021" }, { "authors": "A Siarohin; O J Woodford; J Ren; M Chai; S Tulyakov", "journal": "", "ref_id": "b55", "title": "Motion representations for articulated animation", "year": "2021" }, { "authors": "J Sun; X Wang; L Wang; X Li; Y Zhang; H Zhang; Y Liu", "journal": "", "ref_id": "b56", "title": "Next3d: Generative neural texture rasterization for 3d-aware head avatars", "year": "2023" }, { "authors": "S Tang; F Tan; K Cheng; Z Li; S Zhu; P Tan", "journal": "", "ref_id": "b57", "title": "A neural network for detailed human depth estimation from a single image", "year": "2019" }, { "authors": "T Wang; B Zhang; T Zhang; S Gu; J Bao; T Baltrusaitis; J Shen; D Chen; F Wen; Q Chen", "journal": "", "ref_id": "b58", "title": "A generative model for sculpting 3d digital avatars using diffusion", "year": "2023" }, { "authors": "D Xiang; F Prada; T Bagautdinov; W Xu; Y Dong; H Wen; J Hodgins; C Wu", "journal": "ACM Trans. on Graphics", "ref_id": "b59", "title": "Modeling clothing as a separate layer for an animatable human avatar", "year": "2021" }, { "authors": "Y Xiu; J Yang; D Tzionas; M J Black", "journal": "", "ref_id": "b60", "title": "Icon: implicit clothed humans obtained from normals", "year": "2022" }, { "authors": "H Xu; E G Bazavan; A Zanfir; W T Freeman; R Sukthankar; C Sminchisescu", "journal": "", "ref_id": "b61", "title": "Ghum & ghuml: Generative 3d human shape and articulated pose models", "year": "2020" }, { "authors": "H Xu; G Song; Z Jiang; J Zhang; Y Shi; J Liu; W Ma; J Feng; L Luo", "journal": "", "ref_id": "b62", "title": "Omniavatar: Geometry-guided controllable 3d head synthesis", "year": "2023" }, { "authors": "H Xu; G Song; Z Jiang; J Zhang; Y Shi; J Liu; W Ma; J Feng; L Luo", "journal": "", "ref_id": "b63", "title": "Omniavatar: Geometry-guided controllable 3d head synthesis", "year": "2023" }, { "authors": "Z Xu; J Zhang; J Liew; W Zhang; S Bai; J Feng; M Z Shou", "journal": "", "ref_id": "b64", "title": "Pv3d: A 3d generative model for portrait video generation", "year": "2023" }, { "authors": "H Yi; H Liang; Y Liu; Q Cao; Y Wen; T Bolkart; D Tao; M J Black", "journal": "", "ref_id": "b65", "title": "Generating holistic 3d human motion from speech", "year": "2023" }, { "authors": "K Youwang; K Ji-Yeon; T.-H Oh", "journal": "", "ref_id": "b66", "title": "Clip-actor: Text-driven recommendation and stylization for animating human meshes", "year": "2022" }, { "authors": "P Zablotskaia; A Siarohin; B Zhao; L Sigal", "journal": "", "ref_id": "b67", "title": "Dwnet: Dense warp-based network for pose-guided human video generation", "year": "2019" }, { "authors": "J Zhang; Z Jiang; D Yang; H Xu; Y Shi; G Song; Z Xu; X Wang; J Feng", "journal": "", "ref_id": "b68", "title": "Avatargen: a 3d generative model for animatable human avatars", "year": "2023" }, { "authors": "J Zhang; H Yan; Z Xu; J Feng; J H Liew", 
"journal": "", "ref_id": "b69", "title": "Magicavatar: Multimodal avatar generation and animation", "year": "2023" }, { "authors": "X Zhang; J Zhang; C Rohan; H Xu; G Song; Y Yang; J Feng", "journal": "", "ref_id": "b70", "title": "Getavatar: Generative textured meshes for animatable human avatars", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 119.77, 85.65, 85.94, 123.07 ], "formula_id": "formula_0", "formula_text": "{p b , p f , p h } F b F f F h F b F f F h F b F f F h" }, { "formula_coordinates": [ 4, 118.89, 468.98, 150.42, 17.94 ], "formula_id": "formula_1", "formula_text": "F b ∈ R Wb×Wb×3C , F f ∈ R Wf×Wf×3C" }, { "formula_coordinates": [ 4, 350.52, 480.79, 78.04, 10.47 ], "formula_id": "formula_2", "formula_text": "W f = W h = W b /2." }, { "formula_coordinates": [ 4, 462.89, 508.93, 42.35, 17.29 ], "formula_id": "formula_3", "formula_text": "F into F k ," }, { "formula_coordinates": [ 5, 223.35, 111.73, 281.31, 27.27 ], "formula_id": "formula_4", "formula_text": "T k,i = ( j w n j R j t j 0 1 I ∆ n 0 1 ) -1 ,(1)" }, { "formula_coordinates": [ 5, 273.2, 182.18, 231.47, 12.69 ], "formula_id": "formula_5", "formula_text": "x k,i c = T k,i x k,i o ,(2)" }, { "formula_coordinates": [ 5, 211.62, 286.08, 293.05, 43.9 ], "formula_id": "formula_6", "formula_text": "f b,i c =    Q(x b,i c , F f ), if x b,i c ∈ B f , Q(x b,i c , F h ), if x b,i c ∈ {B rh , B lh }, Q(x b,i c , F b ), if x b,i c / ∈ {B f , B lh , B rh },(3)" }, { "formula_coordinates": [ 5, 108, 379.31, 397.93, 23.21 ], "formula_id": "formula_7", "formula_text": "d = d c + MLP d (f k,i c , d c )." }, { "formula_coordinates": [ 5, 156.07, 647.67, 344.72, 38.49 ], "formula_id": "formula_8", "formula_text": "s k = D k (I k , MLP c k (c k ) + MLP p k (p ′ k )), where p ′ k =    ∅, if k = b [p ψ f , p β f ], if k = f p θ h , if k = h . (4" }, { "formula_coordinates": [ 5, 500.8, 662, 3.87, 8.64 ], "formula_id": "formula_9", "formula_text": ")" }, { "formula_coordinates": [ 6, 138.1, 192.27, 366.57, 34.58 ], "formula_id": "formula_10", "formula_text": "L G = L G b + λ f M f ⊙ L G f + λ h M G h ⊙ L h + λ Minsurf L Minsurf + λ Eik L Eik + λ Prior L Prior , L D = L D b + L b R1 + λ f M f ⊙ (L D f + L f R1 ) + λ h M h ⊙ (L D h + L h R1 ),(5)" } ]
10.1145/2701413
2023-11-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b23", "b39", "b5" ], "table_ref": [], "text": "Humans, as beings innately attuned to their surroundings, traverse a world where conversations, decisions, behaviors, and understanding are deeply embedded in the underlying fabric of situation. Their engagement with the world entails commonsense (background) knowledge about entities-properties, spatial relations, events, causes and effects, and other social norms ((McCarthy, 1959); (Winograd, 1972); (Davis & Marcus, 2015)). The importance of situational awareness is starkly evident in our daily tasks, where choosing objects for specific activities showcases our adaptability to different settings. Consider the straightforward task of cutting a cake-how do we determine which object is suitable for this task? When a person needs to select an object to accomplish this task, there can be an array of factors that might affect our choice. For example: we must choose something that is capable of cutting (Utility), suitable for cutting a cake (contextual appropriateness), and likely in an appropriate physical condition to be used (physical state). These considerations would be to ensure the appropriateness, ease, and safety of those cutting the cake as well as who will eat the cake. These considerations although might seem trivial and intuitive to us humans, are still an important aspect to consider when developing embodied household agents. Such reasoning capabilities can be potentially leveraged by embodied agents to generate action plans for human requirements represented in natural language. In this work, we propose a CommonSense Object Affordance Task: a textual physical commonsense task to evaluate most appropriate object selection capabilities in the presence of various alternative objects. • Evaluation of Large Language Model baselines on these datasets, accompanied by a detailed analysis of their performance in multi-step abstract reasoning scenarios." }, { "figure_ref": [], "heading": "Dataset Creation", "publication_ref": [ "b33", "b17" ], "table_ref": [ "tab_7" ], "text": "To systematically investigate the capacity of LLM to conduct human-style physical commonsense reasoning and preferences across three crucial factors, we have devised an experimental framework centered around 75 household tasks, carefully curated to span 22 distinct utilities. The experiment involves a diverse inventory of 100 objects sourced from the AI2Thor Simulator (Speer et al., 2017), ensuring relevance and diversity within a household context.\n1. Tasks: are high-level household activities that could be accomplished by a human or embodied agent. Example: Cutting a Cake. See Task List 2. Utilities: are extremely low-level utilities that could be assigned to a High-level task. Like, for the example of Cutting Cake, the utility that gets activated is Cutting. See Table 2a 3. Objects: are a subset of objects available in AI2Thor (Kolve et al., 2022) Simulator. See The following section gives an overview of the Annotation Tasks and the process of creation of CommonSense Reasoning Datasets." }, { "figure_ref": [], "heading": "Human Preference Collection", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Utility", "publication_ref": [ "b1" ], "table_ref": [], "text": "Incorporating GPT3.5-turbo (Brown et al., 2020) along with human commonsense annotations, we meticulously established a mapping between utilities and objects. These are called Utility Objects. 
Notably, each object may be associated with multiple utilities, and conversely, a single utility can be linked to various objects. Table 8 provides an overview of the utilities along with their associated objects utilized in our experiments. More Information about the annotation process can be found in Appendix D" }, { "figure_ref": [], "heading": "Contextual Appropriateness", "publication_ref": [], "table_ref": [], "text": "In evaluating object utility, it is crucial to recognize that suitability for specific tasks can vary significantly. Take, for example, the multifaceted use of a candle. While it possesses the inherent ability to generate heat, employing a candle for the purpose of heating soup introduces a range of practical limitations. This observation underscores the complexity of human preference and decision-making in the context of object utility. Key factors influencing these choices include efficiency (as illustrated by the impracticality of using a candle for heating soup), safety considerations (such as the risks associated with standing on an armchair), social norms and constructs (exemplified by the unconventional choice of serving wine in a bowl), and the overall appropriateness of an action (e.g., the disposal of eggshells in a sink basin). To systematically explore these dynamics, we engaged human annotators in a study designed to assess the selection of appropriate objects for specified tasks and utilities" }, { "figure_ref": [], "heading": "Physical State", "publication_ref": [ "b19", "b11" ], "table_ref": [], "text": "The selection of objects for specific tasks is influenced not only by intangible factors such as safety and social constructs but also by the object's current physical state. Prior research, including the works of Li et al. (2023) and Gao et al. (2023), has employed various physical parameters to examine Language Learning Models' (LLMs) comprehension of an object's physical attributes. In our study, we shift the focus to task planning under non-ideal conditions, necessitating reasoning about potential substitute objects. To this end, we have developed five distinct variables, each represented by abstract symbolic terms. These variables have been derived directly from the AI2Thor Simulator, facilitating their broader applicability and potential integration into the burgeoning field of Embodied AI. Ranking Object Configurations In our study, we not only provided configurations that occur commonly but also tasked the annotators with categorizing the configurations of an object into three distinct classes: Ideal, Moderate, and Bad. This classification was predicated on their assessment of the anticipated time required for an agent to commence the task with a given object configuration. Utilizing these categorizations, we constructed two comprehensive datasets comprising 130,000 questions specifically designed to assess the physical commonsense reasoning capabilities of Large Language Models. Further details on this process are elaborated in Appendix D" }, { "figure_ref": [ "fig_2" ], "heading": "CommonSense QnA Datasets", "publication_ref": [], "table_ref": [], "text": "Based on Contextual Appropriateness and Physical State. We created 3 CommonSense QA datasets.\n1. Task-01 : This experiment was based on pruning Objects based on contextual factors affecting the appropriateness of an object for a particular task. We utilized Object Level Dataset for its evaluation.\n2. 
Task-1 & Task-2: These experiments were based on pruning out Object Configurations (Physical State) represented by 5 symbolic variables. We utilized Variable Level Datasets for its evaluation.\nFigure 3: A mapping created between Tasks and Concepts was utilized to sample out <Task,Utility> combination to frame a question for all 3 datasets (1 Object Level, 2 Variable Level)" }, { "figure_ref": [], "heading": "Object Level Dataset", "publication_ref": [], "table_ref": [], "text": "To evaluate the reasoning capabilities of LLM across choosing objects over contextual factors we curate an Object Level QA dataset. Here, previously recorded Context Mappings were kept as Ground Truth. (See Annotation Task 2.1.2)\nQuestion Every question can be assigned a <Task, Utility> combination and was framed in the way shown below:" }, { "figure_ref": [], "heading": "Question", "publication_ref": [], "table_ref": [], "text": "What object would you be choosing for <utility> when you are tasked to <task>?\nOptions Based on the sampling strategy and the number of options provided in the prompt, we created 4 variations of object level dataset. An example of such variation is shown below.\n1. Variation-1 : For each question, we randomly sampled 1 context object and 1 utility object both belonging to the same utility. Here, a distinct approach is employed compared to the Object level Dataset, where Context Objects were previously sampled randomly based on a combination of the question's Task and Utility. In this study, we have classified the configurations of Context Objects into three broad categories: \"Ideal,\" \"Moderate,\" and \"Bad.\" Each category is defined by specific annotation variables that delineate their characteristics. The \"Ideal\" category represents configurations in their optimal states, facilitating the specified task without the need for additional adjustments. In contrast, the \"Moderate\" category includes configurations that deviate from these ideal states, resulting in both time and material costs for their utilization. The models assess these options based on their estimated penalties. Lastly, the \"Bad\" category comprises configurations that render the Context Objects ineffective, even when considering potential penalties. Both \"Moderate\" and \"Bad\" configurations are grouped under Sub-Optimal Configurations, offering a nuanced understanding of the varying degrees of object usability.\nBy sampling options from these 3 sets of configurations [2.1.3], we divide our effort into 2 datasets:\nA. Ideal Configuration Dataset In alignment with its name, the \"Ideal Configuration\" dataset involves questions with the correct answer as Ideal Configuration of Context Object of the question's associated <Task,Utility> combination. To systematically analyze the behavior of models, we introduce 12 distinct variations of this dataset. The creation of these variations is designed to progressively augment the complexity of the datasets, facilitating a comprehensive analysis of model behaviors. Each of the 12 variations comprises approximately 5,000 question-answer pairs, with differing counts of options-ranging from 5 options to 2 options per question. Along with the varying number of options, we also ablated on various sampling techniques. This deliberate variation in the number of options aims to evaluate the impact on success rates of Large Language Models (LLMs) as the level of reasoning complexity increases. 
Whereas, the different sampling techniques help us study their behavior concerning different object distribution.\nProcess: To create these 12 variation datasets, we sampled a Task for n number of times, where n is proportional to the total count of all Commonly Occurring Configurations of its Utility Objects. [Annotation Task 2.1.1] For a given Question's <Task, Utility> Combination, we randomly sample a Context Object from the pool of Context objects. (obtained from 2.1.2). An example of sampling the remaining options is explained below:\nFor 5 option datasets:\n1. Variation-1 : randomly selected Context Object's Ideal Configuration + 4 randomly sampled sub-optimal configurations of the same Context Object 2. Variation-2 : randomly selected Context Object's Ideal Configuration + 2 randomly sampled suboptimal configurations of the same Context Object + 2 randomly sampled sub-optimal configurations of different Context Object belonging to the same <Task,Utility> combination3 " }, { "figure_ref": [], "heading": "Example for Task 1 Variation 1", "publication_ref": [], "table_ref": [], "text": "Question ID: 1, Utility: heating(source), Question: Which of the following objects would be best suited for the purpose of \"heating(source)\" when tasked to \"reheating coffee\"? Options:\n( " }, { "figure_ref": [], "heading": "B. Sub-Optimal Configuration Dataset", "publication_ref": [], "table_ref": [], "text": "The process of selecting an ideal configuration, while challenging for language models, typically does not require intricate multi-step reasoning or the consideration of a wide range of factors. To more rigorously evaluate their reasoning abilities, particularly when faced with only sub-optimal options, we have intentionally excluded all ideal configurations from our sampling methodology. This deliberate exclusion necessitates that the models engage in more sophisticated reasoning, considering various physical state variables, thereby highlighting their capacity for abstract reasoning. By focusing exclusively on sub-optimal configurations, this methodological shift enables a more thorough investigation into the language models' ability to navigate and reason through complex scenarios in the absence of clear-cut ideal solutions.\nProcess: To comprehensively assess language models' abstract reasoning capabilities when confronted with sub-optimal configurations, we create another Variable Level QA dataset and introduce 14 variations of this dataset. Like the previous dataset, each dataset is constructed using distinct sampling strategies and by varying number of options. Across all 14 datasets, we maintain a consistent structure of nearly 5,000 questions.\nEach question in this dataset variation is associated with a Task and Utility combination. While the set of questions remains consistent with previous datasets, the sampling of each task is now proportional to the count of \"Moderate Configurations + Bad Configurations\" (i.e the count of Sub-Optimal Configurations for that question's associated <Task, Utility> combination). An example of 2 sampling techniques used for generating variation datasets is explained below:\nFor 5 option dataset We evaluate and compare the performances of various large Language Models on the following metrics:\n1. Accuracy: The fraction of a number of questions answered correctly by the Language Model.\n2. Bad Rate: The fraction of questions in which the chosen answer belonged to the \"Bad\" configuration pool. 
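As a minimal sketch, the two metrics can be computed from collected model outputs as shown below; the record field names ("predicted", "answer", "bad_options") are assumptions for illustration, not the repository's actual schema.

```python
# Sketch: Accuracy and Bad Rate over a list of answered questions.
# Field names are illustrative assumptions, not the released schema.

def accuracy(results):
    return sum(r["predicted"] == r["answer"] for r in results) / len(results)

def bad_rate(results):
    return sum(r["predicted"] in r["bad_options"] for r in results) / len(results)

results = [
    {"predicted": "A", "answer": "A", "bad_options": {"D", "E"}},
    {"predicted": "E", "answer": "C", "bad_options": {"D", "E"}},
]
print(accuracy(results), bad_rate(results))  # 0.5 0.5
```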
" }, { "figure_ref": [], "heading": "Dataset Summary", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "bad-configuration", "publication_ref": [], "table_ref": [], "text": "An inoperative state of an object with irreparable issues, marked by the presence of \"Bad Variable\" values. sub-optimal configuration group of \"Moderate\" and \"Bad\" configurations" }, { "figure_ref": [ "fig_4" ], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_8", "tab_9" ], "text": "Task 0 Analysis: We observe from Table 3; that the performance of GPT3.5-Turbo and PaLM outperform other models with a much smaller number of parameters. This may be attributed to their size as well as the amount of internet data they've been trained on. They both showcased similar performance suggesting similar object-level reasoning capabilities suggesting an impressive object-level commonsense Reasoning. Even though the performance of every model was observed to be impressive; Mistral-7B outshone all other models of similar size as well as both 13B models. Upon analyzing the trend of average accuracy across various datasets for Task-0[Figure 4] we note an important trend implying a drop in accuracy as we increase the number of options. This suggests degradation in reasoning capabilities as the number of comparisons increases. This trend was observed in Task 1 and Task 2 as well5 . Table 3 andFigure 4 summarizes the performance accuracy of different models on Task-1 Datasets where models were tasked to reason based on Physical Configuration of Objects (using Ideal Configuration Datasets). This task was aimed at judging if language models have an understanding of the difference between Ideal Configuration and Sub-Optimal Configurations. Here as well we witness the superior reasoning capabilities of GPT3.5-Turbo and PaLM, with the latter outperforming the former on each dataset by an average of 8.8%. Amongst the smaller models, we see Mistral7B dominating all other 7B and 6B models with Vicuna7B and ChatGLM-6B performing very close to random performances. For 13B models, LLama2-13B showcased its superior reasoning capabilities and was on average 7.6% more accurate than Vicuna13B. Here apart from the falling average accuracy with increasing options, we also notice some interesting behaviors when we increased the Object Diversity (i.e an increase in the number of sub-optimal configurations of different context object (of same <Task, Utility> Combination) other than the object who's Ideal Configuration is already in the options as the correct answer). options. This sheds light on the existing bias towards using a commonly used object rather than choosing an object after reasoning over every object's complete physical state. However, for big models like PaLM and GPT3.5-Turbo, we notice an improvement in accuracy with the Object Diversity at the extreme. Thus we could conclude that even though there was a drop in accuracy with more diverse options in PaLM and GPT3.5-Turbo; unlike the small models they were not answering excessively based on their bias towards the commonly used object. Task 2 Analysis: Table 5 summarizes the performance of various models on Task-2, where the models were asked to reason over the best choice of object configurations from the Sub-Optimal Configuration Datasets. 
This task can be interpreted as finding the option that would be the least time-consuming and most appropriate amongst a variety of Sub-Optimal Configurations of Context Objects of the question's <Task,Utility> combination. Here we sampled some moderate configurations (neither Ideal nor Bad) and some Bad Configurations, and the best among the moderate ones was kept as the Ground Truth.\n[Refer Appendix D] Our observations reveal the consistent superiority of GPT-3.5-Turbo and PaLM over all other models, with GPT-3.5-Turbo consistently lagging behind PaLM by an average margin of 3.7%. Despite their commendable comparative performance, both models exhibit limitations in comparing across the various physical variables of moderate configurations, resulting in a significant performance downturn. Once again, Vicuna7B and ChatGLM-6B exhibited erratic behavior reflected in their consistently near-random outputs. While LLama2-13B performed better than all other small-scale models, the general observed order was ChatGLM2-6B ∼ Mistral-7B < Vicuna13B < LLama2-13B < GPT3.5-Turbo < PaLM. In addition to the drop in average accuracy with increasing options, Figure 6 shows a trend of improved performance as we increase the count of Bad Configurations within a type of dataset. This could be attributed to the models' ability to differentiate bad configurations from moderate configurations. To analyze what fraction of the responses were correct and what fraction came from the \"Bad Configurations\", we make use of another metric: Bad Rate.\nTable 6 shows the percentage of questions where a \"Bad Configuration\" was predicted as the correct answer.\nIn our evaluations, this means that the model reasoned incorrectly on these questions. To probe LLM reasoning when presented with a varied number of \"bad configurations\", we increased the fraction of bad configurations present in each question's options as we moved from left to right ([v1→v5], [v6→v9], [v10→v12]). With an increased fraction of options belonging to \"bad configurations\", we expected an increase in the bad rate. A good model would have a small bad rate as well as a large gap between the fraction-of-bad-options line (dotted line) and its bad rate value. Figures 7, 8a, and 8b further showcase the trend of observed bad rates against the increasing fraction of bad options inserted in the prompt. While PaLM and GPT3.5-Turbo showed the least rise in bad rates, LLama2-13B outperformed all other small-scale models and consistently approached the bad rates of PaLM and GPT3.5-Turbo.\nAccurately reasoning over an object's current physical state is an important aspect of developing robust embodied agents that can accomplish tasks even when the ideal objects are not available. We created a 3-step framework to break down the decision-making process that we humans go through mentally while choosing an object for task completion. We further created 3 major datasets to evaluate object-level and physical-state-level reasoning capabilities in Large Language Models. We found that even small models have a fair amount of object-based reasoning capability [R1], with performance decreasing as the number of options provided increases. While evaluating commonsense reasoning over an object's physical state, we noticed the Language Models' impressive ability to segregate Ideal Configurations from Bad Configurations. 
However, their reasoning capabilities for analyzing Moderate Configurations take a downturn. Here as well, along with the decrease in performance with an increasing number of options, we noticed for small models a decreasing accuracy within each option-count group of the ideal-level datasets (Task 1) as we increased the object diversity. This brings forth the internal bias of these small models to stick to an object commonly used for a task, even if it is not in an Ideal State or in a condition to be readily used. For larger models, however, we observed a smaller degradation in accuracy. Further, as we varied the number of bad configurations within each type of moderate-level dataset (Task 2), we noticed the ability of larger models like GPT3.5-Turbo and PaLM to avoid being confused into choosing \"bad configurations\" as such options increased. When the bad rates were compared across smaller models, we observed Llama2-13B showing small bad rates consistently across all variations. In view of these observations, we can safely conclude that Language Models like GPT3.5-Turbo, PaLM, and Llama2-13B can prune out appropriate Objects [Task 0] and the extremes (Ideal and Bad Configurations) to an impressive extent. However, they face difficulty in comparing moderate Configurations, which requires a certain amount of abstract reasoning equipped with a commonsense understanding of the world around them [R2]. Smaller language models perform sub-optimally on both [R1] and [R2], showcasing poor commonsense reasoning capabilities and poor generalization beyond the data they were trained on.\nOur work opens up an avenue for improving language models' abstract multi-step reasoning for estimating the physical affordance of everyday objects used in household activities. Future efforts will be directed towards integrating these datasets to train Embodied Language agents and demonstrating the competence of our 3-step architecture in successful task completion when situations are not ideal. Judging the variable values in the real world can be tricky; thus, although the current work focused on handcrafted variables, computing these variables and learning new latent variables from multi-modal inputs for effective analysis and reasoning about an object's applicability is a foreseeable domain to explore." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "This work focuses on dealing with the contextual connotations associated with an object when deciding whether to use it as a substitute for task execution. We further considered abstract physical-variable-level analysis to highlight how usability evolves with various physical abstractions. While determining the values of these variables may appear straightforward in the AI2Thor Simulator, achieving the same in real-life scenarios requires a resilient model. Even if we are able to calculate the variables, there is a limit to the extent to which an object's state can be represented using abstract physical variables. When comparing objects, we sometimes need to understand their exact situation to make a decision about their usability. To develop robust embodied agents capable of such explicit reasoning along with abstract commonsense reasoning, further work is needed on integrating multi-modal reasoning capabilities in addition to commonsense reasoning. 
In addition to this, in this study, we assumed that all the objects were allowed to be used by the agent. In some cases, it might be possible that the human companion of the agent might have kept an object because of a certain way and didn't want it disturbed. Thus the agent might need to re-calculate the object use preference in accordance with this newly imposed human preference. Further works along this line would enable us to move an inch closer toward Embodied agents capable of such constrained planning capabilities in addition to multi-modal commonsense reasoning." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b30", "b28", "b21", "b28", "b29", "b4", "b38", "b9", "b26", "b13", "b0", "b31", "b34", "b16", "b12", "b40", "b24", "b14", "b8", "b10", "b32" ], "table_ref": [], "text": "Previous work has been done in the domains related to the scope of this paper. In this section, we summarize some of them:\nProbing Language Models Understanding what LMs know after large-scale pre-training is an active research area (Rogers et al., 2020). Various probing methods have been developed (Tenney et al., 2019b); (Petroni et al., 2019), and investigations show that LMs capture linguistic (Tenney et al., 2019a); (Liu et al., 2019), factual (Petroni et al., 2019); (Roberts et al., 2020); (Dai et al., 2022), commonsense knowledge (Wang et al., 2019); (Forbes et al., 2019), and even acquire grounded concepts (Patel & Pavlick, 2021).\nCommonSense QA Datasets Evaluating to what level commonsense world understanding LMs possess has been explored by many. (Gu et al., 2023) analyses mental models of LLMs and aligns them with improved models about everyday things; (Bisk et al., 2019) consisted of questions requiring physical commonsense reasoning. Recently, there has been a lot of work in NLP to utilize commonsense for QA, NLI, etc. (Sap et al., 2019); (Talmor et al., 2019). Many of these approaches seek to effectively utilize ConceptNet by reducing the noise retrieved from it (Lin et al., 2019) (Kapanipathi et al., 2020) There have been several other QA Datasets to benchmark CommonSense Reasoning abilities in Language Models, Some of them include: (Geva et al., 2021); (Yang et al., 2018); (Mihaylov et al., 2018);\nReasoning in LLMs Reasoning is a crucial aspect of intelligence, influencing decision-making, problemsolving, and other cognitive abilities. (Huang & Chang, 2023) presents the current state of research on LLMs' reasoning abilities, exploring approaches to improve and evaluate their reasoning skills. (Dziri et al., 2023) investigates problems associated with multistep reasoning with LLMs Some of the works dealing with tackling reasoning in small models are: (Magister et al., 2023) (Fu et al., 2023) (Shridhar et al., 2023) 1. Variation-11: Random context object's Ideal Configuration + 1 randomly sampled sub-optimal configurations of the same context object. We sample 1 option from the Moderate Configurations of the context objects of that particular <Task, Utility> combination, we allow sampling equivalent options as long as either of them is not the correct answer. We also sample 4 options from the Bad Configurations of context objects of that particular <Task, Utility> combination 4 option dataset 1. Variation-6: We sample 4 options from the Moderate Configurations of the context objects of that particular <Task, Utility> combination, here we allow sampling equivalent options as long as either of them is not the correct answer.\n2. 
Variation-7: We sample 3 options from the Moderate Configurations of the context object of that particular <Task,Utility> combination, we allow sampling equivalent options as long as either of them is not the correct answer. We also sample 1 option from the Bad Configurations of the random context objects of that particular <Task, Utility> combination 3. Variation-8: We sample 2 options from the Moderate Configurations of the context object of that particular <Task,Utility> combination, we allow sampling equivalent options as long as either of them is not the correct answer. We also sample 2 options from the Bad Configurations of the random context objects of that particular <Task, Utility> combination 4. Variation-9: We sample 1 option from the Moderate Configurations of the context objects of that particular <Task, Utility> combination, here we allow sampling equivalent options as long as either of them is not the correct answer. We also sample 3 options from the Bad Configurations of context objects of that particular <Task, Utility> combination 3 option dataset 1. Variation-10: We sample 3 options from the Moderate Configurations of the context objects of that particular <Task, Utility> combination, here we allow sampling equivalent options as long as either of them is not the correct answer." }, { "figure_ref": [ "fig_7" ], "heading": "D Annotation Process", "publication_ref": [], "table_ref": [], "text": "The entire annotation process was text-based and was executed by circulating a text-based questionnaire.\nFigure 11 summarizes the entire annotation process for generating Ground Truths for all 3 datasets. Creation of utility-object mappings that were further used as the backbone for all the tasks and datasets, involved the use of GPT3.5-Turbo and Human Annotation. The annotators were asked to label 100 objects with utilities from a list of 22 utilities. The inter-annotator agreement was calculated by formulating this as a multi-annotator-multi-label scenario where each annotator could annotate a variable number of labels per object. The annotator agreement was 89.2%, suggesting a high degree of agreement within the annotators. The consolidated utility-object mappings are found Link" }, { "figure_ref": [], "heading": "D.2 Human Annotations: Task-Object Mappings", "publication_ref": [], "table_ref": [], "text": "To curate ground truth task-object mappings also called Context Mappings; we ask the annotators to choose objects appropriate for a <Task, Utility> combination amongst the utility objects. As one question can have more than 1 possible correct object, we calculated inter-annotator agreement by modeling this as a process similar to the previous annotation task. The annotator agreement was observed to be: 81.0%, suggesting a high degree of agreement amongst the annotators. The question posed to the annotators was similar to the ones used to curate Task 0 (Object Level Dataset) and the obtained responses were used as Ground Truth for Task 0 Dataset. The processed GT can be found here: Link" }, { "figure_ref": [], "heading": "D.3 Human Annotations: Common Object-Variables Mappings", "publication_ref": [], "table_ref": [], "text": "To get the common variable values for all the objects; we further ask the annotators to provide all commonly occurring variable values of each object. Using these we created all possible configurations. 
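One way to read this step is as a Cartesian product over the five annotated variables; the sketch below illustrates it, with per-variable value lists that are plausible annotations for a Microwave rather than the released annotations.

```python
from itertools import product

# Sketch: expanding the annotated per-variable value sets of one object into
# every possible configuration (Cartesian product over the five variables).
# The value lists below are illustrative, not the released data.
microwave_values = {
    "mass": ["super-heavy"],
    "temperature": ["RoomTemp", "Hot"],
    "material": ["Metal"],
    "already in use": ["free", "reversible-using", "irreversible-using"],
    "condition": ["clean", "dirty", "broken"],
}

def all_configurations(values_by_variable):
    keys = list(values_by_variable)
    for combo in product(*(values_by_variable[k] for k in keys)):
        yield dict(zip(keys, combo))

configs = list(all_configurations(microwave_values))
print(len(configs))  # 1 * 2 * 1 * 3 * 3 = 18 configurations for this object
```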
Upon calculating the inter-annotator agreement as earlier annotation tasks, we observed an inter-annotator agreement of 89.9 when averaged across all 5 variables. The processed output can be found here: Link" }, { "figure_ref": [], "heading": "D.4 Human Annotations: Ideal Object Configurations", "publication_ref": [ "b18" ], "table_ref": [], "text": "Further, we ask the annotators to categorize variable values into 3 categories: Ideal, Moderate, and Bad. \"Ideal\" refers to an ideal state of the object; \"moderate\" means you have to spend some time getting the object in an ideal state before it can be used whereas \"bad\" means the object is unusable. Some variable values are obvious; like: \"free\" would be ideal whereas; \"reversible-using\" would be moderate; and \"irreversible-using\" would be bad. So we only ask them to give preference for variables like Material. The observed Krippendorff's reliability alpha (Krippendorff, 2011) among the raters for classifying material variable values into categorical variables: \"Ideal\", \"Moderate\" and \"Bad\" was 0.87, suggesting a high degree of agreement amongst the annotators. The Ideal Configurations can be found Link." }, { "figure_ref": [], "heading": "D.5 Human Annotations: Moderate Configurations", "publication_ref": [], "table_ref": [], "text": "After classifying the variable values into these 3 categories, we asked them to arrange the values in increasing order of their appropriateness for a given <Task,Utility> combination. For ranks as ordinal variables, we observed Krippendorff's alpha value to be 0.89, showing a high agreement amongst the annotators. Further, we set a penalty for each moderate variable value and consequently generate 2 penalty scores for each configuration: material penalty and time penalty. It is using these penalties we further arrange the configurations based on a time penalty and then a material penalty. This helps us create a relative ranking within the moderate configurations and enables us to sample \"moderate\" options when curating Task 2 Dataset." }, { "figure_ref": [], "heading": "D.6 Human Annotations: Bad Configurations", "publication_ref": [], "table_ref": [], "text": "For the Bad Configurations, we set abnormally high values for material and time penalties. These configurations help us sample \"bad\" options when curating Task 2 Dataset. The sub-optimal configurations including \"moderate\" and \"bad\" configurations can be found here: Link" }, { "figure_ref": [], "heading": "E Prompts used", "publication_ref": [], "table_ref": [], "text": "Prompts used for various models can be found at this link. Link" }, { "figure_ref": [], "heading": "Mistral-7B", "publication_ref": [], "table_ref": [], "text": "Answer: A Reason: Based on the given physical variables, object A (Microwave) would be the best suited for the purpose of heating(source) when tasked to reheating coffee. The microwave is readily available and clean, which means it can be used immediately without any additional time required for cleaning or preparation. Additionally, microwaves are generally safe for use in heating food and are less likely to break down compared to other materials. The microwave temperature is also at a safe level for heating food. While other options such as stove burners and coffee machines may also be suitable for heating coffee, they may require additional time and effort to clean and prepare for use, and their availability may be limited due to current use or maintenance requirements. 
Therefore, the microwave is the best choice for reheating coffee in a household setting." }, { "figure_ref": [], "heading": "ChatGLM-6B", "publication_ref": [], "table_ref": [], "text": "Answer: D Reason: The best option for reheating coffee would be the Microwave object with id 5, which has already been mentioned to be reversible and in use, but also has a lower mass compared to other options, making it easier to handle and transport. Additionally, it is a metal microwave, which can provide better heating efficiency compared to other options made of different materials." }, { "figure_ref": [], "heading": "ChatGLM2-6B", "publication_ref": [], "table_ref": [], "text": "Answer: B Reason: The stove burner is suitable for heating coffee as it has a high temperature and is made of metal, which is conductive and can efficiently transfer heat. Additionally, it is in a state of reversible using, which means it is temporarily unavailable but can be used after some time. The condition is dirty, which is not relevant to the task of reheating coffee." }, { "figure_ref": [], "heading": "Task 2", "publication_ref": [], "table_ref": [], "text": "Question ID: 1, Utility: heating(source), Question: Which of the following objects would be best suited for the purpose of \"heating(source)\" when tasked to \"reheating coffee\"?, Options:\n(A) object name: CoffeeMachine, mass: heavy, temperature: RoomTemp, material: Metal, already in use: reversible-using, condition: dirty " }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "A.1 Dataset Specifics (a) Plot showing number of objects(x) for each utility(y), as obtained after Utility based pruning 2.1.1 (b) Plot showing number of tasks(x) for each utility(y)" }, { "figure_ref": [], "heading": "A.2 Variables", "publication_ref": [], "table_ref": [], "text": "Here we describe the variables used to describe an object's physical state. We kept it at an abstract level to judge basic commonsense reasoning capabilities.\n1. Mass: Based on an estimate of the weight of an object: (i) light[0,1 Kg] (ii) medium [1,5 Kg] (iii) heavy[5,10 Kg] (iv) super-heavy[> 10 Kg]" }, { "figure_ref": [], "heading": "2.", "publication_ref": [], "table_ref": [], "text": "Material: what material is used to make that object 3. Temperature: the surface temperature of the object: Cold/Hot/RoomTemp 4. already in use: tells us about the availability of the object: reversible-using/irreversible-using/free 5. condition: tells us about the condition of the object: broken/clean/dirty" }, { "figure_ref": [], "heading": "A.3 Human Annotations: Object-Utility Mappings", "publication_ref": [], "table_ref": [], "text": "Table 2.1.1 summarizes the collected and refined Object-Utility pairings. Throughout this work, we have referred to these as Utility Mappings." }, { "figure_ref": [], "heading": "B Dataset Creation", "publication_ref": [], "table_ref": [], "text": "In addition to the variations explained in 2.2.2, we further create 3 more types of datasets for each of the 3 tasks. These would be each consisting of 4, 3, and 2 options. The method we choose for sampling these options enables us to analyze the performance and reasoning capabilities of LLMs in zero-shot commonsense reasoning. The datasets are:\n2. Variation-11: We sample 2 options from the Moderate Configurations of the context objects of that particular <Task, Utility> combination, here we allow sampling equivalent options as long as either of them is not the correct answer. 
We also sample 1 option from the Bad Configurations of context objects of that particular <Task,Utility> combination 3. Variation-12: We sample 1 option from the Moderate Configurations of the context object of that particular <Task,Utility> combination. We also sample 2 options from the Bad Configurations of the random context objects of that particular <Task, Utility> combination 2 option dataset 1. Variation-13: We sample 2 options from the Moderate Configurations of the context objects of that particular <Task, Utility> combination, here we allow sampling equivalent options as long as either of them is not the correct answer. " }, { "figure_ref": [], "heading": "F Example Responses", "publication_ref": [], "table_ref": [], "text": "Task 0\nQuestion ID: 1, Utility: heating(source), Question: Which of the following objects would be best suited for the purpose of \"heating(source)\" when tasked to \"reheating coffee\"? Options:\n(A) object name: Toaster, " }, { "figure_ref": [], "heading": "GPT3.5-Turbo", "publication_ref": [], "table_ref": [], "text": "Answer: B Reason: Option B, StoveBurner, would be best suited for the purpose of reheating coffee in terms of safety, feasibility, minimum effort, and minimum time. Using a stove burner allows for precise control of heat, enabling a gradual and controlled reheating of the coffee. It's a common method for reheating liquids and is safer than using a toaster, which is not designed for reheating liquids and could pose a safety risk or result in spillage or damage to the toaster." }, { "figure_ref": [], "heading": "LLama2-13B", "publication_ref": [], "table_ref": [], "text": "Answer: A Reason: Toaster is the best option for reheating coffee as it is designed specifically for heating bread and other toastable items, and it can heat up quickly and evenly. Using a stove burner would require more effort and time to heat the coffee, and it may not heat it evenly. Additionally, using a toaster is generally safer than using an open flame on a stove burner, as it has built-in safety features such as a cool-touch exterior and automatic shut-off." }, { "figure_ref": [], "heading": "Vicuna7B", "publication_ref": [], "table_ref": [], "text": "Answer: B Reason: Option B, a stove burner, is the best choice for reheating coffee as it is a safe and efficient heating source that requires minimal effort and time. It allows for precise temperature control and can be easily used to reheat coffee without causing any damage to the coffee or creating any safety hazards." }, { "figure_ref": [], "heading": "ChatGLM2-6B", "publication_ref": [], "table_ref": [], "text": "Answer: A Reason: Toaster is a device that heats up food items such as coffee. It is a more convenient and safer option compared to a stove burner. The toaster has a simple operation, low maintenance and can be used to heat up a single serving of coffee at a time. The time it takes to heat up the coffee is minimal and there is no need for additional effort to operate it." }, { "figure_ref": [], "heading": "Task 1", "publication_ref": [], "table_ref": [], "text": "Question ID: 1, Utility: heating(source), Question: Which of the following objects would be best suited for the purpose of \"heating(source)\" when tasked to \"reheating coffee\"?, Options:\n( " }, { "figure_ref": [], "heading": "PaLM", "publication_ref": [], "table_ref": [], "text": "Answer: A Reason: Microwaves are safe and easy to use, and they can heat coffee quickly and evenly. 
They are also relatively lightweight and easy to move, so they can be used in a variety of settings." }, { "figure_ref": [], "heading": "GPT3.5-Turbo", "publication_ref": [], "table_ref": [], "text": "Answer: D Reason: Microwave (option D) is the best choice as it is available (reversible-using), in clean condition, and suitable for reheating coffee. It is not irreversibly occupied and can be used for the task without a time penalty." }, { "figure_ref": [], "heading": "LLama2-13B", "publication_ref": [], "table_ref": [], "text": "Answer: A Reason: Microwaves are designed specifically for heating and cooking, making them a safer choice compared to stovetop burners or coffee machines. Additionally, microwaves are generally made of durable materials and have a more delicate design, which makes them less fragile than stovetop burners or coffee machines. Furthermore, microwaves are quick and efficient when it comes to heating up liquids, taking less time than stovetop burners or coffee machines. Finally, option A has already been used (already in use: \"free\") and its condition is clean, which suggests that it is ready for immediate use." }, { "figure_ref": [], "heading": "Vicuna7B", "publication_ref": [], "table_ref": [], "text": "Answer: D Reason: This is because a microwave is a common and convenient appliance for reheating coffee quickly and efficiently, and it can be used multiple times without any issues. Additionally, a microwave is generally safe and easy to use, and it does not require any time penalty or additional setup to get started with the task." }, { "figure_ref": [], "heading": "Vicuna13B", "publication_ref": [], "table_ref": [], "text": "Answer: A Reason: A microwave is the best suited object for the purpose of reheating coffee as it is a fast and efficient way to heat liquids. It is also a reversible use object as it can be used again after the task is completed. Additionally, a microwave is a safe and convenient option as it eliminates the need for hot surfaces or open flames, which can be hazardous. The mass and material properties of a microwave also make it a safe and practical choice for this task." }, { "figure_ref": [], "heading": "PaLM", "publication_ref": [], "table_ref": [], "text": "Answer: A Reason: The coffee machine is the most efficient way to reheat coffee, and it is also the safest option." }, { "figure_ref": [], "heading": "GPT3.5-Turbo", "publication_ref": [], "table_ref": [], "text": "Answer: E Reason: StoveBurner is available, clean, and can be used without time penalty (reversible-using). It is suitable for heating and reheating tasks and is generally quick to heat up." }, { "figure_ref": [], "heading": "LLama2-13B", "publication_ref": [], "table_ref": [], "text": "Answer: C Reason: The Coffee Machine is the best configuration for reheating coffee because it is the least time-consuming and most reliable option. Although it is currently dirty, it can be cleaned after use, making it a more reliable choice than the Microwave or StoveBurner, which may have permanent damage if used with dirty or wet objects. Additionally, the Coffee Machine is already available and does not require any time to \"get started\" like the Microwave or StoveBurner, which need time to heat up. Therefore, the Coffee Machine is the best option for reheating coffee based on commonsense reasoning related to the provided physical variables." 
}, { "figure_ref": [], "heading": "Vicuna-7B", "publication_ref": [], "table_ref": [], "text": "Answer: C Reason: The Coffee Machine is the best option for the purpose of \"heating(source)\" when tasked to \"reheating coffee\" because it is already available and not in use, and it is also in a dirty condition. Therefore, it is the most suitable option to use for this task." }, { "figure_ref": [], "heading": "Vicuna-13B", "publication_ref": [], "table_ref": [], "text": "Answer: A Reason: The best option for reheating coffee is the 'Coffee Machine' because it is specifically designed for making and heating coffee. It has a 'reversible-using' availability status, meaning it can be used after some time. It is also made of metal, which is a safe and durable material for heating purposes. Additionally, it is already dirty, which means it can be cleaned after use without any extra time penalty. Overall, the 'Coffee Machine' is the most suitable option for reheating coffee." }, { "figure_ref": [], "heading": "Mistral-7B", "publication_ref": [], "table_ref": [], "text": "Answer: A Reason: The Coffee Machine with already in use = reversible using and condition = dirty is the best option for reheating coffee. It is readily available and can be easily cleaned before use, making it safe and efficient" }, { "figure_ref": [], "heading": "ChatGLM-6B", "publication_ref": [], "table_ref": [], "text": "Answer: A Reason: The Coffee Machine is the best suited for the purpose of heating(source) when tasked to reheating coffee, as it is lightweight, easy to handle and can be used multiple times without any loss of heat. Additionally, it is metal and can be easily heated to a high temperature, making it suitable for heating coffee." }, { "figure_ref": [], "heading": "ChatGLM2-6B", "publication_ref": [], "table_ref": [], "text": "Answer: C Reason: The Coffee Machine has a free state which means it can be readily used without time penalty. Additionally, it has a dirty condition which is temporary and can be overcome. It is also made of metal which may not be safe for food contact. It is also a heavy object which may be difficult to move." }, { "figure_ref": [], "heading": "Utility Objects", "publication_ref": [], "table_ref": [], "text": "Heating " } ]
In this study, we explore the sophisticated domain of task planning for robust household embodied agents, with a particular emphasis on the intricate task of selecting substitute objects. We introduce the CommonSense Object Affordance Task (COAT), a novel framework designed to analyze reasoning capabilities in commonsense scenarios. This approach is centered on understanding how these agents can effectively identify and utilize alternative objects when executing household tasks, thereby offering insights into the complexities of practical decision-making in real-world environments. Drawing inspiration from human decision-making, we explore how large language models tackle this challenge through three meticulously crafted commonsense question-and-answer datasets, featuring refined rules and human annotations. Our evaluation of state-of-the-art language models on these datasets sheds light on three pivotal considerations: 1) aligning an object's inherent utility with the task at hand, 2) navigating contextual dependencies (societal norms, safety, appropriateness, and efficiency), and 3) accounting for the current physical state of the object. To maintain accessibility, we introduce five abstract variables reflecting an object's physical condition, modulated by human insights to simulate diverse household scenarios. Our contributions include insightful Object-Utility mappings addressing the first consideration and two extensive QA datasets (15k and 130k questions) probing the intricacies of contextual dependencies and object states. The datasets, along with our findings, are accessible at: https://github.com/com-phy-affordance/COAT. This research not only advances our understanding of physical commonsense reasoning in language models but also paves the way for future improvements in household agent intelligence.
Physical Reasoning and Object Planning for Household Embodied Agents
[ { "figure_caption": "Figure 1 :1Figure 1: We divide the whole decision-making process into 3 phases. Pruning out options firstly based on Utility then Contextual Appropriateness and finally on Physical State. This shows human adeptness in comparing appropriateness across an array of factors and coming up with a substitute object even in the absence of the ideal object [Cake Knife]. Our work provides QA datasets about this type of commonsense reasoning", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "A) object name: Microwave, mass: super-heavy, temperature: RoomTemp, material: Metal, already in use: free, condition: clean (B) object name: StoveBurner, mass: super-heavy, temperature: RoomTemp, material: Metal already in use: irreversible-using, condition: dirty (C) object name: CoffeeMachine, mass: heavy, temperature: RoomTemp, material: Metal already in use: irreversible-using, condition: dirty (D) object name: Microwave, mass: super-heavy, temperature: RoomTemp, material: Metal already in use: reversible-using, condition: clean (E) object name: Microwave, mass: super-heavy, temperature: RoomTemp, material: Metal, already in use: irreversible-using, condition: broken Correct Answer: A", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Table 3 :3Figure 4: Average Accuracy of Various models on Task 0 as we increase option count", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 55Figure5illustrates the decreasing performance of all small models as we increase the Object Diversity in the", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Comparative Plot showcasing the variations in Task:2 performances as we keep increasing the Count of Bad Configurations in Options from left to right.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Bad rates for various 5 option datasets as we increase the count of bad options. The difference tells us the models's ability to not get confused as we increase the count of bad options from left to right", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "2 . 3 :23 Random context object's Ideal Configuration + 1 randomly sampled sub-optimal configurations of the different context object belonging to the same <Task,Utility> combination We sample 3 options from the Moderate Configurations and 2 options from the Bad Configurations of the same <Task, Utility> combination's context objects 2. Variation-4: We sample 2 options from the Moderate Configurations and 3 options from the Bad Configurations of the same <Task, Utility> combination's context objects 3. 
Variation-5:", "figure_data": "", "figure_id": "fig_6", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Figure summarizing our annotation process", "figure_data": "", "figure_id": "fig_7", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "(B) object name: CoffeeMachine, mass: heavy, temperature: RoomTemp, material: Metal, already in use: irreversible-using, condition: clean (C) object name: CoffeeMachine, mass: heavy, temperature: RoomTemp, material: Metal, already in use: free, condition: dirty (D) object name:Microwave, mass: super-heavy, temperature: RoomTemp, material: Metal, already in use: reversible-using, condition: dirty (E) object name:StoveBurner, mass: super-heavy, temperature: RoomTemp, material: Metal, already in use: irreversible-using, condition: clean Correct Answer: C", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "UtilitiesObjectsCarryingComfortHeating(vessel)BowlBedPanCleaningWashingMixing(tool)DishSponge SinkBasinSpoonDisposingCuttingMixing (vessel)GarbageCan KnifeCupStorageEntertainment Heating(source)FridgeLaptopMicrowaveReadingBreakingIncreasing HeightNewspaperBaseballBat ChairEatingWritingPhysical ActivityApplePenDumbellDecoration Light SourceSurface SupportStatueFloor Lamp CounterTop(a) A representational subset of utilized Utilities", "figure_id": "tab_0", "figure_label": "2b", "figure_type": "table" }, { "figure_caption": "Abstract Values for Various VariablesGathering Common Object Configuration In the context of this study, a Configuration denotes the physical state of an object characterized by five variables. While a wax chair might be conceivable in the realm of Madame Tussauds, it remains highly improbable in everyday household scenarios. Thus to ensure the relevance of configurations to common household scenes, human annotators were tasked with selecting plausible and frequently occurring variable values for each object. (See Appendix D)", "figure_data": "", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Question ID: 1,Utility: heating(source),Question: Which of the following objects would be best suited for the purpose of \"heating(source)\"when tasked to \"reheating coffee\"?Options:(A) Toaster(B) StoveBurnerCorrect Answer: B", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "on Common Configurations generated in the annotation task[2.1.3], we create 2 Variable Level QA datasets to analyze the reasoning capabilities of Language Models on pruning out options based on their current physical state. The 2 datasets differ in the level of difficulty and the level of reasoning required to answer the questions correctly. We describe the creation process in this section. 
The question in both datasets remains the same as that of Object Level Dataset However, unlike the first dataset where the options were objects, this time we give various Configurations of Context Objects as options.", "figure_data": "Configurationobject name: Microwave, mass: super-heavy, temperature: RoomTemp, material: Metal already in use: free condition:clean", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Summary of Datasets Used for Experiments an object is ready to perform a task without requiring additional time or effort, marked by the simultaneous occurrence of all ideal variable values in its description moderate-configuration A state requiring additional effort to reach an ideal condition for task performance, marked by the presence of moderate variable values.", "figure_data": "3.2 GlossaryTermDefinitionobjectsa set of 100 household objectsutilitiesa set of 22 abstract utilitiestasksa set of 75 household activitiesquestiona <Task,Utility> combination indicating the specific task aspect to emphasizevariablea symbolic variable used to explain an object's physical stateconfigurationcomplete description of an object using 5 symbolic variablesutility-mappingmapping between utility and object; facilitates Utility-Objectscontext-mappingmapping between task and object; facilitates Context-Objectsideal-configurationA state where", "figure_id": "tab_7", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance Accuracy for Various Models when evaluated on Task 2 (Suboptimal Configuration Dataset)", "figure_data": "ModelAccuracy for Task_2 Variations ⇑5-option4-option3-option2-optionv1v2v3v4v5v6v7v8v9v10v11v12v13v14PaLM32.4 38.0 46.3 55.3 64.1 40.8 47.657.763.2 52.4 64.8 69.0 70.2 80.7GPT3.5-Turbo28.3 30.6 37.5 46.4 61.6 34.6 40.1 50.72 61.2 46.1 56.7 71.3 61.1 80.2vicuna13B22.5 23.9 28.0 32.0 32.8 27.7 31.035.344.2 37.3 42.9 50.0 54.8 68.4LLama2-13B23.0 24.4 33.5 42.2 44.9 31.6 32.043.453.9 39.9 50.5 66.2 57.4 75.7Mistral-7B20.7 22.4 27.8 25.8 27.8 25.8 29.032.337.6 35.0 40.6 47.7 52.6 63.7ChatGLM2-6B 21.6 22.2 26.5 28.2 29.0 25.6 30.633.636.3 36.5 41.9 50.7 53.7 61.4Vicuna-7B20.3 21.6 21.6 21.7 22.7 26.4 25.927.628.2 33.3 35.6 38.3 48.5 50.8ChatGLM-6B21.5 22.4 22.6 23.9 23.5 25.0 27.329.229.2 33.1 34.8 36.3 48.2 53.6", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "5-Turbo's performance. Based on these figures and analyses we can safely conclude that most of the models had a sense of what a Bad Configuration is but showed limited reasoning capabilities to evaluate moderate configurations based on physical abstract variables. Thus through Task 1 and Task 2, we were able to evaluate and analyze commonsense reasoning capabilities of Language Models over physical state variables [R2]", "figure_data": "ModelBad Rate For Task_2 Variations ⇓5-option4-option3-option2-optionv1v2v3v4v5v6v7v8v9v10 v11v12 v13 v14PaLM-4.210.3 21.2 35.9-6.015.3 36.8-8.831-19.3GPT3.5-Turbo-5.512.5 24.4 38.4-7.717.5 38.8-9.728.7-19.8vicuna13B-11.2 23.2 39.0 67.2-15.0 33.3 55.8-18.750-31.6LLama2-13B-7.218.8 29.3 55.1-9.624.6 46.1-13.0 33.8-24.3Mistral-7B-15.8 28.7 48.2 72.2-17.0 35.7 62.4-19.5 52.3-36.3ChatGLM2-6B-15.0 31.5 48.6 71.0-17.0 34.6 63.7-20.4 49.3-38.6", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" } ]
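To make the accuracy and bad-rate numbers reported in the tables above concrete, the sketch below shows one way such multiple-choice completions could be scored automatically. It is a minimal illustration rather than the authors' evaluation code: the "Answer: X" extraction pattern and the `model_output`, `correct_answer`, and `bad_options` field names are assumptions about how the released QA records might be organized.

```python
import re
from typing import Dict, List

# Pattern for pulling the chosen option letter out of "Answer: X Reason: ..." completions.
ANSWER_RE = re.compile(r"Answer:\s*\(?([A-E])\)?", re.IGNORECASE)

def extract_choice(model_output: str) -> str:
    """Return the option letter the model selected, or '' if none was found."""
    match = ANSWER_RE.search(model_output)
    return match.group(1).upper() if match else ""

def score(records: List[Dict]) -> Dict[str, float]:
    """Compute accuracy and bad rate over a list of QA records.

    Each record is assumed (hypothetical field names) to hold the raw model
    completion, the correct option letter, and the set of option letters that
    correspond to 'bad' configurations.
    """
    correct = bad = 0
    for rec in records:
        choice = extract_choice(rec["model_output"])
        if choice == rec["correct_answer"]:
            correct += 1
        elif choice in rec.get("bad_options", set()):
            bad += 1
    n = len(records)
    return {"accuracy": correct / n, "bad_rate": bad / n}

# Toy usage with the reheating-coffee example from the appendix.
example = [{
    "model_output": "Answer: A Reason: The Coffee Machine ...",
    "correct_answer": "C",
    "bad_options": {"E"},
}]
print(score(example))  # {'accuracy': 0.0, 'bad_rate': 0.0}
```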
Ayush Agrawal; Raghav Prabhakar; Anirudh Goyal; Dianbo Liu
[ { "authors": "Yonatan Bisk; Rowan Zellers; Le Ronan; Jianfeng Bras; Yejin Gao; Choi", "journal": "", "ref_id": "b0", "title": "Piqa: Reasoning about physical commonsense in natural language", "year": "2019" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Wei-Lin Chiang; Zhuohan Li; Zi Lin; Ying Sheng; Zhanghao Wu; Hao Zhang; Lianmin Zheng; Siyuan Zhuang; Yonghao Zhuang; Joseph E Gonzalez; Ion Stoica; Eric P Xing", "journal": "", "ref_id": "b2", "title": "Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality", "year": "2023-03" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann; Parker Schuh; Kensen Shi; Sasha Tsvyashchenko; Joshua Maynez; Abhishek Rao; Parker Barnes; Yi Tay; Noam Shazeer; Emily Vinodkumar Prabhakaran; Nan Reif; Ben Du; Reiner Hutchinson; James Pope; Jacob Bradbury; Michael Austin; Guy Isard; Pengcheng Gur-Ari; Toju Yin; Anselm Duke; Sanjay Levskaya; Sunipa Ghemawat; Henryk Dev; Xavier Michalewski; Vedant Garcia; Kevin Misra; Liam Robinson; Denny Fedus; Daphne Zhou; David Ippolito; Hyeontaek Luan; Barret Lim; Alexander Zoph; Ryan Spiridonov; David Sepassi; Shivani Dohan; Mark Agrawal; Andrew M Omernick; Thanumalayan Dai; Marie Sankaranarayana Pillai; Aitor Pellat; Erica Lewkowycz; Rewon Moreira; Oleksandr Child; Katherine Polozov; Zongwei Lee; Xuezhi Zhou; Brennan Wang; Mark Saeta; Orhan Diaz; Michele Firat; Jason Catasta; Kathy Wei; Douglas Meier-Hellstern; Jeff Eck; Slav Dean; Noah Petrov; Fiedel", "journal": "", "ref_id": "b3", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Yuqian Dai; Marc De Kamps; Serge Sharoff", "journal": "European Language Resources Association", "ref_id": "b4", "title": "BERTology for machine translation: What BERT knows about linguistic difficulties for translation", "year": "2022-06" }, { "authors": "Ernest Davis; Gary Marcus", "journal": "Commun. 
ACM", "ref_id": "b5", "title": "Commonsense reasoning and commonsense knowledge in artificial intelligence", "year": "2015-08" }, { "authors": "Zhengxiao Du; Yujie Qian; Xiao Liu; Ming Ding; Jiezhong Qiu; Zhilin Yang; Jie Tang", "journal": "", "ref_id": "b6", "title": "Glm: General language model pretraining with autoregressive blank infilling", "year": "2022" }, { "authors": "Zhengxiao Du; Yujie Qian; Xiao Liu; Ming Ding; Jiezhong Qiu; Zhilin Yang; Jie Tang", "journal": "", "ref_id": "b7", "title": "Glm: General language model pretraining with autoregressive blank infilling", "year": "2022" }, { "authors": "Nouha Dziri; Ximing Lu; Melanie Sclar; Lorraine Xiang; Liwei Li; Bill Jiang; Peter Yuchen Lin; Chandra West; Bhagavatula; Le Ronan; Jena D Bras; Soumya Hwang; Sean Sanyal; Xiang Welleck; Allyson Ren; Zaid Ettinger; Yejin Harchaoui; Choi", "journal": "", "ref_id": "b8", "title": "Faith and fate: Limits of transformers on compositionality", "year": "2023" }, { "authors": "Maxwell Forbes; Ari Holtzman; Yejin Choi", "journal": "", "ref_id": "b9", "title": "Do neural language representations learn physical commonsense?", "year": "2019" }, { "authors": "Yao Fu; Hao Peng; Litu Ou; Ashish Sabharwal; Tushar Khot", "journal": "", "ref_id": "b10", "title": "Specializing smaller language models towards multi-step reasoning", "year": "2023" }, { "authors": "Jensen Gao; Bidipta Sarkar; Fei Xia; Ted Xiao; Jiajun Wu; Brian Ichter; Anirudha Majumdar; Dorsa Sadigh", "journal": "", "ref_id": "b11", "title": "Physically grounded vision-language models for robotic manipulation", "year": "2023" }, { "authors": "Mor Geva; Daniel Khashabi; Elad Segal; Tushar Khot; Dan Roth; Jonathan Berant", "journal": "", "ref_id": "b12", "title": "Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies", "year": "2021" }, { "authors": "Yuling Gu; Bhavana Dalvi Mishra; Peter Clark", "journal": "", "ref_id": "b13", "title": "Do language models have coherent mental models of everyday things", "year": "2023-07" }, { "authors": "Jie Huang; Kevin Chen; -Chuan Chang", "journal": "", "ref_id": "b14", "title": "Towards reasoning in large language models: A survey", "year": "2023" }, { "authors": "Albert Q Jiang; Alexandre Sablayrolles; Arthur Mensch; Chris Bamford; Devendra Singh Chaplot; Diego De Las Casas; Florian Bressand; Gianna Lengyel; Guillaume Lample; Lucile Saulnier; Renard Lélio; Marie-Anne Lavaud; Pierre Lachaux; Teven Stock; Thibaut Le Scao; Thomas Lavril; Timothée Wang; William El Lacroix; Sayed", "journal": "Mistral", "ref_id": "b15", "title": "", "year": "2023" }, { "authors": "Pavan Kapanipathi; Veronika Thost; Sankalp Siva; Spencer Patel; Ibrahim Whitehead; Avinash Abdelaziz; Maria Balakrishnan; Kshitij Chang; Chulaka Fadnis; Bassem Gunasekara; Nicholas Makni; Kartik Mattei; Achille Talamadupula; Fokoue", "journal": "", "ref_id": "b16", "title": "Infusing knowledge into the textual entailment task using graph convolutional networks", "year": "2020-04" }, { "authors": "Eric Kolve; Roozbeh Mottaghi; Winson Han; Eli Vanderbilt; Luca Weihs; Alvaro Herrasti; Matt Deitke; Kiana Ehsani; Daniel Gordon; Yuke Zhu; Aniruddha Kembhavi; Abhinav Gupta; Ali Farhadi", "journal": "", "ref_id": "b17", "title": "Ai2-thor: An interactive 3d environment for visual ai", "year": "2022" }, { "authors": "Klaus Krippendorff", "journal": "", "ref_id": "b18", "title": "Computing krippendorff's alpha-reliability", "year": "2011" }, { "authors": "Lei Li; Jingjing Xu; Qingxiu Dong; Ce Zheng; Qi Liu; Lingpeng 
Kong; Xu Sun", "journal": "", "ref_id": "b19", "title": "Can language models understand physical concepts?", "year": "2023" }, { "authors": "Xinyue Bill Yuchen Lin; Jamin Chen; Xiang Chen; Ren", "journal": "", "ref_id": "b20", "title": "Kagnet: Knowledge-aware graph networks for commonsense reasoning", "year": "2019" }, { "authors": "Nelson F Liu; Matt Gardner; Yonatan Belinkov; Matthew E Peters; Noah A Smith", "journal": "", "ref_id": "b21", "title": "Linguistic knowledge and transferability of contextual representations", "year": "2019-06" }, { "authors": "Charlotte Lucie; Jonathan Magister; Jakub Mallinson; Eric Adamek; Aliaksei Malmi; Severyn", "journal": "", "ref_id": "b22", "title": "Teaching small language models to reason", "year": "2023" }, { "authors": "John Mccarthy", "journal": "", "ref_id": "b23", "title": "Programs with common sense", "year": "1959" }, { "authors": "Todor Mihaylov; Peter Clark; Tushar Khot; Ashish Sabharwal", "journal": "", "ref_id": "b24", "title": "Can a suit of armor conduct electricity? a new dataset for open book question answering", "year": "2018" }, { "authors": " Openai", "journal": "", "ref_id": "b25", "title": "", "year": "2023" }, { "authors": "Roma Patel; Ellie Pavlick", "journal": "", "ref_id": "b26", "title": "Mapping language models to grounded conceptual spaces", "year": "2021" }, { "authors": "Baolin Peng; Chunyuan Li; Pengcheng He; Michel Galley; Jianfeng Gao", "journal": "", "ref_id": "b27", "title": "Instruction tuning with gpt-4", "year": "2023" }, { "authors": "Fabio Petroni; Tim Rocktäschel; Patrick Lewis; Anton Bakhtin; Yuxiang Wu; Alexander H Miller; Sebastian Riedel", "journal": "", "ref_id": "b28", "title": "Language models as knowledge bases?", "year": "2019" }, { "authors": "Adam Roberts; Colin Raffel; Noam Shazeer", "journal": "", "ref_id": "b29", "title": "How much knowledge can you pack into the parameters of a language model", "year": "2020-11" }, { "authors": "Anna Rogers; Olga Kovaleva; Anna Rumshisky", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b30", "title": "A primer in BERTology: What we know about how BERT works", "year": "2020" }, { "authors": "Maarten Sap; Le Ronan; Emily Bras; Chandra Allaway; Nicholas Bhagavatula; Hannah Lourie; Brendan Rashkin; Noah A Roof; Yejin Smith; Choi", "journal": "", "ref_id": "b31", "title": "Atomic: An atlas of machine commonsense for if-then reasoning", "year": "2019-07" }, { "authors": "Kumar Shridhar; Alessandro Stolfo; Mrinmaya Sachan", "journal": "", "ref_id": "b32", "title": "Distilling reasoning capabilities into smaller language models", "year": "2023" }, { "authors": "Robyn Speer; Joshua Chin; Catherine Havasi", "journal": "AAAI Press", "ref_id": "b33", "title": "Conceptnet 5.5: An open multilingual graph of general knowledge", "year": "2017" }, { "authors": "Alon Talmor; Jonathan Herzig; Nicholas Lourie; Jonathan Berant", "journal": "", "ref_id": "b34", "title": "Commonsenseqa: A question answering challenge targeting commonsense knowledge", "year": "2019" }, { "authors": "Ian Tenney; Dipanjan Das; Ellie Pavlick", "journal": "", "ref_id": "b35", "title": "BERT rediscovers the classical NLP pipeline", "year": "2019-07" }, { "authors": "Ian Tenney; Patrick Xia; Berlin Chen; Alex Wang; Adam Poliak; Thomas Mccoy; Najoung Kim; Benjamin Van Durme; Samuel R Bowman; Dipanjan Das; Ellie Pavlick", "journal": "", "ref_id": "b36", "title": "What do you learn from context? 
probing for sentence structure in contextualized word representations", "year": "2019" }, { "authors": "Hugo Touvron; Louis Martin; Kevin Stone; Peter Albert; Amjad Almahairi; Yasmine Babaei; Nikolay Bashlykov; Soumya Batra; Prajjwal Bhargava; Shruti Bhosale; Dan Bikel; Lukas Blecher; Cristian Canton Ferrer; Moya Chen; Guillem Cucurull; David Esiobu; Jude Fernandes; Jeremy Fu; Wenyin Fu; Brian Fuller; Cynthia Gao; Vedanuj Goswami; Naman Goyal; Anthony Hartshorn; Saghar Hosseini; Rui Hou; Hakan Inan; Marcin Kardas; Viktor Kerkez; Madian Khabsa; Isabel Kloumann; Artem Korenev; Punit Singh Koura; Marie-Anne Lachaux; Thibaut Lavril; Jenya Lee; Diana Liskovich; Yinghai Lu; Yuning Mao; Xavier Martinet; Todor Mihaylov; Pushkar Mishra; Igor Molybog; Yixin Nie; Andrew Poulton; Jeremy Reizenstein; Rashi Rungta; Kalyan Saladi; Alan Schelten; Ruan Silva; Eric Michael Smith; Ranjan Subramanian; Ellen Xiaoqing; Binh Tan; Ross Tang; Adina Taylor; Jian Williams; Puxin Xiang Kuan; Zheng Xu; Iliyan Yan; Yuchen Zarov; Angela Zhang; Melanie Fan; Sharan Kambadur; Aurelien Narang; Robert Rodriguez; Sergey Stojnic; Thomas Edunov; Scialom", "journal": "", "ref_id": "b37", "title": "Llama 2: Open foundation and fine-tuned chat models", "year": "2023" }, { "authors": "Cunxiang Wang; Shuailong Liang; Yue Zhang; Xiaonan Li; Tian Gao", "journal": "", "ref_id": "b38", "title": "Does it make sense? and why? a pilot study for sense making and explanation", "year": "2019-07" }, { "authors": "Terry Winograd", "journal": "Cognitive Psychology", "ref_id": "b39", "title": "Understanding natural language", "year": "1972" }, { "authors": "Zhilin Yang; Peng Qi; Saizheng Zhang; Yoshua Bengio; William W Cohen; Ruslan Salakhutdinov; Christopher D Manning", "journal": "", "ref_id": "b40", "title": "Hotpotqa: A dataset for diverse, explainable multi-hop question answering", "year": "2018" }, { "authors": "Renrui Zhang; Jiaming Han; Chris Liu; Peng Gao; Aojun Zhou; Xiangfei Hu; Shilin Yan; Pan Lu; Hongsheng Li; Yu Qiao", "journal": "", "ref_id": "b41", "title": "Llama-adapter: Efficient fine-tuning of language models with zero-init attention", "year": "2023" }, { "authors": "Deyao Zhu; Jun Chen; Xiaoqian Shen; Xiang Li; Mohamed Elhoseiny", "journal": "", "ref_id": "b42", "title": "Minigpt-4: Enhancing visionlanguage understanding with advanced large language models", "year": "2023" }, { "authors": "B ", "journal": "", "ref_id": "b43", "title": "1 Task 0 1. Variation-2: For each question, we sampled 1 context object and 2 utility objects belonging to the same utility", "year": "" }, { "authors": "", "journal": "", "ref_id": "b44", "title": "Variation-3: For each question, we sampled 1 context object and 3 utility objects belonging to the same utility", "year": "" }, { "authors": "", "journal": "", "ref_id": "b45", "title": "Variation-4: For each question, we sampled 1 context object and 4 utility objects belonging to the same utility", "year": "" }, { "authors": "B ", "journal": "", "ref_id": "b46", "title": "2 Task 1 5 option datasets : 1. Variation-3: Random context object's Ideal Configuration + 4 randomly sampled sub-optimal configurations of same Task and Utility's different context object 4 option datasets : 1. 
Variation-4 : Random context object's Ideal Configuration + 3 randomly sampled sub-optimal configurations of the same context object", "year": "" }, { "authors": "", "journal": "", "ref_id": "b47", "title": "Variation-5: Random context Object's Ideal Configuration + 2 randomly sampled sub-optimal configurations of the same context object + 1 randomly sampled sub-optimal configurations of different context object belonging to the same <Task,Utility> combination 3. Variation-6: Random context Object's Ideal Configuration + 1 randomly sampled sub-optimal configurations of the same context object + 2 randomly sampled sub-optimal configurations of different context object belonging to the same <Task,Utility> combination 4. Variation-7: Random context object's Ideal Configuration + 3 randomly sampled sub-optimal configurations of the different context object belonging to the same <Task,Utility> combination 3 option datasets : 1. Variation-8 : Random context object's Ideal Configuration + 2 randomly sampled sub-optimal configurations of the same context object", "year": "" }, { "authors": "", "journal": "", "ref_id": "b48", "title": "Variation-9: Random context object's Ideal Configuration + 1 randomly sampled sub-optimal configuration of the same context object + 1 randomly sampled sub-optimal configuration of different context object belonging to the same <Task,Utility> combination 3. Variation-10: Random context object's Ideal Configuration + 2 randomly sampled sub-optimal configurations of the different context object belonging to the same <Task,Utility> combination Vicuna-13B Answer: B Reason", "year": "" }, { "authors": "Mistral7b Answer", "journal": "", "ref_id": "b49", "title": "B Reason: While a toaster is a convenient option", "year": "" } ]
[]
10.18653/v1/D19-1633
2023-11-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b0", "b0", "b12", "b8", "b2", "b7", "b2", "b7", "b4", "b5", "b9", "b1", "b9", "b1", "b13" ], "table_ref": [], "text": "Since the Transformer architecture was proposed (Vaswani et al. [2023]), large language models have achieved impressive results across natural language processing benchmarks (Brown et al. [2020]). However, these remarkable achievements were only made possible because of dramatic increases in the number of parameters or model sizes (Brown et al. [2020], Wei et al. [2022]), resulting in considerable memory requirements and greater processing times. This problem is further exacerbated by the fact that, at inference time, transformers are used auto-regressively: a new model call is needed for each generated token. This is especially problematic due to the memory-bandwidth cost of recurrently loading the model parameters and the past keys and values tensors (Shazeer [2019]).\nRecently, several works (Chen et al. [2023], Leviathan et al. [2023]) have proposed to reduce inference time by leveraging a smaller model to approximate generation from a larger model at a faster pace. The small model produces a few potential tokens, and the larger model evaluates all of the tokens at once in a single forward step. Importantly, the generation quality of the original large model is guaranteed by the rejection scheme that keeps only tokens that are generated with an identical distribution than the large model (Chen et al. [2023], Leviathan et al. [2023]). While effective in practice, this approach requires to deploy simultaneously two models that share the same vocabulary, creating memory and running time bottlenecks.\nAn alternative solution is to directly leverage the large model to generate multiple tokens at once, instead of generating them auto-regressively. This solution, called parallel decoding, can be implemented as a masked language model (Ghazvininejad et al. [2019]) or by copying the encoder input in the decoder in the context of encoder-decoder architectures (Gu et al. [2018]). These solutions have the advantage over speculative sampling of avoiding the need for a second model, but they require substantial changes to the Transformer architecture that make them not suitable as such for accelerating the decoding of a given pre-trained language model. In this work, we propose to combine the best of both directions in a variant of the speculative sampling that we call Parallel Speculative Sampling (PaSS). The idea is to generate candidate tokens via parallel decoding by adding a small number of \"look-ahead embeddings\" and generate output for each of these additional embeddings. This solution does not require a second model, nor modifications to the large language model. By design, our approach also generates at each step at least one token auto-regressively, guaranteeing the same loss-less quality of generations as speculating sampling methods. The memory overhead of adding the extra embeddings is O(d emb ) new weights, that need to be trained. This is several orders of magnitude smaller than any small model added by existing speculative sampling solutions. The most similar works to ours are Stern et al. [2018] and Cai et al. [2023] where they add look-ahead classification heads instead of embeddings, leading to a worse memory overhead of O(d emb K) where K is the vocabulary size. Additionally, Stern et al. [2018] focus solely on greedy decoding, and Cai et al. 
[2023] do not guarantee a loss-less decoding. Similarly, Zhang et al. [2023] also drop the second model, but still decode auto-regressively." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b2", "b7" ], "table_ref": [], "text": "First, in section 2.1, we briefly review the existing speculative sampling algorithm, as introduced by Chen et al. [2023] and Leviathan et al. [2023]. We then introduce our approach in section 2.2." }, { "figure_ref": [], "heading": "Speculative Sampling", "publication_ref": [ "b2", "b7" ], "table_ref": [], "text": "The goal of speculative sampling is to speed up the inference time of a target LLM. The core idea behind this algorithm is that it is significantly faster to compute a single forward pass on n tokens in parallel than n forward passes sequentially. To fulfil this objective, a second smaller and faster model, the drafter, is used to generate a candidate sequence of tokens. The length of the sequence is a hyper-parameter of the algorithm. The target LLM is then presented with all of the candidate tokens at once, in a single pass. The rejection scheme proposed by Chen et al. [2023] and Leviathan et al. [2023] guarantees that the distribution of the drafted tokens is the same as if they were generated in the first place by the target LLM. Additionally, due to the rejection scheme, one more token can be sampled after the sequence of accepted tokens from the logits gathered during the iteration model call. This ensures that, even if all draft tokens were rejected, the model call would still be of use. The steps are presented in detail in algorithm 2 of the Appendix." }, { "figure_ref": [], "heading": "Parallel Speculative Sampling", "publication_ref": [], "table_ref": [], "text": "We propose a modified version of speculative sampling based on parallel decoding, that does not require a second model. The steps of our method are detailed in algorithm 1, but, importantly, each iteration of our algorithm requires two calls of the LLM:\n• Drafting phase: we call the model once to produce multiple tokens simultaneously using parallel decoding through look-ahead embeddings (see Sec. 2.2.1). The first generated token is not part of the draft to match the distribution in case of rejection. • Validation phase: we call the model a second time to validate the draft (see Sec. 2.1). We sample a new token at the end of the sequence of accepted tokens, with no new model call.\nThe key behind our algorithm is that every call to the LLM adds at least one token to the final sequence of generated tokens. This guarantees that the algorithm is at least as fast as generating from the LLM directly, and it also guarantees that we produce a correct sequence of tokens even in the case where additional tokens are rejected. On top of this, our algorithm can produce and accept at each iteration, multiple additional tokens, leading to a guaranteed speed up. Overall, the standard auto-regressive wall time of the target LLM is a lower bound to our method, while the upper bound is a speed-up of (L + 2)/2×, where L is the number of tokens generated in the drafting phase. Our approach leverages the fact that the time required to process a single token or a small sequence of tokens does not differ significantly. This is because auto-regressive generation is mostly memory-bound, and processing additional tokens is thus negligible. 
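A minimal sketch of the validation phase is given below. It implements the standard loss-less rejection scheme of speculative sampling that PaSS reuses, written against abstract probability arrays; the function name, the tensor layout, and the assumption that the draft distribution at each look-ahead position is available as a dense vector are illustrative choices, not the authors' implementation.

```python
import numpy as np

def validate_draft(target_probs, draft_probs, draft_tokens, rng):
    """Loss-less validation of drafted tokens via the standard rejection scheme.

    target_probs: array [L + 1, V], the target model's next-token distributions at
                  each drafted position plus one extra row used to sample a bonus
                  token when every draft is accepted.
    draft_probs:  array [L, V], the distributions the draft tokens were sampled
                  from (for PaSS, the look-ahead distributions).
    draft_tokens: list of L token ids proposed during the drafting phase.
    Returns the accepted tokens plus exactly one freshly sampled token, so every
    iteration makes progress even if all drafts are rejected.
    """
    out = []
    vocab = target_probs.shape[1]
    for t, tok in enumerate(draft_tokens):
        p = target_probs[t, tok]             # target probability of the draft token
        q = max(draft_probs[t, tok], 1e-20)  # draft probability (guard against zero)
        if rng.random() < min(1.0, p / q):
            out.append(int(tok))             # accepted: matches the target distribution
        else:
            # Rejected: resample from the residual max(0, target - draft), which
            # restores the exact target distribution at this position, then stop.
            residual = np.clip(target_probs[t] - draft_probs[t], 0.0, None)
            residual /= residual.sum()
            out.append(int(rng.choice(vocab, p=residual)))
            return out
    # All L drafts were accepted: one extra token comes for free from the last logits.
    out.append(int(rng.choice(vocab, p=target_probs[-1])))
    return out
```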
" }, { "figure_ref": [], "heading": "Look-ahead embeddings", "publication_ref": [], "table_ref": [], "text": "The target LLM is not trained to predict multiple tokens at once, and expect as input, the previously generated token. In order to build this ability in the target LLM, we introduce \"look ahead\" tokens, [LA] i for each look ahead position i for 1 to L. A sequence of these tokens is added at the end of the input sequence and defines the number of steps that the model will predict ahead. In other words, we replace the original input sequence of tokens (w 1 , . . . , w T ) by a sequence with L additional tokens, that is (w 1 , . . . , w T , [LA] 1 , . . . , [LA] L ). This sequence is then processed with a single forward pass of the target model and produces for each new position a token from the original dictionary, i.e., without the extra [LA] i tokens. This approach only requires learning the embeddings associated with the new tokens on a small training dataset and has a memory overhead of Ld emb parameters." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b10" ], "table_ref": [], "text": "We test our method on two tasks: text and code completion. For each task, we use different nonoverlapping datasets for the training of the look-ahead embeddings and the evaluation of our approach (Sec. 3.1). We also briefly describe baselines in Sec. 3.2 and report main results in Sec. 3.3. All the experiments are run with a re-implementation of the 7B LLaMa model (Touvron et al. [2023])." }, { "figure_ref": [], "heading": "Data", "publication_ref": [ "b6", "b3" ], "table_ref": [], "text": "We use the 2023/02/20 English Wikipedia dump (Wikimedia Foundation) for text completion and the Python split of The Stack corpus (Kocetkov et al. [2022]) for code completion. We divide each dataset into training and test split. For the evaluation, we randomly sample 200 examples from the test split and use the 32 first tokens as prompts. The maximum length for the generation is fixed at 512 tokens for all our experiments. We use TOP-K SAMPLING, with k = 10 and a temperature of 0.8 unless said otherwise. For code completion, we also evaluate on the HumanEval benchmark [Chen et al., 2021], to validate that, as expected, our algorithm does not degrade the quality of generation." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [], "table_ref": [], "text": "We compare our method with two baselines. Autoregressive generation: this baseline consists of autoregressively generating tokens from the LLM. We sample one token at a time using the KV cache for speedup.\n[UNK] as look-ahead token: we apply our method with a fixed [UNK] token instead of trained look-ahead embeddings. We use the KV cache and update it after every model call according to the number of drafted tokens and the number of accepted tokens." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_1", "tab_1" ], "text": "Impact for different sampling schemes. On the left panel of Table 1, we compare the running time of our approach with the two baselines for different sampling schemes. We vary the temperature of the sampler from high variance in the generations (high temperature) or low variance (low temperature).\nAs expected, the speed-up is more important for lower temperatures, where the distribution of tokens is more peaky and easier to predict with an approximated scheme like PaSS. We observe almost no gain compared to auto-regressive generation when using [UNK]. 
Compared to the speed-up provided by PaSS, this shows that the finetuning of the look-ahead embeddings captures important information to predict future tokens.\nImpact of the number of look ahead embeddings. On the right panel of Table 1, we measure the impact of the number of look-head steps on our approach. Running time decreases steadily up to 6 look-ahead steps, but more look-ahead steps annihilate the benefits of this approach.\nImpact of PaSS on final performance. Finally, in Table 2, we confirm that our decoding does not impact the performance of the model on 2 different generating tasks. We only observe changes in performance that are below the margin of error, while improving the running time by up to 30%.\nTable 2: Average time for generating one completion on the HumanEval dataset, as well as the PASS@N metric. Following previous work, we use a temperature of 0.1 for PASS@1 and a temperature of 0.8 for PASS@10. We use k = 25 for PASS@10. For PaSS, we use 4 look-ahead tokens.\nPASS@1 PASS@10 Time Perf. Time Perf.\nAuto-regressive 10.52 sec 13.2 % 10.15 sec 22.5 % PaSS 7.17 sec 13.4 % 8.17 sec 22.5 %" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We presented the parallel speculative sampling (PaSS) algorithm, a variant of the speculative sampling algorithm that does not require a second draft model: tokens are drafted in parallel through the use of masked-decoding via fine-tuned look-ahead embeddings. We showed that our method achieves significant speed-ups (up to 30%) by only learning as little as O(d emb ) additional weights. In future work, we want to explore how to improve the quality of parallel generation with look-ahead tokens, as we believe this is the most promising direction to improve performance of the PaSS algorithm." }, { "figure_ref": [], "heading": "Training details", "publication_ref": [], "table_ref": [], "text": "For all our trainings (and evaluations), we load models in bfloat16. We freeze all the models parameters except for the new embeddings. For each batch, we randomly select a position where to insert the look-ahead embeddings and compute the loss only on the corresponding outputs. Before training, we initialize the new embeddings with the UNK token embedding. We use the AdamW optimizer, with a learning rate of 0.01 and a batch size of 8 sequences. We train for 10k additional steps, with 2k warmup steps and a cosine learning rate schedule." } ]
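The training recipe above (freeze all base parameters, learn only the L look-ahead embeddings, and initialize them from the [UNK] embedding) can be sketched as follows in PyTorch. This is a schematic reconstruction, not released code: `model` stands for any decoder-only language model exposing `get_input_embeddings()`, and the way the look-ahead vectors are appended to the prompt embeddings is an assumption consistent with the description in Section 2.2.1.

```python
import torch
import torch.nn as nn

class LookAheadEmbeddings(nn.Module):
    """L trainable 'look-ahead' vectors appended after the prompt embeddings."""

    def __init__(self, base_embeddings: nn.Embedding, num_lookahead: int, unk_id: int):
        super().__init__()
        # Initialize every look-ahead embedding from the [UNK] token embedding.
        init = base_embeddings.weight[unk_id].detach().clone()
        self.lookahead = nn.Parameter(init.unsqueeze(0).repeat(num_lookahead, 1))

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: [batch, seq_len, d_emb]; append the L look-ahead vectors.
        # The result is then fed to the frozen model (e.g. through an inputs_embeds path).
        batch = token_embeddings.shape[0]
        extra = self.lookahead.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([token_embeddings, extra], dim=1)

def trainable_parameters(model: nn.Module, lookahead: LookAheadEmbeddings):
    """Freeze the base model; only the O(L * d_emb) new weights are optimized."""
    for p in model.parameters():
        p.requires_grad_(False)
    return lookahead.parameters()

# Usage sketch (the ids and hyperparameters below are illustrative):
# lookahead = LookAheadEmbeddings(model.get_input_embeddings(), num_lookahead=4, unk_id=0)
# optimizer = torch.optim.AdamW(trainable_parameters(model, lookahead), lr=0.01)
```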
Scaling the size of language models to tens of billions of parameters has led to impressive performance on a wide range of tasks. At generation, these models are used auto-regressively, requiring a forward pass for each generated token, and thus reading the full set of parameters from memory. This memory access forms the primary bottleneck for generation and it worsens as the model size increases. Moreover, executing a forward pass for multiple tokens in parallel often takes nearly the same time as it does for just one token. These two observations lead to the development of speculative sampling, where a second smaller model is used to draft a few tokens, that are then validated or rejected using a single forward pass of the large model. Unfortunately, this method requires two models that share the same tokenizer and thus limits its adoption. As an alternative, we propose to use parallel decoding as a way to draft multiple tokens from a single model with no computational cost, nor the need for a second model. Our approach only requires an additional input token that marks the words that will be generated simultaneously. We show promising performance (up to 30% speed-up) while requiring only as few as O(d_emb) additional parameters.
PaSS: Parallel Speculative Sampling
[ { "figure_caption": "Algorithm 1 Parallel Speculative Sampling (PaSS) with Parallel Look-ahead Embeddings Given L look-ahead tokens [LA] 1 , . . . , [LA] L and minimum target sequence length T . Given auto-regressive target model q(.|.) and initial prompt sequence x0, . . . , xt. Initialise n ← t. while n < T do In parallel, sample the next token xn+1 and L draft tokens x1, . . ., xL: If all L tokens xn+1, . . . , xn+L are accepted, sample extra token xn+L+1 ∼ q(x|x1, . . . , xn+L) and set n ← n + 1. end while", "figure_data": "xn+1 ∼ q(x|x1, . . . , xn), x1 ∼ q(x|x1, . . . , xn, [LA] 1 ), . . . , xL ∼q(x|x1, . . . , xn, [LA] 1 , . . . , [LA] L )Set n ← n + 1In parallel, compute L + 1 sets of logits from drafts x1, . . ., xL:q(x|x1, . . . , xn), q(x|x1, . . . , xn, x1), . . . , q(x|x1, . . . , xn, x1, . . . , xL)for t = 1 : L doSample r ∼ U [0, 1] from a uniform distribution.q(xt|x1, . . . , xn-1, . . . , xn+t-1) q(xt|x1, . . . , xn-1, [LA] 1 , . . . , [LA] and Exit for loop. if r < min 1,end ifend for", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Time for generating a sequence of length 512 tokens, given a prompt of 32 tokens, as a function of temperature (left) and number of look-ahead tokens (right). We use 4 look-ahead tokens unless said otherwise. The results reported in the right table are on The Stack data.", "figure_data": "The StackWikipedia# LA tokens TimeTemperature0.80.50.20.80.50.2210.03Auto-regressive [UNK] look-ahead 12.25 12.43 12.26 12.30 12.16 11.88 12.52 12.69 12.72 12.45 12.30 12.55 PaSS 9.79 9.46 8.96 10.23 9.78 9.434 6 89.79 9.66 9.94", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" } ]
Giovanni Monea; Edouard Grave
[ { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mccandlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b0", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Tianle Cai; Yuhong Li; Zhengyang Geng; Hongwu Peng; Tri Dao", "journal": "", "ref_id": "b1", "title": "Medusa: Simple framework for accelerating llm generation with multiple decoding heads", "year": "2023" }, { "authors": "Charlie Chen; Sebastian Borgeaud; Geoffrey Irving; Jean-Baptiste Lespiau; Laurent Sifre; John Jumper", "journal": "", "ref_id": "b2", "title": "Accelerating large language model decoding with speculative sampling", "year": "2023" }, { "authors": "Mark Chen; Jerry Tworek; Heewoo Jun; Qiming Yuan; Henrique Ponde De Oliveira Pinto; Jared Kaplan; Harri Edwards; Yuri Burda; Nicholas Joseph; Greg Brockman", "journal": "", "ref_id": "b3", "title": "Evaluating large language models trained on code", "year": "2021" }, { "authors": "Marjan Ghazvininejad; Omer Levy; Yinhan Liu; Luke Zettlemoyer", "journal": "", "ref_id": "b4", "title": "Mask-predict: Parallel decoding of conditional masked language models", "year": "2019-11" }, { "authors": "Jiatao Gu; James Bradbury; Caiming Xiong; O K Victor; Richard Li; Socher", "journal": "", "ref_id": "b5", "title": "Non-autoregressive neural machine translation", "year": "2018" }, { "authors": "Denis Kocetkov; Raymond Li; Loubna Ben Allal; Jia Li; Chenghao Mou; Carlos Muñoz Ferrandis; Yacine Jernite; Margaret Mitchell; Sean Hughes; Thomas Wolf; Dzmitry Bahdanau; Leandro Von Werra; Harm De Vries", "journal": "", "ref_id": "b6", "title": "The stack: 3 tb of permissively licensed source code", "year": "2022" }, { "authors": "Yaniv Leviathan; Matan Kalman; Yossi Matias", "journal": "", "ref_id": "b7", "title": "Fast inference from transformers via speculative decoding", "year": "2023" }, { "authors": "Noam Shazeer", "journal": "", "ref_id": "b8", "title": "Fast transformer decoding: One write-head is all you need", "year": "2019" }, { "authors": "Mitchell Stern; Noam Shazeer; Jakob Uszkoreit", "journal": "", "ref_id": "b9", "title": "Blockwise parallel decoding for deep autoregressive models", "year": "2018" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurelien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b10", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b11", "title": "Attention is all you need", "year": "2023" }, { "authors": "Jason Wei; Yi Tay; Rishi Bommasani; Colin Raffel; Barret Zoph; Sebastian Borgeaud; Dani Yogatama; Maarten Bosma; Denny Zhou; Donald Metzler; Ed H Chi; Tatsunori Hashimoto; Oriol Vinyals; Percy Liang; Jeff Dean; William Fedus", "journal": "Wikimedia Foundation", "ref_id": "b12", "title": "Emergent abilities of large language models", "year": "2022" }, { 
"authors": "Jun Zhang; Jue Wang; Huan Li; Lidan Shou; Ke Chen; Gang Chen; Sharad Mehrotra", "journal": "", "ref_id": "b13", "title": "Draft & verify: Lossless large language model acceleration via self-speculative decoding", "year": "2023" } ]
[]
2023-11-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b21", "b28", "b35", "b30", "b16", "b11" ], "table_ref": [], "text": "Data volumes have grown exponentially in recent years, causing deep neural networks (DNNs) to become one of the main components of machine learning and artificial intelligence in a diverse range of settings. Recent advancements in complex DNN architectures have pushed the state of the art beyond what was previously thought possible for applications in natural language processing, recommendation systems, and computer vision. Neural scaling laws pre-dict that increased performance can come from dramatic increases in data size, model size, training cost and other parameters (Alabdulmohsin et al.).\nHowever, the dramatic increase in data scale has created a computational bottleneck in terms of time, energy, and storage. It is costly to train even a simple model on datasets of the scale typically encountered in scientific and industrial settings. Many applications require dedicated, specialized infrastructure to train and run models. Consider a standard click-through prediction task, where a model must predict whether a user will click on an advertisement. Industry research teams report that such tasks can easily reach the scale of a billion events per day (McMahan et al., 2013). Training a model on the complete dataset is infeasible without considerable resources and expense.\nData selection is a popular approach to handle this problem. The idea has been independently studied in many contexts. For example, active learning seeks to define a selection process where data are selectively labeled (Settles, 2012). Coresets and sketches seek to reduce the scale of the data while preserving important metrics -such as the loss -within an ϵ approximation (Phillips, 2017). In statistics, a process known as importance sampling can substantially reduce the sample complexity of estimating an unknown quantity. A sought-after goal of the optimization literature has been to use importance sampling to accelerate SGD (Zhao & Zhang, 2015). Recently, Sorscher et al. (2022) demonstrated that data pruning can break the barrier of the neural scaling laws. Their central observation is that neural network training can be significantly accelerated by a sampling process that ranks training data examples by a high-quality \"pruning metric.\"\nA variety of pruning metrics have been investigated by the community. We observe that these metrics mainly reduce to approximations of the gradient norm as the importance score. This is unsurprising, given that the optimal SGD sampling distribution is known to be proportional to the gradient norm. However, this introduces a problem: the gradient depends on the model parameters. We are presented with two options. We may downsample statically, scoring each point independently of the network parameters, or dynamically, by scoring points according to metrics derived from the current network state. Dynamic sampling naturally results in better accuracy and better iteration-wise convergence. How-ever, these approaches are prohibitively expensive and can degrade the end-to-end performance.\nWe seek a way to sample from the subset of high-gradient points at a given training iteration. Fortunately, the gradient norm correlates strongly with the loss, leading to several related approaches. 
For example, selective backpropagation computes the loss of every point on the full network, but only performs the gradient computation for points with loss exceeding a threshold (Jiang et al., 2019). Linear regression models have recently been proposed to predict the loss of each point for use in the sampling process, with excellent results (Ganapathiraman et al., 2022). We view these approaches as extremes on a computation-accuracy tradeoff between our ability to estimate the loss and the end-to-end cost of doing so. In this work, we propose a technique that greatly enhances representation capability while reducing cost when compared with forward propagation through the network." }, { "figure_ref": [], "heading": "Our Contributions:", "publication_ref": [], "table_ref": [], "text": "We make the following concrete contributions.\n1. We pose the problem of score estimation as a regression task, where we wish to learn a model that assigns a score to each point in the data.\n2. We develop a novel, sketch-based approximation of the Nadaraya-Watson estimator which we call the Nadaraya-Watson sketch (NWS). This sketch may be of independent interest, as it provably approximates the kernel regression model with O(N d) training and O(1) inference complexity.\n3. Using the NWS, we develop an importance sampling distribution that predicts the loss of the network. By scheduling updates to the NWS, our distribution adapts to the changing network parameters throughout the dynamics of training.\n4. We demonstrate in experiments that our scheme is adaptive and outperforms the baseline in terms of accuracy and wall-clock time on four datasets." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "To develop our proposal, we combine recent ideas from density estimation and randomized algorithms with classical techniques in nonparametric regression. In this section, we provide a brief exposition of the components of our proposal." }, { "figure_ref": [], "heading": "Nonparametric Regression", "publication_ref": [], "table_ref": [], "text": "We consider the classical nonparametric regression setting where we are presented with data {x 1 , ...x N } and outputs {y 1 , ...y N } generated according to\ny i = f (x i ) + ϵ i\nwhere ϵ 1 , ...ϵ N are independent residuals with E[ϵ i ] = 0. We wish to estimate f from the data, which we can do by computing\nE[y|x] because E[y i |x i ] = E[f (x i )] + E[ϵ i ] = f (x i ).\nThe conditional probability p(y|x) can be expressed in terms of the joint and marginal probabilities, as follows.\nE[y|x] = y p(x, y) p(x)" }, { "figure_ref": [], "heading": "dy", "publication_ref": [ "b23", "b13", "b14" ], "table_ref": [], "text": "The classical Nadaraya-Watson estimator (Nadaraya, 1964) is obtained by using kernel density estimation to approximate the distributions p(x, y) and p(x). Given a kernel k(x, y), we estimate f using a ratio of weighted kernel sums.\nf (x) = i y i k(x, x i ) i k(x, x i )(1)\nThe Nadaraya-Watson estimator is known to be pointwise consistent when E[Y 2 ] < ∞ and the kernel satisfies the properties specified by Greblicki et al. (1984). Specifically, the kernel k(x, y) must have a bandwidth h such that as N → ∞, h N → 0 and N h d → ∞. Stronger guarantees are possible given further assumptions on the problem. For example, if the kernel (or dataset) have compact support then we can attain uniform consistency (Györfi et al., 2002)." 
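For reference, the exact Nadaraya-Watson estimator of Equation 1 can be computed directly as below; this naive version costs O(Nd) per query, which is the per-query cost the sketch introduced later avoids. The Gaussian kernel and the bandwidth value are illustrative choices, not ones prescribed here.

```python
import numpy as np

def nadaraya_watson(X_train, y_train, x_query, bandwidth=1.0):
    """Exact kernel regression f(x) = sum_i y_i k(x, x_i) / sum_i k(x, x_i)."""
    # Squared distances from the query to every training point.
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    weights = np.exp(-d2 / (2.0 * bandwidth ** 2))  # Gaussian kernel values k(x, x_i)
    denom = weights.sum()
    return float(weights @ y_train / denom) if denom > 0 else 0.0

# Toy check: estimating y = sin(x) from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
print(nadaraya_watson(X, y, np.array([1.0]), bandwidth=0.3))  # close to sin(1) ~ 0.84
```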
}, { "figure_ref": [], "heading": "Locality-Sensitive Hashing", "publication_ref": [ "b15", "b4", "b12" ], "table_ref": [], "text": "We will estimate the numerator and denominator of the Nadaraya-Watson kernel estimator using recent techniques from randomized algorithms for kernel density estimation. These techniques rely on a particular kind of hash function known as a locality-sensitive hash (LSH).\nLSH Functions: An LSH family F is a family of functions l(x) : R d → Z that map similar points to same hash value (Indyk & Motwani, 1998). We say that a collision occurs whenever two points have the same hash code, i.e. l(x) = l(y). Definition 1.1. A hash family F is locality-sensitive with collision probability k(•, •) if for any two points x and y, l(x) = l(y) with probability k(x, y) under a uniform random selection of l(•) from F.\nLSH Kernels: When the collision probability k(x, y) is a monotone decreasing function of the distance metric dist(x, y), it is well-known that k is a radial kernel function (Coleman & Shrivastava, 2020). We say that a kernel function k(x, y) is an LSH kernel if it forms the collision probability for an LSH family (i.e. it satisfies the conditions described by Chierichetti & Kumar (2012)). A number of well-known LSH families induce useful kernels (Gionis et al., 1999)." }, { "figure_ref": [], "heading": "RACE Sketch", "publication_ref": [ "b18", "b31", "b34", "b8", "b19", "b6" ], "table_ref": [], "text": "LSH kernels are interesting because there is a family of efficient algorithms based on histograms with randomized partitions to estimate the quantity\ng(x) = xi∈D k(x i , x)\nwhen k(x i , x) is a hashable kernel (Lei et al., 2021;Ting et al., 2021). Due to the broad utility of kernel sums in statistical estimation, these algorithms have found application in wide-ranging applications such as WiFi localization (Xu et al., 2021), and genomics (Coleman et al., 2022). However, they all implement the same core method, which we describe here.\nWe begin by constructing a sketch S ∈ Z R×W , a 2D array of integers. Each row of the sketch is indexed using a hash function that assigns a column (or histogram bucket) to an input. This array is sufficient to report an estimate of g(x) for any query x. To construct the sketch, we create R independent hash functions {h 1 , ..h R } -one for each row. For each element x i ∈ D, we increment the corresponding bucket of the sketch. The approximation of g(x) can be done via averaging over the buckets selected by {h 1 (x), ..h R (x)} (Luo & Shrivastava, 2018) or by using more complex estimation processes such as median-of-means. With the median-ofmeans estimator, we have the following guarantee (Coleman & Shrivastava, 2021).\nTheorem 1.2. Let ĝ(x) be the median-of-means estimate using the RACE sketch with R rows and let g(x) = xi∈D k(x i , x). Then with probability at least 1 -δ,\n|ĝ(x) -g(x)| ≤ 32 g2 (x) R log 1/δ 1/2" }, { "figure_ref": [], "heading": "Algorithm", "publication_ref": [], "table_ref": [], "text": "Algorithm 1 implements the Nadaraya-Watson estimator via a composition of sketches. We refer to the result as the Nadaraya-Watson sketch (NWS). We begin by describing the design of the NWS and prove error bounds on the approximation error. Then, we proceed to describe how to use the sketch as a subroutine of our importance sampling process to accelerate the training of deep learning models." 
}, { "figure_ref": [], "heading": "Theory", "publication_ref": [], "table_ref": [], "text": "In this section, we prove that Algorithm 1 produces a sketch that can estimate Equation 1 with exponentially-bounded error. Observe that Algorithm 1 produces two sketches using the same hash functions. The expected value of the top sketch S t is the numerator of the Nadayara-Watson estimator, while the expected value of the bottom sketch\nS b Algorithm 1 Construct NWS input Dataset D = {(x i , y i )}, LSH family F, sketch pa- rameters R and W output Sketch S ∈ Z R×W ×2 1: Initialize S t , S b ∈ Z R×W = 0 2: Construct R hash functions H = {h 1 , ...h R } ∼ F 3: for (x i , y i ) ∈ D do 4: for h r ∈ H do 5: Increment S t [r, h r (x i )] by y i 6: Increment S b [r, h r (x i )] by 1 7:\nend for 8: end for\n9: return S = [S t , S b ]\nis the denominator. We will consider bounds on the ratio S t (x)/S b (x).\nThere are a few subtle design decisions involved with this estimator. First, there are two ways to compute the ratio. One method is to apply the median-of-means process to S t and S b independently, and then divide the results. The other way is to perform these steps in reverse order by dividing each row of S t and S b and applying median-of-means to the resulting R ratios. We choose to implement the first method because the second one introduces a non-trivial bias term in estimating Equation 1. Second, division by zero can occur whenever S b = 0. However, we observe that when S b = 0, S t is also 0 allowing us to correctly return 0 in this case. Therefore, we exclude this case and consider all expectations in the following analysis to be conditioned on the event that S b > 0 (we omit the notation for the sake of readability). We also suppose that y is bounded. This assumption is standard in the literature and necessary to have bounded variance; see Theorem 3 of Coleman et al. (2020). Theorem 2.1. Let S t (x) and S t (x) be the median-of-means estimates over the sketches in Algorithm 1 and let f (x) be the Nadaraya-Watson estimator. Assuming that y ∈ [-B, B], we have the following guarantee.\nPr S t S b -f (x) ≤ ϵ ≥ 1 -e -Rϵ 2 /32B 2 (B+1+ϵ) 2\nProof. Let g t (x) be the numerator and g b (x) be the denominator of Equation 1. With R columns, we have the following two guarantees:\nPr[|S t (x) -g t (x)| > ϵ] ≤ e -Rϵ 2 /32g 2 t (x) Pr[|S b (x) -g b (x)| > ϵ] ≤ e -Rϵ 2 /32g 2 b (x)\nWe make two observations. First, note that S t (x) and S b (x) can be expressed as the inner products ⟨y, 1(x)⟩ and ⟨1, 1(x)⟩, where y = [y 1 , ...y N ] and\n1(x) = R r=1 [1 {hr(x1)==x} , ...1 {hr(x N )==x} ]\nBecause S t (x) and S b (x) are both functions of the same underlying random variable, we do not need to bound the probability for both events. In particular, if\n|S b (x) -g b (x)| < ϵ and y i ∈ [-B, B], then |S t (x) -g t (x)| ≤ B|S b (x) - g b (x)| < Bϵ. Therefore, if we satisfy |S b (x) -g b (x)| < B -1 ϵ, we will have both |S b (x) -g b (x)| < ϵ and |S b (x) - g b (x)| < ϵ.\nThis leads to the following inequality, where we omit the dependence on x for the sake of clarity.\n-ϵ < S t -g t < ϵ => -ϵ + g t < S t < ϵ + g t -ϵ < S b -g b < ϵ => -ϵ + g b < S b < ϵ + g b Pr g t -ϵ g b + ϵ ≤ S t S b ≤ g t + ϵ g b -ϵ ≤ 1 -e -Rϵ 2 /32B 2 g 2 b\nTo obtain the final inequality, we observe that\ng t -ϵ g b + ϵ = g t g b -ϵ g t + g b g 2 b + ϵg b ≥ g t g b -ϵ B + 1 g b + ϵ g t + ϵ g b -ϵ = g t g b + ϵ g t + g b g 2 b -ϵg b ≤ g t g b + ϵ B + 1 g b + ϵ\nwhere the inequalities follow from |g t (x)| ≤ Bg b (x). 
This leads to\nPr[ |S_t/S_b - g_t/g_b| ≤ ϵ' ] ≥ 1 - e^{-Rϵ^2 / (32 B^2 g_b^2)}, where ϵ' = ϵ (B + 1)/(g_b + ϵ).\nReplacing ϵ = ϵ' g_b / (B + 1 - ϵ') results in\nPr[ |S_t/S_b - f(x)| ≤ ϵ' ] ≥ 1 - e^{-Rϵ'^2 / (32 B^2 (B+1-ϵ')^2)}\nTheorem 2.1 can be used to design a sketch for a given error ϵ and failure rate δ. Corollary 2.2 demonstrates how to set the parameters to have additive pointwise error with high probability.\nCorollary 2.2. The Nadaraya-Watson sketch must have\nR = O(B^4 / ϵ^2)\nrows to have additive error ϵ.\nProof. We require the condition in Theorem 2.1 to hold with probability ≥ 1 - δ. Therefore\ne^{-Rϵ^2 / (32 B^2 (B+1+ϵ)^2)} ≤ δ\nThis implies the following inequalities.\nRϵ^2 / (32 B^2 (B + 1 + ϵ)^2) ≥ log 1/δ\nR ≥ (32 B^2 (B + 1 + ϵ)^2 / ϵ^2) log 1/δ\nR ≥ (32 B^2 (B + 2)^2 / ϵ^2) log 1/δ\nwhere the final inequality holds under the assumption that ϵ < 1." }, { "figure_ref": [], "heading": "Validation Study", "publication_ref": [], "table_ref": [], "text": "In this section, our aim is to determine the extent to which the NWS approximates the output of the Nadaraya-Watson kernel regression model. We also demonstrate that the NWS is a reasonable model for regression and classification tasks." }, { "figure_ref": [ "fig_0" ], "heading": "EMPIRICAL AND THEORETICAL ERROR", "publication_ref": [ "b10" ], "table_ref": [], "text": "Theorem 2.1 suggests that |ϵ| ≤ O(1/√R) with high probability. In particular:\nϵ^2 = 32 B^2 (B + 1 - ϵ)^2 log(1/δ) / R ≤ 32 B^2 (B + 1)^2 log(1/δ) / R\nTherefore, we have the following error bound with probability 1 - δ:\n|ϵ| ≤ B (B + 1) √(32 log(1/δ) / R) = O(1/√R)\nTo empirically validate this upper bound, we conducted an error study with the Microsoft Research Paraphrase Corpus (MRPC) dataset (Dolan & Brockett, 2005). For a full description of the dataset, see the Experiments section. We calculated the ground-truth values of the Nadaraya-Watson kernel model using the training data and computed the error for each sample of the test data. We use the SRP LSH kernel with 10 bits, and we vary the sketch size R to see whether the error obeys our bound. Figure 1 shows the 99th percentile of the empirical error at each value of R (right) and the full distribution of errors (left). These results show that our sketch has the correct asymptotic behavior predicted by our theoretical results and is bounded by the O(1/√R) curve. " }, { "figure_ref": [], "heading": "NWS FOR REGRESSION TASK", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "To demonstrate that the NWS sketch is a useful model, we apply NWS to standard regression datasets. Table 1 shows the comparison of NWS with linear regression on three UCI regression datasets. Note that the performance of the NWS improves as we increase the sketch size R, further confirming our theoretical analysis. " }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Adaptive Sampling via the Sketch", "publication_ref": [], "table_ref": [], "text": "Our validation study demonstrates that the NWS is a reasonable and efficient learning algorithm. In this section, we use the NWS as an online algorithm to predict the importance of an example to the model training process. This is done by fitting the NWS to the sequence of losses observed during training. Because the NWS is a non-linear, non-parametric model, it is able to model the non-convex loss landscape of the model under training. Our proposed method is a dynamic sampling scheme since it uses the model parameters to estimate the loss, yet it is computationally efficient (O(1)) and independent of the number of data points. 
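As a quick numerical illustration of what the constants in Corollary 2.2 imply, the short calculation below plugs example values into the sufficient condition R ≥ 32 B^2 (B + 2)^2 / ϵ^2 · log(1/δ) from the proof. The particular values of B, ϵ, and δ are made up for illustration; as the experiments later in the paper show, far smaller sketches (e.g., R = 200) already work well in practice, since this constant is a worst-case bound.

```python
import math

def rows_needed(B: float, eps: float, delta: float) -> int:
    # Sufficient number of rows from the proof of Corollary 2.2 (valid for eps < 1):
    # R >= 32 * B^2 * (B + 2)^2 / eps^2 * log(1 / delta)
    return math.ceil(32 * B**2 * (B + 2)**2 / eps**2 * math.log(1 / delta))

# Example: losses bounded in [-1, 1] (B = 1), additive error 0.25, failure probability 5%
print(rows_needed(B=1.0, eps=0.25, delta=0.05))  # about 13,800 rows under this worst-case constant
```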
The proposed method consists of three main steps, as shown in Figure 2.\nWarm-up phase: To initialize the NWS array, we do not down-sample data for the first few iterations. As Figure 2 shows, in the warm-up step, we pass the first few batches of data through the network, compute their loss, and add their loss values to the S t sketch in the numerator of the NWS. The S b sketch in the denominator of the NWS also stores the number of data points. From now on, we call the S t and S b sketches the weighted (as it stores the loss values) and unweighted sketches, respectively.\nLoss Estimation phase: After the warm-up phase, we query the NWS with the incoming data batch to retrieve their weighted and unweighted scores. The estimated loss value for each data point is its weighted score divided by the unweighted score. In other words, we are estimating the loss via kernel density estimation.\nSampling phase: We wish to keep samples with higher loss values and discard the ones with lower loss values, since a low estimated loss implies that the network has already seen similar data instances. Therefore, we apply importance sampling on the estimated loss values to sample each data point with acceptance probability p i ; each accepted sample is then reweighted by the associated weight w i to debias the loss." }, { "figure_ref": [], "heading": "Sampling Experiments", "publication_ref": [ "b10", "b20", "b27", "b9" ], "table_ref": [ "tab_1" ], "text": "In this section, we empirically benchmark the performance of our proposed algorithm against the baseline. The baseline is conventional training without subsampling, while our proposed algorithm computes the kernel density estimate of the loss distribution via the NWS and dynamically estimates the loss values for data points. Our algorithm is dynamic and adaptive to the constantly changing loss landscape, yet computationally efficient. We evaluate our framework and the baseline on four datasets covering two tasks.\nDatasets: The MRPC dataset (Dolan & Brockett, 2005) is an entailment task dataset which consists of a corpus of sentence pairs collected from news articles, where each pair is labeled positive if the sentences are paraphrases. Twitter-financial-news and Financial-phrasebank (Malo et al., 2014) are financial sentiment analysis datasets. For Financial-phrasebank, each sentence is classified from an investor's point of view, e.g., how the news may impact the stock price; for the Twitter dataset, finance-related tweets are classified based on their sentiment. The Sentiment140 dataset is also a sentiment analysis dataset that classifies the sentiment of general tweets. The statistics of the datasets are shown in Table 2.\nArchitecture and Hyperparameters: For the Sentiment140 dataset we utilize the pre-trained DistilBERT model (Sanh et al., 2019), and for the rest of the datasets we utilize the pre-trained BERT model (Devlin et al., 2018), adding a classifier head to adapt the model to the classification task.\nWe fine-tune the model on each dataset by retraining the whole model. The optimizer is Adam with a learning rate of 0.00002 for all datasets. To use the hash function we need a vector representation of the data. Therefore, we use the representation of each data point at the output of the BertPooler layer.\nWe use the sign random projection (SRP) hash function with R = 200 repetitions for all datasets. The number of warm-up iterations is 50 for the MRPC and Financial-phrasebank datasets, and 100 for the Sentiment140 and Twitter datasets. 
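To show how the warm-up phase and the subsequent sketch-based subsampling could be wired into such a fine-tuning loop, the sketch below gives a minimal PyTorch-style illustration. The model.embed call (producing per-example representations to hash), the nws.estimate/nws.update interface, and the particular acceptance rule are assumptions made for this example, not the exact implementation; the 1/p importance weights are the standard unbiased reweighting.

```python
import torch

def train_with_nws(model, loader, optimizer, nws, warmup_iters=50, accept_floor=0.1):
    """Illustrative loop: warm up the sketch, then subsample each batch by estimated loss."""
    loss_fn = torch.nn.CrossEntropyLoss(reduction="none")
    for it, (x, y) in enumerate(loader):
        with torch.no_grad():
            feats = model.embed(x)                 # hypothetical per-example representations for hashing
        if it < warmup_iters:                      # warm-up: no subsampling, just populate the sketch
            keep = torch.ones(len(y), dtype=torch.bool)
            weights = torch.ones(len(y))
        else:
            est = nws.estimate(feats)              # sketch-based loss estimates, no extra forward pass
            p = (est / est.sum()).clamp(min=accept_floor, max=1.0)  # illustrative acceptance rule
            keep = torch.rand(len(y)) < p
            weights = 1.0 / p                      # importance weights to debias the subsampled loss
        if keep.any():
            per_example = loss_fn(model(x[keep]), y[keep])
            (per_example * weights[keep]).mean().backward()
            optimizer.step()
            optimizer.zero_grad()
            nws.update(feats[keep], per_example.detach())  # write true losses back into the sketch
```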
We update the NWS sketch with an initial update period of every iteration and then exponentially decay the update frequency (as we need fewer updates near convergence).\nOur experiments are run on an NVIDIA V100 GPU with 32 GB memory." }, { "figure_ref": [], "heading": "Algorithm and Implementation Details", "publication_ref": [], "table_ref": [], "text": "We consider the NWS sketch, which consists of two arrays: a weighted array and an unweighted array. The weighted array stores the loss values associated with each sample, while the unweighted array stores the number of points that are mapped to a bucket. First, we initialize R independent LSH hash functions, where R is the number of repetitions in each array. For the sketch to obtain a general idea of the loss landscape, we use the first few iterations to add data to the NWS sketch, with no sampling. We call this the warm-up phase. After the warm-up phase, we query both sketches with the incoming batch of data, and compute scores for both arrays (weighted scores and unweighted scores). The final score of each data point is computed as weighted score / unweighted score, which is equivalent to its estimated loss value. After calculating the estimated loss for each data point in the batch, we apply importance sampling such that data points with higher estimated loss values are sampled with higher probability.\nWe feed the model only the accepted samples, so the model is trained only on the sampled data points. Then, the true loss values of the sampled data points are calculated and added back to the sketch to update its values.\nFor more details, please refer to Algorithms 2, 3, 4, and 7." }, { "figure_ref": [ "fig_3" ], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "Table 3 shows the comparisons in terms of accuracy and convergence time (wall-clock time to reach baseline accuracy). According to this table, our algorithm meets baseline accuracy faster in terms of wall-clock time (lower convergence time), and eventually reaches a higher accuracy level than the baseline for all datasets. Figure 3 shows the plots comparing accuracy and loss versus the number of iterations for our method and the baseline. Note that for the first few iterations, the loss and accuracy values are the same for our method and the baseline; this is due to the warm-up phase, during which we do not subsample and only update the sketch." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b33", "b3", "b2", "b29", "b1", "b19", "b24", "b17", "b32", "b22" ], "table_ref": [], "text": "Sampling and kernel estimation have recently been the focus of a large body of work. 
Kernel Estimators: The problem of kernel density estimation was well-studied in the era of kernelized linear models (Vedaldi & Zisserman, 2012;Chen et al., 2012) and has recently been the focus of intense research due to various reductions of other problems (such as near-neighbor search, graph construction, and kernel matrix multiplication and eigen-decomposition) to density estimation (Coleman et al., 2020;Backurs et al., 2019;Siminelakis et al., 2019;Backurs et al., 2018). The NWS bears some resemblance to the RACE kernel density estimator (Coleman & Shrivastava, 2020;Luo & Shrivastava, 2018). However, there are a few crucial differences between this sketch and prior work. Existing work only considers the density estimation setting, a simpler problem where we are interested in approximating a kernel sum. To estimate the Nadaraya-Watson estimator, we must approximate the ratio of kernel sums, which is a harder quantity to evaluate. A naive application of the techniques from prior work would result in unbounded variance and an undefined estimator, since the value of the denominator of Equation 1 can become zero. To address this problem, we re-derive the Chernoff bounds for the ratio of (dependent) kernel estimators, noting that the same analysis also produces guarantees for the other hash-based kernel sum approximators.\nSampling: There are many works which attempt to improve the speed of training a model by sampling inputs. Elements of the problem have been independently studied in the context of active learning, acceleration of SGD (Paul et al., 2021;Johnson & Guestrin, 2018), heuristics to reduce the cost of training large networks, and coresets (Tukan et al., 2021;Mirzasoleiman et al., 2020). In this review, we distinguish between static and dynamic methods. Static methods are those that attempt to summarize the dataset without access to the model parameters, while dynamic methods permit access to the parameters as they change during training. Dynamic algorithms typically outperform their static counterparts in terms of sample complexity but incur a higher computational cost. For a comprehensive review of related work, please refer to the supplementary material." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We developed a novel sketch-based approximation of the Nadaraya-Watson estimator (NWS) that provably approximates the kernel regression model. Then, we proposed an efficient and dynamic data selection algorithm based on the NWS to improve the training of neural networks. Our algorithm utilizes the model parameters at each iteration to sample data points with higher loss values, without any explicit computation of the loss. We benchmarked our algorithm against a no-sampling baseline on four datasets and showed that our proposal outperforms the baseline in terms of accuracy and convergence time." } ]
Data sampling is an effective method to improve the training speed of neural networks, with recent results demonstrating that it can even break the neural scaling laws. These results critically rely on high-quality scores to estimate the importance of an input to the network. We observe that there are two dominant strategies: static sampling, where the scores are determined before training, and dynamic sampling, where the scores can depend on the model weights. Static algorithms are computationally inexpensive but less effective than their dynamic counterparts, which can cause end-to-end slowdown due to their need to explicitly compute losses. To address this problem, we propose a novel sampling distribution based on nonparametric kernel regression that learns an effective importance score as the neural network trains. However, nonparametric regression models are too computationally expensive to accelerate end-to-end training. Therefore, we develop an efficient sketch-based approximation to the Nadaraya-Watson estimator. Using recent techniques from high-dimensional statistics and randomized algorithms, we prove that our Nadaraya-Watson sketch approximates the estimator with exponential convergence guarantees. Our sampling algorithm outperforms the baseline in terms of wall-clock time and accuracy on four datasets.
Adaptive Sampling for Deep Learning via Efficient Nonparametric Proxies
[ { "figure_caption": "Figure 1 .1Figure 1. left: The distribution of empirical error for test dataset for multiple sketches with different values of R. right: The blue curve is the 99% percentile of empirical error and the red curve is the theoretical error bound.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Schematic diagram of our proposal 1) (Warm-up phase) For the first few iterations, we add data to the NWS sketch. Weighted array stores the loss values, and unweighted array stores the number of data. 2) After the warm-up phase, we query NWS with the data.The weighted score divided by unweighted score estimates the loss value for the data point, without any need to explicitly compute the loss with the network. 3) Sampling phase: Based on the estimated loss, we keep the points with higher loss values and reject ones with lower loss values with a higher probability. For more details see Algorithm section.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Algorithm 22Proposed Algorithm input Dataset D, Number of warm-up iterations Iter warm , NWS sketch 1: for Iter in Iterations do 2: (x, y) = Batch of data D 3: if Iter ≤ Iter warm then 4: loss = TrainModel(x, y, ) (Algorithm 3", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Algorithm 33TrainModel input Batch of data D = {(x, y, weight)} output Loss for samples of each batch loss 1: If weight is not given: weight = 1 2: forward run 3: Cross Entropy loss for each sample x i : loss i = CE(x i , y i ) 4: loss i = weight i • loss i 5: backpropagation 6: return {loss i } Algorithm 4 LossEstimation input query q , NWS output S set of scores (loss estimation values) 1: S t and S b are NWS arrays 2: NWS has R LSH functions h r 3: score weighted = Query(q, S t , h r | k=R k=1 ) (Algorithm 5) 4: score unweighted = Query(q, S b , h r | k=R k=1 ) 5: S = score weighted score unweighted 6: return S Algorithm 5 Query 1: Input: Query q, sketch S, h r as R LSH hash functions 2: Output: score 3: Compute query hash codes h r (q)| k=R k=1 , map them to buckets b r | r=R r=1 and retrieve the bucket values x br 4: score = E[x br | i=R i=1 ] {compute average over the retrieved values} 5: return score Algorithm 6 UpdateSketch 1: Input: Data D = (x, y), Value v, Sketch S 2: sketch has R hash functions H = {h 1 , ...h R } 3: for (x i , y i ) ∈ D do 4: for h r ∈ H do 5:", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Algorithm 77Samplinginput Dataset D = {(x, y)}, Estimated loss of each sample loss output Sample weight weight 1: p i = Accpeted probalilty of sample i via importance sampling over loss values 2: if x i is accepted then", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure3. Comparison of our proposal against the no-sampling baseline for four datasets in terms of loss and accuracy. Top Row: represents test accuracy (y-axis) vs. number of iterations (x-axis) Bottom Row: represents test loss (y-axis) vs. number of iterations (x-axis). 
The sampling ratio for MRPC, Financial-phrasebank, Twitter and Sentiment140 datasets are 30%, 30%, 40% and 50%, respectively.", "figure_data": "", "figure_id": "fig_5", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Mean squared error of NWS and linear regression (LR) on UCI regression datasets.", "figure_data": "LRNWSDatasetR 102050100200airfoli15574.412251.4 259.9 27.91 27.76 27.6gas222.9733.29 23.36 18.82 18.09 17.79energy9.6873.015 1.374 0.305 0.0878 0.078", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Statistics", "figure_data": "of the datasetsDataset#Train #TestMRPC3669409Financial-phrasebank4356484Twitter-financial-news8944993Sentiment1401.44M 1.6M", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Shabnam Daghaghi; Benjamin Coleman; Benito Geordie; Anshumali Shrivastava
[ { "authors": "I Alabdulmohsin; B Neyshabur; X Zhai", "journal": "", "ref_id": "b0", "title": "Revisiting neural scaling laws in language and vision", "year": "" }, { "authors": "A Backurs; M Charikar; P Indyk; P Siminelakis", "journal": "IEEE", "ref_id": "b1", "title": "Efficient density evaluation for smooth kernels", "year": "2018" }, { "authors": "A Backurs; P Indyk; T Wagner", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Space and time efficient kernel density estimation in high dimensions", "year": "2019" }, { "authors": "Y Chen; M Welling; A Smola", "journal": "", "ref_id": "b3", "title": "Super-samples from kernel herding", "year": "2012" }, { "authors": "F Chierichetti; R Kumar", "journal": "", "ref_id": "b4", "title": "Lsh-preserving functions and their applications", "year": "2012" }, { "authors": "B Coleman; A Shrivastava", "journal": "", "ref_id": "b5", "title": "Sub-linear race sketches for approximate kernel density estimation on streaming data", "year": "2020" }, { "authors": "B Coleman; A Shrivastava", "journal": "", "ref_id": "b6", "title": "A one-pass distributed and private sketch for kernel sums with applications to machine learning at scale", "year": "2021" }, { "authors": "B Coleman; R Baraniuk; A Shrivastava", "journal": "PMLR", "ref_id": "b7", "title": "Sub-linear memory sketches for near neighbor search on streaming data", "year": "2020" }, { "authors": "B Coleman; B Geordie; L Chou; R L Elworth; T Treangen; A Shrivastava", "journal": "PMLR", "ref_id": "b8", "title": "One-pass diversified sampling with application to terabyte-scale genomic sequence streams", "year": "2022" }, { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova; Bert", "journal": "", "ref_id": "b9", "title": "Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "W B Dolan; C Brockett", "journal": "", "ref_id": "b10", "title": "Automatically constructing a corpus of sentential paraphrases", "year": "2005" }, { "authors": "V Ganapathiraman; F C Rodriguez; A Joshi", "journal": "", "ref_id": "b11", "title": "Impon: Efficient importance sampling with online regression for rapid neural network training", "year": "2022" }, { "authors": "A Gionis; P Indyk; R Motwani", "journal": "", "ref_id": "b12", "title": "Similarity search in high dimensions via hashing", "year": "1999" }, { "authors": "W Greblicki; A Krzyżak; M Pawlak", "journal": "The annals of Statistics", "ref_id": "b13", "title": "Distributionfree pointwise consistency of kernel regression estimate", "year": "1984" }, { "authors": "L Györfi; M Kohler; A Krzyzak; H Walk", "journal": "Springer", "ref_id": "b14", "title": "A distribution-free theory of nonparametric regression", "year": "2002" }, { "authors": "P Indyk; R Motwani", "journal": "ACM", "ref_id": "b15", "title": "Approximate nearest neighbors: towards removing the curse of dimensionality", "year": "1998" }, { "authors": "A H Jiang; D L Wong; .-K Zhou; G Andersen; D G Dean; J Ganger; G R Joshi; G Kaminksy; M Kozuch; M Lipton; Z C ", "journal": "", "ref_id": "b16", "title": "Accelerating deep learning by focusing on the biggest losers", "year": "2019" }, { "authors": "T B Johnson; C Guestrin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b17", "title": "Training deep models faster with robust, approximate importance sampling", "year": "2018" }, { "authors": "R Lei; P Wang; R Li; P Jia; J Zhao; X Guan; C Deng", "journal": "", "ref_id": "b18", "title": "Fast 
rotation kernel density estimation over data streams", "year": "2021" }, { "authors": "C Luo; A Shrivastava", "journal": "", "ref_id": "b19", "title": "Arrays of (locality-sensitive) count estimators (ace) anomaly detection on the edge", "year": "2018" }, { "authors": "P Malo; A Sinha; P Korhonen; J Wallenius; P Takala", "journal": "Journal of the Association for Information Science and Technology", "ref_id": "b20", "title": "Good debt or bad debt: Detecting semantic orientations in economic texts", "year": "2014" }, { "authors": "H B Mcmahan; G Holt; D Sculley; M Young; D Ebner; J Grady; L Nie; T Phillips; E Davydov; D Golovin", "journal": "", "ref_id": "b21", "title": "Ad click prediction: a view from the trenches", "year": "2013" }, { "authors": "B Mirzasoleiman; J Bilmes; J Leskovec", "journal": "PMLR", "ref_id": "b22", "title": "Coresets for data-efficient training of machine learning models", "year": "2020" }, { "authors": "E A Nadaraya", "journal": "Theory of Probability & Its Applications", "ref_id": "b23", "title": "On estimating regression", "year": "1964" }, { "authors": "M Paul; S Ganguli; G K Dziugaite", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b24", "title": "Deep learning on a data diet: Finding important examples early in training", "year": "2021" }, { "authors": "J M Phillips", "journal": "", "ref_id": "b25", "title": "Coresets and sketches", "year": "" }, { "authors": "Hall Chapman", "journal": "CRC", "ref_id": "b26", "title": "", "year": "2017" }, { "authors": "V Sanh; L Debut; J Chaumond; T Wolf; Distilbert", "journal": "", "ref_id": "b27", "title": "a distilled version of bert: smaller, faster, cheaper and lighter", "year": "2019" }, { "authors": "B Settles", "journal": "", "ref_id": "b28", "title": "Active learning. Synthesis lectures on artificial intelligence and machine learning", "year": "2012" }, { "authors": "P Siminelakis; K Rong; P Bailis; M Charikar; P Levis", "journal": "PMLR", "ref_id": "b29", "title": "Rehashing kernel evaluation in high dimensions", "year": "2019" }, { "authors": "B Sorscher; R Geirhos; S Shekhar; S Ganguli; A S Morcos", "journal": "", "ref_id": "b30", "title": "Beyond neural scaling laws: beating power law scaling via data pruning", "year": "2022" }, { "authors": "K M Ting; T Washio; J R Wells; H Zhang", "journal": "IEEE", "ref_id": "b31", "title": "Isolation kernel density estimation", "year": "2021" }, { "authors": "M Tukan; C Baykal; D Feldman; D Rus", "journal": "Theoretical Computer Science", "ref_id": "b32", "title": "On coresets for support vector machines", "year": "2021" }, { "authors": "A Vedaldi; A Zisserman", "journal": "IEEE", "ref_id": "b33", "title": "Sparse kernel approximations for efficient classification and detection", "year": "2012" }, { "authors": "Z Xu; B Huang; B Jia", "journal": "IEEE Transactions on Vehicular Technology", "ref_id": "b34", "title": "An efficient radio map learning scheme based on kernel density function", "year": "2021" }, { "authors": "P Zhao; T Zhang", "journal": "PMLR", "ref_id": "b35", "title": "Stochastic optimization with importance sampling for regularized loss minimization", "year": "2015" } ]
[ { "formula_coordinates": [ 2, 392.4, 87.36, 63.59, 9.65 ], "formula_id": "formula_0", "formula_text": "y i = f (x i ) + ϵ i" }, { "formula_coordinates": [ 2, 307.44, 128.42, 234, 21.61 ], "formula_id": "formula_1", "formula_text": "E[y|x] because E[y i |x i ] = E[f (x i )] + E[ϵ i ] = f (x i )." }, { "formula_coordinates": [ 2, 373.67, 178.7, 88.95, 22.34 ], "formula_id": "formula_2", "formula_text": "E[y|x] = y p(x, y) p(x)" }, { "formula_coordinates": [ 2, 380.63, 268.05, 161.48, 24.88 ], "formula_id": "formula_3", "formula_text": "f (x) = i y i k(x, x i ) i k(x, x i )(1)" }, { "formula_coordinates": [ 3, 129.77, 137.41, 85.34, 20.06 ], "formula_id": "formula_4", "formula_text": "g(x) = xi∈D k(x i , x)" }, { "formula_coordinates": [ 3, 91.63, 467.56, 161.12, 25.92 ], "formula_id": "formula_5", "formula_text": "|ĝ(x) -g(x)| ≤ 32 g2 (x) R log 1/δ 1/2" }, { "formula_coordinates": [ 3, 279.33, 69.84, 263.76, 648.07 ], "formula_id": "formula_6", "formula_text": "S b Algorithm 1 Construct NWS input Dataset D = {(x i , y i )}, LSH family F, sketch pa- rameters R and W output Sketch S ∈ Z R×W ×2 1: Initialize S t , S b ∈ Z R×W = 0 2: Construct R hash functions H = {h 1 , ...h R } ∼ F 3: for (x i , y i ) ∈ D do 4: for h r ∈ H do 5: Increment S t [r, h r (x i )] by y i 6: Increment S b [r, h r (x i )] by 1 7:" }, { "formula_coordinates": [ 3, 312.42, 217.22, 91.44, 9.72 ], "formula_id": "formula_7", "formula_text": "9: return S = [S t , S b ]" }, { "formula_coordinates": [ 3, 322.33, 570.81, 203.22, 23.22 ], "formula_id": "formula_8", "formula_text": "Pr S t S b -f (x) ≤ ϵ ≥ 1 -e -Rϵ 2 /32B 2 (B+1+ϵ) 2" }, { "formula_coordinates": [ 3, 341.59, 649.16, 165.19, 34.47 ], "formula_id": "formula_9", "formula_text": "Pr[|S t (x) -g t (x)| > ϵ] ≤ e -Rϵ 2 /32g 2 t (x) Pr[|S b (x) -g b (x)| > ϵ] ≤ e -Rϵ 2 /32g 2 b (x)" }, { "formula_coordinates": [ 4, 83.76, 90.59, 177.36, 30.2 ], "formula_id": "formula_10", "formula_text": "1(x) = R r=1 [1 {hr(x1)==x} , ...1 {hr(x N )==x} ]" }, { "formula_coordinates": [ 4, 55.44, 161.06, 234, 57.47 ], "formula_id": "formula_11", "formula_text": "|S b (x) -g b (x)| < ϵ and y i ∈ [-B, B], then |S t (x) -g t (x)| ≤ B|S b (x) - g b (x)| < Bϵ. Therefore, if we satisfy |S b (x) -g b (x)| < B -1 ϵ, we will have both |S b (x) -g b (x)| < ϵ and |S b (x) - g b (x)| < ϵ." 
}, { "formula_coordinates": [ 4, 71.44, 250.73, 201.01, 81.79 ], "formula_id": "formula_12", "formula_text": "-ϵ < S t -g t < ϵ => -ϵ + g t < S t < ϵ + g t -ϵ < S b -g b < ϵ => -ϵ + g b < S b < ϵ + g b Pr g t -ϵ g b + ϵ ≤ S t S b ≤ g t + ϵ g b -ϵ ≤ 1 -e -Rϵ 2 /32B 2 g 2 b" }, { "formula_coordinates": [ 4, 87.03, 374.08, 170.82, 54.57 ], "formula_id": "formula_13", "formula_text": "g t -ϵ g b + ϵ = g t g b -ϵ g t + g b g 2 b + ϵg b ≥ g t g b -ϵ B + 1 g b + ϵ g t + ϵ g b -ϵ = g t g b + ϵ g t + g b g 2 b -ϵg b ≤ g t g b + ϵ B + 1 g b + ϵ" }, { "formula_coordinates": [ 4, 55.44, 464.26, 232.13, 81.77 ], "formula_id": "formula_14", "formula_text": "Pr S t S b - g t g b ≤ ϵ ′ ≥ 1 -e -Rϵ 2 /32B 2 g 2 b , ϵ ′ = ϵ B + 1 g b + ϵ Replacing ϵ = ϵ ′ g b B+1-ϵ ′ results in Pr S t S b -f (x) ≤ ϵ ′ ≥ 1 -e -Rϵ ′2 /32B 2 (B+1-ϵ ′ ) 2" }, { "formula_coordinates": [ 4, 55.44, 643.07, 47.36, 15.11 ], "formula_id": "formula_15", "formula_text": "R = O B 4 ϵ 2" }, { "formula_coordinates": [ 4, 121.96, 704.55, 99.97, 12.44 ], "formula_id": "formula_16", "formula_text": "δ ≤ e -Rϵ 2 /32B 2 (B+1+ϵ) 2" }, { "formula_coordinates": [ 4, 354.51, 86.39, 139.48, 68.71 ], "formula_id": "formula_17", "formula_text": "Rϵ 2 /32B 2 (B + 1 + ϵ) 2 ≥ log 1/δ R ≥ 32B 2 (B + 1 + ϵ) 2 ϵ 2 log 1/δ R ≥ 32B 2 (B + 2) 2 ϵ 2 log 1/δ" }, { "formula_coordinates": [ 4, 318.4, 321.06, 209.7, 24.77 ], "formula_id": "formula_18", "formula_text": "ϵ 2 = 32B 2 (B + 1 -ϵ) log 1 δ R ≤ 32B 2 (B + 1) log 1 δ" }, { "formula_coordinates": [ 4, 346.44, 384.09, 155.99, 26.08 ], "formula_id": "formula_19", "formula_text": "|ϵ| ≤ B 32 log 1 δ (B + 1) R = O( 1 √ R )" } ]
2023-11-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b32", "b40", "b29", "b30", "b38", "b39", "b45", "b11", "b16", "b29", "b30", "b39", "b32", "b12", "b3", "b34", "b18", "b15", "b52", "b11", "b6" ], "table_ref": [], "text": "Computer Vision is currently undergoing a revolution dominated by foundation models [5,10,12,25,32,33,36,41] and multi-modal large-language models [2,4,6,23,26,30,31,39,40,46,50]. These models have demonstrated remarkable performance across a wide spectrum of tasks, including segmentation [5, 12,17], object detection [24, 48, 52], understanding [2,18,23,30,31,40] , and generation [33,36,49]. However, among these tasks, the critical task of object counting has received relatively less attention.\nObject counting, the task of estimating the number of specific objects present in an image, is in high demand across numerous practical fields, such as transportation, agriculture, industry, biology, etc. Existing solutions for object counting can be broadly categorized into four types:\n• As density map regression task. A common approach [8,11,13,29,34,38] is to regress a 2D density map, the summation of which is used as the counting result.\nAlthough effective, the less intuitive visualization of the density map [8] makes it difficult for users to assess the accuracy of the counting results. • As closed-set detection task. Another straightforward solution involves employing a closed-set detector (E.g. YOLO [35]) to detect objects, where the summation of GLIP [19],Semantic-SAM [16], SEEM [53], SAM [12], UniPose [43], MQ-Det [42], OWL-ViT [27], DINOv [15]." }, { "figure_ref": [], "heading": "Detection Visual Prompting", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Open-Set Interactive", "publication_ref": [ "b13", "b43", "b8", "b3" ], "table_ref": [], "text": "the number of detected boxes serves as the counting result. However, limited by the fixed categories, this method requires data re-collection and re-training efforts for novel categories, which is time-consuming and laborintensive.\n• As open-vocabulary detection task. To overcome the limitations of closed-set detection methods, an alternative approach is to adapt open vocabulary detector (E.g. Grounding DINO [24]) to detect arbitrary objects through text prompts. However, the task of counting poses a significant challenge as many objects do not have concise textual descriptions, making object specification by text difficult. • As MLLM QA task. Multi-modal Large Language Models (MLLM) can also be used for object counting through question answering [14,44]. However, the issue of hallucination [9] in multi-modal LLMs affects the confidence level of their counting results, as users may be skeptical of a numerical output from an MLLM without additional supporting evidence. By highlighting the limitations of existing counting solutions, we argue that a practical counting system should possess the following four properties Guided by this design philosophy, we develop an detection-based counting model, called T-Rex, as shown in Fig. 2. Users can specify the object of interest by marking boxes or points on the reference image. T-Rex, in return, detects all instances with a similar pattern in the target image, and the cumulative sum of the detected boxes represents the counting result. With the visual feedback from T-Rex, users can interactively add additional prompts on missed or falsely-detected objects. 
This interactive process allows for continual refinement of T-Rex's predictions, empowering users to confidently assess the accuracy of the counting results. Notably, this interactive process remains fast and resource-efficient, as each round of interaction only requires forwarding the decoder of T-Rex.\nT-Rex has achieved state-of-the-art results on two counting benchmarks [29,34]. To further measure its potential, we introduce a new counting benchmark, CA-44, which comprises 44 datasets across eight domains, presenting diverse and formidable challenges. Our experimental findings demonstrate that T-Rex possesses strong zero-shot counting capabilities and can achieve good performance in various scenarios. Finally, we explore a wide range of application scenarios of T-Rex. With its versatile counting capabilities and interactive features, T-Rex has the potential to make substantial contributions to various domains, such as retail, transportation, agriculture, industry, livestock, medicine, etc." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b0", "b46", "b3", "b12", "b11", "b11", "b52", "b15" ], "table_ref": [], "text": "Object Counting. Methods for object counting can be divided into class-specific and class-agnostic. Class-specific approaches typically use object detection models to count specific categories, such as people [1,47], cars [7, 28], or animals [3]. These methods are limited to predefined classes and require additional data labeling for new categories, which is time-consuming and labor-intensive. To overcome the limitations of closed-set object detectors, class-agnostic methods [8,11,29,34,38] have been developed to regress a density map [13] based on the correlation features between the image and a few visual exemplars. However, these techniques often lack an intuitive visualization, making it difficult for users to verify the model's accuracy.\nInteractive Models. Interactive models have shown significant promise in aligning with human intentions within the field of computer vision. SAM [12] presents an interactive segmentation model capable of accommodating point, box, and text-based input, achieving remarkable zero-shot segmentation by leveraging the large-scale SA-1B dataset [12]. SEEM [53] and Semantic-SAM [16] have extended this to more general segmentation models that can output the semantic meaning of the segmented mask. In contrast, the field of interactive object detection is less explored. One concurrent work, DINOv [15], explores visual in-context prompting in both referring and general segmentation. However, its performance is sub-optimal when processing images with densely packed objects, and it lacks the capability for the multi-round interactions necessary for correcting erroneous detection results." }, { "figure_ref": [], "heading": "Overview of T-Rex", "publication_ref": [], "table_ref": [], "text": "We briefly introduce the T-Rex model. T-Rex comprises three components: an image encoder, a prompt encoder, and a box decoder, as illustrated in Fig. 3. Given a target image input I tgt and optionally a reference image input I ref (the target image can also serve as the reference image in the absence of a separate reference image), the image encoder first extracts the visual features E tgt , E ref .\nThen, using the user-drawn boxes or points as prompts P for the target object on the reference image, the prompt encoder extracts the encoded visual prompt P enc from the reference image feature E ref . 
Finally, the box decoder combines the target image feature E tgt and the encoded visual prompt P enc as inputs, outputting detected boxes B along with their associated confidence scores S. A predetermined score threshold is applied to filter the detected boxes, and the remaining boxes are summed to produce the final object count.\nE tgt = ImageEncoder(I tgt ) (1)\nE ref = ImageEncoder(I ref ) (2)\nP enc = PromptEncoder(P, E ref ) (3)\nB, S = BoxDecoder(P enc , E tgt ) (4)\n#Count = ThreshFilter(B, S) (5)" }, { "figure_ref": [], "heading": "Workflows", "publication_ref": [], "table_ref": [], "text": "T-Rex offers three major interactive workflows, as shown in Fig. 4. We explain each workflow and its application below.\nPositive-only Prompt Mode. In most counting scenarios, users typically only need to click once or draw one box, and T-Rex can effectively detect all objects with a similar pattern. However, in cases involving dense and small objects, a single round of prompting may be insufficient. In such cases, users have the option to add prompts on the missed regions, based on the visual feedback from T-Rex. This iterative refinement approach allows for more accurate counting results.\nPositive with Negative Prompt Mode. In scenarios where interference from other similar objects is present, T-Rex may generate falsely detected boxes. As illustrated in Fig. 4, when a prompt is directed at a green apple, T-Rex might erroneously detect the orange tangerine due to the strong geometric resemblance between these two object types. In such cases, users can rectify the counting result by adding negative prompts to the falsely detected objects. " }, { "figure_ref": [], "heading": "Image Encoder", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "T-Rex is essentially an open-set object detection model.", "publication_ref": [ "b18", "b6", "b50" ], "table_ref": [], "text": "Compared with open-vocabulary object detectors [19,24,27,51] that rely on text prompts, T-Rex adopts visual prompts instead. In many real-world counting applications, text descriptions may not sufficiently capture all object details, so employing visual prompts offers a more direct and versatile alternative.\nIn the context of the object counting task, a paramount consideration is the need for highly reliable predictions from the model. Given that the counting results are represented as statistical values, even a minor discrepancy in the predicted value signifies a counting failure. Hence, we design T-Rex to be interactive, allowing users to iteratively rectify counting results based on the visual feedback, thus enhancing the counting accuracy. Regarding the model structure, T-Rex requires only a single forward pass through the Image Encoder, while subsequent rounds of interaction involve only the Prompt Encoder and Box Decoder. This streamlined approach ensures that the entire interaction process remains lightweight and fast. " }, { "figure_ref": [ "fig_3" ], "heading": "Count Anything Benchmark", "publication_ref": [], "table_ref": [], "text": "To conduct a holistic performance evaluation of the T-Rex model, we develop a new object counting benchmark named CA-44. This benchmark includes a total of 44 datasets covering eight distinct domains, as depicted in Fig. 5.\nDataset Distribution. 
The majority of the datasets included in CA-44 were collected from Roboflow and underwent additional filtering procedures. For instance, we eliminated images containing fewer than 10 instances, since object counting is mostly focused on dense scenes. The composition of the CA-44 dataset is detailed in Table 1.\nStatistics. As illustrated in Fig. 6, the CA-44 benchmark primarily features images with small and densely packed objects. These characteristics reflect the common attributes of scenes in the object counting domain." }, { "figure_ref": [], "heading": "Experiment Results", "publication_ref": [ "b3" ], "table_ref": [], "text": "Settings. In addition to our proposed CA-44 benchmark, we also conduct evaluations on the commonly-used counting dataset FSC147 [34] and the more challenging dataset FSCD-LVIS [29]. FSC147 comprises 147 categories of objects and 1190 images in the test set, and FSCD-LVIS comprises 377 categories and 1014 images in the test set. Both datasets provide three bounding boxes of exemplar objects for each image, which we use as the visual prompts for T-Rex.\nMetrics. We adopt two metrics for evaluation: the Mean Average Error (MAE) metric, a widely employed standard in object counting, and the Normalized Mean Average Error (NMAE) metric for more intuitive results. The mathemati-" }, { "figure_ref": [], "heading": "Type", "publication_ref": [], "table_ref": [], "text": "Method MAE↓" }, { "figure_ref": [], "heading": "Density Map", "publication_ref": [ "b3", "b36", "b21" ], "table_ref": [], "text": "FamNet [34] 26.76\nBMNet+ [37] 16.89\nLaoNet [20] 15.78\nCountTR [22] 12.06\nLOCA [38] 12.53\nDetection T-Rex (Ours) 10.59\nTable 2. One-shot counting evaluation on FSC147 test-set. One-shot indicates that each image utilizes one exemplar box as the visual prompt." }, { "figure_ref": [], "heading": "Type Method MAE↓ Density Map", "publication_ref": [ "b3", "b36", "b44", "b21" ], "table_ref": [], "text": "FamNet [34] 22.08\nBMNet+ [37] 14.62\nSAFECount [45] 14.32\nCountTR [22] 11.95\nLOCA [38] 10.79\nDetection T-Rex (Ours) 8.72\nTable 3. Three-shot counting evaluation on FSC147 test-set. Three-shot indicates that each image utilizes three exemplar boxes as visual prompts.\ncal expressions for the two metrics are as follows:\nMAE = (1/J) Σ_{j=1}^{J} |c*_j - c_j| (6)\nNMAE = (1/J) Σ_{j=1}^{J} |c*_j - c_j| / c*_j (7)\nwhere J represents the total number of test images, and c*_j and c_j denote the ground truth (GT) and the predicted number of objects for image j, respectively." }, { "figure_ref": [], "heading": "Results on FSC147 and FSCD-LVIS", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "The results on FSC147 and FSCD-LVIS are presented in Table 2, Table 3, and Table 4. T-Rex demonstrates state-of-the-art performance when compared to other density map regression-based methods in both one-shot and three-shot settings. Beyond competitive performance, T-Rex also provides a user-friendly interactive counting interface. As a detection-based method, T-Rex offers intuitive visual feedback, allowing users to iteratively refine counting results and make informed judgments regarding their completeness. This interactive process enables T-Rex to achieve high reliability, contrasting with the less robust density map regression methods." 
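As a small worked illustration of the metrics in Equations 6 and 7, the snippet below computes MAE and NMAE from lists of ground-truth and predicted counts. The numbers in the example are toy values chosen for the demonstration, not results reported for T-Rex.

```python
import numpy as np

def mae_nmae(gt_counts, pred_counts):
    gt = np.asarray(gt_counts, dtype=float)
    pred = np.asarray(pred_counts, dtype=float)
    err = np.abs(gt - pred)
    # Eq. (6) and Eq. (7); NMAE assumes every ground-truth count is nonzero
    return err.mean(), (err / gt).mean()

# Toy example with three images
print(mae_nmae([12, 40, 7], [10, 44, 7]))  # -> (2.0, ~0.089)
```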
}, { "figure_ref": [], "heading": "Results on CA-44", "publication_ref": [ "b29", "b43" ], "table_ref": [], "text": "Results on CA-44 are visually presented in Fig. 7 the open-ocabulary detector Grounding DINO, T-Rex with visual prompt is more competitive. This underscores the limitations of text in providing sufficient descriptions, highlighting the significance of introducing visual prompts as a method.\nWe have also conducted a comparison with the state-ofthe-art multi-modality model, GPT-4V [30], which has previously demonstrated counting capabilities [44]. For an optimized evaluation, we selected a total of 100 images from CA-44 for testing. We tested GPT-4V in two settings: a zero-shot approach, where we inform GPT-4V about the objects in the image for counting, and a one-shot approach, where we annotate the objects in the image with bounding boxes for GPT-4V to count. The One-shot approach is similar to T-Rex in that both methods rely on visual prompts. Prompts used are detailed in Fig. 8. Results presented in Fig. 9 illustrate that T-Rex outperforms GPT-4V in counting accuracy, suggesting that while large multi-modality models perform well in understanding tasks, they may still lack the nuanced capabilities for accurate object perception." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we have introduced T-Rex, an innovative model for interactive object counting, characterized by its ability to detect and count objects using visual prompt. T-Rex represents a significant advancement in visual prompting methodologies within computer vision, paralleling the successes observed in NLP with LLMs facilitating Human-AI interactions through text prompts. This parallel suggests vast possibilities, that the application of visual prompts in computer vision could herald a comparable breakthrough like those in NLP." }, { "figure_ref": [ "fig_0" ], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "T-Rex , in its intial version, has several limitations. We list a series of failure cases in Fig. 22, where T-Rex may perform poorly. Single-Target Scenes. When only a single prompt is used against the background, T-Rex tends to misidentify dense objects clusters. Dense Multi-Object Scenes. T-Rex struggles in scenes with densely populated multi-object types, often leading to false detections. Addressing this issue may require either multiple iterations of prompting or the use of negative prompts. Cross-Image Workflow. A notable limitation emerges in cross-image workflow, especially when T-Rex is applied to scenes with a single target. In such scenarios, there is a significant risk of over-fitting, where T-Rex tends to ignore the user's prompt on the referece image. For example, even when prompted on tomatoes, T-Rex may still detect silkworm eggs. We will continue to improve the performance and robustness of T-Rex." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "We would like to express our deepest gratitude to multiple teams within IDEA for their substantial support in the T-Rex project. We sincerely appreciate the CVR team, whose essential contributions and technical expertise were pivotal in realizing the project's goals. We thank Wei Liu, Xiaohui Wang, and Yakun Hu from the Product team for their strategic insights and innovative input in the development of the demo. 
Appreciation is also extended to Yuanhao Zhu and Ce Feng from the Front-End team for their technical excellence and dedication. The robust solutions provided by Weiqiang Hu, Xiaoke Jiang, and Zhiqiang Li from the Back-End team were also crucial in supporting the project's infrastructure. We also thank Jie Yang for helpful discussion and Ling-Hao Chen for helping in video demos." }, { "figure_ref": [], "heading": "Potential Applications", "publication_ref": [], "table_ref": [], "text": "In this section, we explore the application of T-Rex in various fields. As a detection-based model, it can be used as an object counter, as well as an automatic-annotation tool. " } ]
Figure 1. We introduce an interactive object counting model, T-Rex. Given boxes or points specified on the reference image, T-Rex can detect all instances on the target image that exhibit a similar pattern to the specified object, which are then summed to obtain the counting result. We use SAM [12] to generate masks prompted by the boxes detected by T-Rex for better visualization.
T-Rex: Counting by Visual Prompting
[ { "figure_caption": "Figure 22Figure 2. T-Rex is an object counting model, which is characterized by four features: detection-based, visual promptable, interative, and open-set. Listed methods are: Grounding DINO [24],GLIP[19],Semantic-SAM[16], SEEM[53], SAM[12], UniPose [43], MQ-Det [42], OWL-ViT [27], DINOv [15].", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": ": a) Intuitive Visual Feedback: It should provide highly interpretable visual feedback (e.g. bounding box), allowing users to verify the accuracy of the counting results. b) Open-Set: It should be capable of counting any objects, without constraints on predefined categories. c) Visual Promptable: It should allow users to specify the objects for counting through visual examples, given the limitation of text to discripe various objects. d) Interactive: It should enable users to actively participate in the counting process to correct errors made by the model.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .Figure 5 .35Figure 3. Overview of the T-Rex model. T-Rex is a detection-based model comprising an image encoder to extract image feature, a prompt encoder to encode visual prompts (points or boxes) provided by users, and a box decoder to output the detected boxes.", "figure_data": "", "figure_id": "fig_2", "figure_label": "35", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure6. Statistics of the CA-44 benchmark, highlighting the prevalence of small and dense objects. Object size categorization follows the COCO dataset[21] , where objects with an area smaller than 32 2 are classified as small, those with an area between 32 2 and 96 2 as medium, and objects with area greater than 96 2 as large.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .Figure 8 .Figure 9 .789Figure 7. Results on Full CA-44. We compare T-Rex with open-vocabulary detector Grounding DINO [24] and density map regression-based methods[34,37, 38].", "figure_data": "", "figure_id": "fig_4", "figure_label": "789", "figure_type": "figure" }, { "figure_caption": "Figure 10 Figure 11 Figure 12101112Figure 10. T-Rex applied to agriculture. To more intuitively visualize overlapping and dense objects, we prompt SAM with the boxes predicted by T-Rex to obtain segmentation masks.", "figure_data": "", "figure_id": "fig_5", "figure_label": "101112", "figure_type": "figure" }, { "figure_caption": "Figure 13 .13Figure 13. T-Rex applied to industry in cross-image reference mode. To more intuitively visualize overlapping and dense objects, we prompt SAM with the boxes predicted by T-Rex to obtain segmentation masks.", "figure_data": "", "figure_id": "fig_6", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 .Figure 15 BiologyFigure 16 OCRFigure 17 Figure 18 Figure 19 .Figure 22 .14151617181922Figure 14. T-Rex applied to livestock and wild animal. 
To more intuitively visualize overlapping and dense objects, we prompt SAM with the boxes predicted by T-Rex to obtain segmentation masks.", "figure_data": "", "figure_id": "fig_7", "figure_label": "14151617181922", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Dataset distribution of CA-44.", "figure_data": "Category# Datasets # Images # InstancesIndustrial61,781114,688Object43,64494,328Biology41,83463,235OCR230586,986Animal84,674122,069Human31,11944,212Aerial55,011156,824Agriculture1211,717488,719Total4430,0851,171,061", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Three-shot counting evaluation on FSCD-LVIS test-set.", "figure_data": "", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" } ]
Qing Jiang; Feng Li; Tianhe Ren; Shilong Liu; Zhaoyang Zeng; Kent Yu; Lei Zhang
[ { "authors": "Shahira Abousamra; Minh Hoai; Dimitris Samaras; Chao Chen", "journal": "", "ref_id": "b0", "title": "Localization in the crowd with topological constraints", "year": "2021" }, { "authors": "Jean-Baptiste Alayrac; Jeff Donahue; Pauline Luc; Antoine Miech; Iain Barr; Yana Hasson; Karel Lenc; Arthur Mensch; Katherine Millican; Malcolm Reynolds", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b1", "title": "Flamingo: a visual language model for few-shot learning", "year": "2022" }, { "authors": "Carlos Arteta; Victor Lempitsky; Andrew Zisserman", "journal": "Springer", "ref_id": "b2", "title": "Counting in the wild", "year": "2016" }, { "authors": "Xi Chen; Xiao Wang; Lucas Beyer; Alexander Kolesnikov; Jialin Wu; Paul Voigtlaender; Basil Mustafa; Sebastian Goodman; Ibrahim Alabdulmohsin; Piotr Padlewski", "journal": "", "ref_id": "b3", "title": "Pali-3 vision language models: Smaller, faster, stronger", "year": "2023" }, { "authors": "Bowen Cheng; Ishan Misra; Alexander G Schwing; Alexander Kirillov; Rohit Girdhar", "journal": "", "ref_id": "b4", "title": "Masked-attention mask transformer for universal image segmentation", "year": "2022" }, { "authors": "Rohit Girdhar; Alaaeldin El-Nouby; Zhuang Liu; Mannat Singh; Kalyan Vasudev Alwala; Armand Joulin; Ishan Misra", "journal": "", "ref_id": "b5", "title": "Imagebind: One embedding space to bind them all", "year": "2023" }, { "authors": "Meng-Ru Hsieh; Yen-Liang Lin; Winston H Hsu", "journal": "", "ref_id": "b6", "title": "Dronebased object counting by spatially regularized regional proposal network", "year": "2017" }, { "authors": "Yifeng Huang; Viresh Ranjan; Minh Hoai", "journal": "", "ref_id": "b7", "title": "Interactive class-agnostic object counting", "year": "2023" }, { "authors": "Ziwei Ji; Nayeon Lee; Rita Frieske; Tiezheng Yu; Dan Su; Yan Xu; Etsuko Ishii; Ye ; Jin Bang; Andrea Madotto; Pascale Fung", "journal": "ACM Computing Surveys", "ref_id": "b8", "title": "Survey of hallucination in natural language generation", "year": "2023" }, { "authors": "Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; Quoc Le; Yun-Hsuan Sung; Zhen Li; Tom Duerig", "journal": "PMLR", "ref_id": "b9", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021" }, { "authors": "Ruixiang Jiang; Lingbo Liu; Changwen Chen", "journal": "", "ref_id": "b10", "title": "Clipcount: Towards text-guided zero-shot object counting", "year": "2023" }, { "authors": "Alexander Kirillov; Eric Mintun; Nikhila Ravi; Hanzi Mao; Chloe Rolland; Laura Gustafson; Tete Xiao; Spencer Whitehead; Alexander C Berg; Wan-Yen Lo; Piotr Dollar; Ross Girshick", "journal": "", "ref_id": "b11", "title": "Segment anything", "year": "2023" }, { "authors": "Victor Lempitsky; Andrew Zisserman", "journal": "Advances in neural information processing systems", "ref_id": "b12", "title": "Learning to count objects in images", "year": "2010" }, { "authors": "Bo Li; Peiyuan Zhang; Jingkang Yang; Yuanhan Zhang; Fanyi Pu; Ziwei Liu", "journal": "", "ref_id": "b13", "title": "Otterhd: A high-resolution multimodality model", "year": "2023" }, { "authors": "Feng Li; Qiang Jiang; Hao Zhang; Tianhe Ren; Shilong Liu; Xueyan Zou; Huaizhe Xu; Hongyang Li; Jianwei Yang; Chunyuan Li; Lei Zhang; Jianfeng Gao", "journal": "", "ref_id": "b14", "title": "Visual incontext prompting", "year": "2023" }, { "authors": "Feng Li; Hao Zhang; Peize Sun; Xueyan Zou; Shilong Liu; Jianwei Yang; Chunyuan Li; Lei 
Zhang; Jianfeng Gao", "journal": "", "ref_id": "b15", "title": "Semantic-sam: Segment and recognize anything at any granularity", "year": "2023" }, { "authors": "Feng Li; Hao Zhang; Huaizhe Xu; Shilong Liu; Lei Zhang; Lionel M Ni; Heung-Yeung Shum", "journal": "", "ref_id": "b16", "title": "Mask dino: Towards a unified transformer-based framework for object detection and segmentation", "year": "2023" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b17", "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "Liunian Harold; Li ; Pengchuan Zhang; Haotian Zhang; Jianwei Yang; Chunyuan Li; Yiwu Zhong; Lijuan Wang; Lu Yuan; Lei Zhang; Jenq-Neng Hwang", "journal": "", "ref_id": "b18", "title": "Grounded language-image pre-training", "year": "2022" }, { "authors": "Hui Lin; Xiaopeng Hong; Yabin Wang", "journal": "", "ref_id": "b19", "title": "Object counting: You only need to look at one", "year": "2021" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b20", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Chang Liu; Yujie Zhong; Andrew Zisserman; Weidi Xie", "journal": "", "ref_id": "b21", "title": "Countr: Transformer-based generalised visual counting", "year": "2022" }, { "authors": "Haotian Liu; Chunyuan Li; Qingyang Wu; Yong Jae Lee", "journal": "", "ref_id": "b22", "title": "Visual instruction tuning", "year": "2023" }, { "authors": "Shilong Liu; Zhaoyang Zeng; Tianhe Ren; Feng Li; Hao Zhang; Jie Yang; Chunyuan Li; Jianwei Yang; Hang Su; Jun Zhu", "journal": "", "ref_id": "b23", "title": "Grounding dino: Marrying dino with grounded pre-training for open-set object detection", "year": "2023" }, { "authors": "Ze Liu; Han Hu; Yutong Lin; Zhuliang Yao; Zhenda Xie; Yixuan Wei; Jia Ning; Yue Cao; Zheng Zhang; Li Dong", "journal": "", "ref_id": "b24", "title": "Swin transformer v2: Scaling up capacity and resolution", "year": "2022" }, { "authors": "Tengchao Lv; Yupan Huang; Jingye Chen; Lei Cui; Shuming Ma; Yaoyao Chang; Shaohan Huang; Wenhui Wang; Li Dong; Weiyao Luo", "journal": "", "ref_id": "b25", "title": "Kosmos-2.5: A multimodal literate model", "year": "2023" }, { "authors": "Matthias Minderer; Alexey Gritsenko; Austin Stone; Maxim Neumann; Dirk Weissenborn; Alexey Dosovitskiy; Aravindh Mahendran; Anurag Arnab; Mostafa Dehghani; Zhuoran Shen", "journal": "Springer", "ref_id": "b26", "title": "Simple open-vocabulary object detection", "year": "2022" }, { "authors": "Goran Nathan Mundhenk; Wesam A Konjevod; Kofi Sakla; Boakye", "journal": "Springer", "ref_id": "b27", "title": "A large contextual dataset for classification, detection and counting of cars with deep learning", "year": "2016" }, { "authors": "Thanh Nguyen; Chau Pham; Khoi Nguyen; Minh Hoai", "journal": "Springer", "ref_id": "b28", "title": "Few-shot object counting and detection", "year": "2022" }, { "authors": " Openai", "journal": "", "ref_id": "b29", "title": "Gpt-4v(ision) system card", "year": "2023" }, { "authors": "Zhiliang Peng; Wenhui Wang; Li Dong; Yaru Hao; Shaohan Huang; Shuming Ma; Furu Wei", "journal": "", "ref_id": "b30", "title": "Kosmos-2: Grounding multimodal large language models to the world", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; 
Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b31", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b32", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Udbhav Viresh Ranjan; Thu Sharma; Minh Nguyen; Hoai", "journal": "", "ref_id": "b33", "title": "Learning to count everything", "year": "2021" }, { "authors": "Joseph Redmon; Santosh Divvala; Ross Girshick; Ali Farhadi", "journal": "", "ref_id": "b34", "title": "You only look once: Unified, real-time object detection", "year": "2016" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b35", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Min Shi; Hao Lu; Chen Feng; Chengxin Liu; Zhiguo Cao", "journal": "", "ref_id": "b36", "title": "Represent, compare, and learn: A similarity-aware framework for class-agnostic counting", "year": "2022" }, { "authors": "Nikola Ðukić; Alan Lukežič; Vitjan Zavrtanik; Matej Kristan", "journal": "", "ref_id": "b37", "title": "A low-shot object counting network with iterative prototype adaptation", "year": "2023" }, { "authors": "Jianfeng Wang; Zhengyuan Yang; Xiaowei Hu; Linjie Li; Kevin Lin; Zhe Gan; Zicheng Liu; Ce Liu; Lijuan Wang", "journal": "", "ref_id": "b38", "title": "Git: A generative image-to-text transformer for vision and language", "year": "2022" }, { "authors": "Wenhai Wang; Zhe Chen; Xiaokang Chen; Jiannan Wu; Xizhou Zhu; Gang Zeng; Ping Luo; Tong Lu; Jie Zhou; Yu Qiao", "journal": "", "ref_id": "b39", "title": "Visionllm: Large language model is also an open-ended decoder for vision-centric tasks", "year": "2023" }, { "authors": "Wenhai Wang; Jifeng Dai; Zhe Chen; Zhenhang Huang; Zhiqi Li; Xizhou Zhu; Xiaowei Hu; Tong Lu; Lewei Lu; Hongsheng Li", "journal": "", "ref_id": "b40", "title": "Internimage: Exploring large-scale vision foundation models with deformable convolutions", "year": "2023" }, { "authors": "Yifan Xu; Mengdan Zhang; Chaoyou Fu; Peixian Chen; Xiaoshan Yang; Ke Li; Changsheng Xu", "journal": "", "ref_id": "b41", "title": "Multimodal queried object detection in the wild", "year": "2023" }, { "authors": "Jie Yang; Ailing Zeng; Ruimao Zhang; Lei Zhang", "journal": "", "ref_id": "b42", "title": "Unipose: Detection any keypoints", "year": "2023" }, { "authors": "Zhengyuan Yang; Linjie Li; Kevin Lin; Jianfeng Wang; Chung-Ching Lin; Zicheng Liu; Lijuan Wang", "journal": "", "ref_id": "b43", "title": "The dawn of lmms: Preliminary explorations with gpt-4v (ision)", "year": "2023" }, { "authors": "Zhiyuan You; Kai Yang; Wenhan Luo; Xin Lu; Lei Cui; Xinyi Le", "journal": "", "ref_id": "b44", "title": "Few-shot object counting with similarity-aware feature enhancement", "year": "2023" }, { "authors": "Jiahui Yu; Zirui Wang; Vijay Vasudevan; Legg Yeung; Mojtaba Seyedhosseini; Yonghui Wu", "journal": "", "ref_id": "b45", "title": "Coca: Contrastive captioners are image-text foundation models", "year": "2022" }, { "authors": "Cong Zhang; Hongsheng Li; Xiaogang Wang; Xiaokang Yang", "journal": "", "ref_id": "b46", "title": "Cross-scene crowd counting via deep convolutional neural networks", "year": "2015" }, { "authors": "Hao Zhang; Feng Li; Shilong Liu; Lei Zhang; Hang Su; Jun Zhu; Lionel M Ni; Heung-Yeung 
Shum", "journal": "", "ref_id": "b47", "title": "Dino: Detr with improved denoising anchor boxes for end-to-end object detection", "year": "2022" }, { "authors": "Lvmin Zhang; Anyi Rao; Maneesh Agrawala", "journal": "", "ref_id": "b48", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Yiyuan Zhang; Kaixiong Gong; Kaipeng Zhang; Hongsheng Li; Yu Qiao; Wanli Ouyang; Xiangyu Yue", "journal": "", "ref_id": "b49", "title": "Metatransformer: A unified framework for multimodal learning", "year": "2023" }, { "authors": "Xingyi Zhou; Rohit Girdhar; Armand Joulin; Philipp Krähenbühl; Ishan Misra", "journal": "Springer", "ref_id": "b50", "title": "Detecting twenty-thousand classes using image-level supervision", "year": "2022" }, { "authors": "Zhuofan Zong; Guanglu Song; Yu Liu", "journal": "", "ref_id": "b51", "title": "Detrs with collaborative hybrid assignments training", "year": "2023" }, { "authors": "Xueyan Zou; Jianwei Yang; Hao Zhang; Feng Li; Linjie Li; Jianfeng Gao; Yong Jae Lee", "journal": "", "ref_id": "b52", "title": "Logistics #:171 #:30 #:69 Human #:139 #:6 Figure", "year": "2023" }, { "authors": "T-Rex ", "journal": "", "ref_id": "b53", "title": "applied to logistics and human. To more intuitively visualize overlapping and dense objects, we prompt SAM with the boxes predicted by T-Rex to obtain segmentation masks", "year": "" } ]
[ { "formula_coordinates": [ 3, 363.15, 380.24, 181.97, 9.68 ], "formula_id": "formula_0", "formula_text": "E tgt = ImageEncoder (I tgt )(1)" }, { "formula_coordinates": [ 3, 360.94, 397.07, 184.17, 9.68 ], "formula_id": "formula_1", "formula_text": "E ref = ImageEncoder (I ref )(2)" }, { "formula_coordinates": [ 3, 349.02, 413.89, 196.1, 9.68 ], "formula_id": "formula_2", "formula_text": "P enc = PromptEncoder (P, E ref )(3)" }, { "formula_coordinates": [ 3, 347.79, 449.89, 197.32, 8.99 ], "formula_id": "formula_3", "formula_text": "#Count = ThreshFilter(B, S)(5)" }, { "formula_coordinates": [ 6, 372.32, 374.31, 172.79, 73.95 ], "formula_id": "formula_4", "formula_text": "MAE = 1 J J j=1 c * j -c j (6) NMAE = 1 J J j=1 c * j -c j c * j (7" }, { "formula_coordinates": [ 6, 541.24, 428.67, 3.87, 8.64 ], "formula_id": "formula_5", "formula_text": ")" } ]
2024-03-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b34", "b18", "b47", "b50", "b51", "b11" ], "table_ref": [], "text": "Layout is an essential part of graphic design, where the aesthetic appeal relies on the harmonious arrangement and selection of visual elements such as logos and texts. In real-world creative workflows, such as posters [13,36] and magazines [20,49] creation, designers typically work on a given subject; for example, creating an advertising poster of a specific product. We call layout generation under such conditions content-aware layout generation, where the goal is to generate diverse yet plausible arrangements of element bounding boxes that harmonize with the given background image (canvas). Recent studies [52,53] show that generative models can produce content-aware layouts that respect aesthetic principles, such as avoiding overlaps [13]. However, generated layouts often still suffer from artifacts, including misaligned underlay embellishment and text elements. We hypothesize that current approaches based solely on generative models do not scale due to the scarcity of highly structured layout data. Unlike public images on the Web, 1 Our project page is available at https://udonda.github.io/RALF/" }, { "figure_ref": [ "fig_0" ], "heading": "Input image Output layouts", "publication_ref": [ "b15", "b4", "b13", "b42", "b17", "b16", "b51" ], "table_ref": [], "text": "Retrieved examples curating a large dataset of layered graphic designs is not a viable solution since designers typically create their work in proprietary authoring tools, such as Adobe Illustrator [1].\nInspired by the fact that designers often refer to existing designs [17], we propose a retrieval-augmented generation method to address the challenges in the layout domain. Recent literature shows that retrieval augmentation helps in enhancing the generation quality of language models [6,15] and image synthesis [5,44], thanks to the ability to reference real examples in the limited data domain. We argue that retrieval augmentation plays an important role in mitigating the data scarcity problem in content-aware layout generation.\nWe build Retrieval-Augmented Layout TransFormer (RALF), which is an autoregressive generator capable of referencing external layout examples. RALF retrieves reference layouts by nearest neighbor search based on the appearance of the input and supplements the generation process (Fig. 1). Since the input canvas and retrieved layouts have different modalities, we use the cross-attention mechanism to augment the feature input to the generator. Although we build RALF with an autoregressive approach, retrieval augmentation is also effective in other generation approaches such as diffusion models [19], which we show in the experiments.\nWe evaluate our RALF on public benchmarks [18,53] and show that RALF outperforms state-of-the-art models in content-aware layout generation. Thanks to the retrieval capability, RALF requires less than half the training data to achieve the same performance as the baseline. We further show that our modular architecture can adapt to controllable generation tasks that impose various user-specified constraints, which is common in real-world workflow.\nWe summarize our contributions as follows: 1) We find that retrieval augmentation effectively addresses the data scarcity problem in content-aware layout generation. 
2) We propose a Retrieval-Augmented Layout Transformer (RALF) designed to integrate retrieval augmentation for layout generation tasks. 3) Our extensive evaluations show that our RALF successfully generates high-quality layouts under various scenarios and significantly outperforms baselines. We will make our code publicly available on acceptance." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Content-agnostic Layout Generation", "publication_ref": [ "b31", "b34", "b47", "b28", "b28", "b29", "b19", "b22", "b12", "b20", "b24", "b17", "b26", "b48", "b17", "b20", "b24" ], "table_ref": [], "text": "Content-agnostic layout generation, which aims at generating layouts without a specific input canvas, has been studied for a long time [2, 33,36,49]. The typical approach involves predicting the arrangement of elements, where each element has a tuple of attributes such as category, position, and size [30]. Recent approaches employ various types of neural networks-based generative models, such as generative adversarial networks (GAN) [25,30,31], variational autoencoders (VAE) [3,21,24], autoregressive models [14,22], non-autoregressive models [26], and diffusion models [9, 19,28,50]. Note that the retrieval augmentation discussed in this paper may not be directly applicable to the content-agnostic setup due to the lack of input queries.\nSeveral works consider user-specified design constraints such as \"a title is above the body\", which are often seen in real-world workflow. Such constraints are studied as controllable generation [19,22,25,26], where the model generates a complete layout from a partial or noisy layout. In this paper, we adapt the concept of controllable generation to the content-aware generation." }, { "figure_ref": [], "heading": "Content-aware Layout Generation", "publication_ref": [ "b50", "b51", "b6", "b8", "b45", "b16", "b5", "b7" ], "table_ref": [], "text": "Content-aware layout generation, relatively less studied compared to the content-agnostic setup, has seen notable progress. ContentGAN [52] first tackles to incorporate image semantics of input canvases. Subsequently, CGL-GAN [53] introduces a saliency map to a non-autoregressive decoder [8,10,47] for better subject representation. DS-GAN [18] proposes a CNN-LSTM framework. ICVT [7] employs a conditional VAE, predicting a category and bounding box autoregressively based on previously predicted elements. RADM [29] leverages a diffusion model and introduces modules to refine both visual-textual and textual-textual presentations. We note that we cannot compare RADM in our experiments because their text annotations are not available.\nCurrent approaches rely solely on generative models and may struggle with capturing sparse data distributions with limited training data. We use retrieval augmentation to mitigate this issue, and our experiments confirm its significant impact on enhancing content-aware generation." }, { "figure_ref": [], "heading": "Retrieval-Augmented Generation", "publication_ref": [ "b13", "b42", "b36" ], "table_ref": [], "text": "Retrieval augmentation [4-6, 15, 44] offers an orthogonal approach to enhance generative models without increasing network parameters or relying heavily on extensive training datasets. Generative models equipped with retrieval augmentation stop storing all relevant knowledge in their model parameters and instead use external memory via retrieving relevant information as needed. 
A common approach involves retrieving the k-nearest neighbors (k-NN) based on a pre-calculated embedding space as additional input. For example, REALM [15] introduces retrieval augmentation into language models by fetching the k-NN based on preceding tokens. In image generation, RDM [5] demonstrates that even a relatively compact network can achieve state-of-the-art performance with retrieval augmentation. KNN-Diffusion [44] shows its capacity to generate out-of-distribution images. The unique challenge in content-aware layout generation involves encoding both image and layout modalities, which we address using a cross-attention mechanism.\nGiven that tasks related to graphic design, such as content-aware layout generation, often suffer from data scarcity problems [38], we believe that retrieval augmentation is particularly beneficial. It provides an efficient training method that leverages existing data more effectively." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b37", "b38" ], "table_ref": [], "text": "Let X and Y be the sets of canvas images and graphic layouts, respectively. We use I ∈ X and L ∈ Y to represent the canvas and layout, respectively. The canvas I ∈ R H×W ×3 and layout L are paired data, where H and W represent the height and width, respectively. We obtain a saliency map S ∈ R H×W ×1 from the canvas by an off-the-shelf saliency detection method [39,40]. We denote the layout by L = {l 1 , . . . , l T } = {(c 1 , b 1 ), . . . , (c T , b T )}, where b i ∈ [0, 1] 4 indicates the bounding box of the i-th element in normalized coordinates, c i ∈ {1, . . . , C} indicates its element category, and T indicates the number of elements in L." }, { "figure_ref": [ "fig_2" ], "heading": "Retrieval-Augmented Layout Transformer", "publication_ref": [ "b12", "b20", "b39", "b51", "b14", "b30", "b20", "b12", "b20", "b16", "b51", "b5" ], "table_ref": [], "text": "We approach content-aware layout generation by referencing similar examples and generating layout tokens Ẑ autoregressively. Following content-agnostic layout generation works [14,22], we quantize each value in the bounding box of the i-th element b i and obtain the representation [x i , y i , w i , h i ] ∈ {1, . . . , B} 4 , where B denotes the number of bins. Here, x, y, w, and h correspond to the tokens for the center coordinates, width, and height of the bounding box. We represent an overall layout as a flattened 1D sequence Z = (bos, c 1 , x 1 , y 1 , . . . , w T , h T , eos) ∈ N 5T +2 , where bos and eos are special tokens that denote the start and end of the sequence. We model the joint probability distribution of Z given I and S as a product over a series of conditional distributions using the chain rule:\nP θ (Z|I, S) = ∏_{t=2}^{5T+2} P θ (Z t |Z <t , I, S), (1)\nwhere θ denotes the parameters of our model. Similarly to autoregressive language modeling [41], the model is trained to maximize the log-likelihood of the next-token prediction.\nOur proposed model consists of four modules: image encoder, retrieval augmentation module, layout decoder, and optional constraint encoder, as illustrated in Fig. 2. We describe each module below. Image encoder. The image encoder E takes in the input canvas I and the saliency map S, and outputs the feature f I = E(I, S) ∈ R H ′ W ′ ×d , where H ′ and W ′ represent the down-sampled height and width, and d represents the depth of the feature map.
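To make the tokenization and sequence modeling of Sec. 3.2 above concrete, the following is a minimal sketch of flattening a layout into the 5T + 2 token sequence Z = (bos, c1, x1, y1, w1, h1, ..., wT, hT, eos) with B = 128 coordinate bins. The vocabulary offsets, special-token ids, and function name are illustrative assumptions, not the authors' released implementation.

```python
import torch

# Sketch of the layout-to-token flattening in Sec. 3.2: each element becomes
# (category, x, y, w, h) tokens; the whole layout is one sequence of length
# 5T + 2 bracketed by bos/eos. Offsets and special ids are assumptions.
def tokenize_layout(categories, boxes, B=128, C=3, bos=0, eos=1):
    """categories: (T,) long tensor with values in [1, C];
    boxes: (T, 4) float tensor of (cx, cy, w, h) in [0, 1]."""
    cat_offset = 2               # ids 0/1 reserved here for bos/eos
    geo_offset = cat_offset + C  # geometry bins follow the category ids
    bins = torch.clamp((boxes * B).long(), min=0, max=B - 1)
    tokens = [bos]
    for c, (x, y, w, h) in zip(categories.tolist(), bins.tolist()):
        tokens += [cat_offset + c - 1,
                   geo_offset + x, geo_offset + y,
                   geo_offset + w, geo_offset + h]
    tokens.append(eos)
    return torch.tensor(tokens, dtype=torch.long)  # shape: (5T + 2,)
```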
This part is common among content-aware approaches, and we follow the architecture of CGL-GAN [53]. The encoder builds on a CNN backbone and a Transformer encoder. The CNN backbone, typically ResNet50 [16], uses a multi-scale feature pyramid network [32]. The Transformer encoder further refines the encoded image feature. Retrieval augmentation module. The augmentation module transforms the image feature f I into the augmented feature f R . We describe the details in Sec. 3.3. Constraint encoder. Optionally, our model allows control of the layout generation process by additional instruction on desired layout properties such as element types, coordinates, or inter-element relationships. We adopt the Transformer encoder-based model [22] to encode the instructions into a fixed-dimensional vector f const ∈ R n×d , where n denotes the length of the task-specific sequence. f const is then concatenated with the augmented feature f R and fed to the layout decoder.\nLayout decoder. Our model autoregressively generates a layout Ẑ using a Transformer decoder. Starting from the bos token, our decoder iteratively produces output tokens with cross attention to the side feature sequence f R from the retrieval augmentation module and the optional sequence f const from the constraint encoder. A key distinction between our model and previous approaches is that we flatten all the attributes into a single sequence for full attention during generation, which is shown effective in content-agnostic layout generation [14,22]. As we discuss in Eq. (1), we generate layout tokens one by one in 5T +1 steps using attributewise attention. In contrast, GAN-based models [18,53] generate in one step, and ICVT [7] generates in T steps using element-wise attention." }, { "figure_ref": [], "heading": "Retrieval Augmentation", "publication_ref": [], "table_ref": [], "text": "We introduce retrieval augmentation to effectively learn the structured layout domain with limited training data. The retrieval augmentation module consists of the following three stages: 1) retrieving reference layouts from a database, 2) encoding these layouts into a feature representation, and 3) fusing all features into the final augmented feature f R . We elaborate on the details of these three stages. Layout retrieval. Given the input canvas I, we retrieve a set of useful layout examples { L1 , . . . , LK }, where K ∈ N. A challenge lies in the absence of joint embedding for imagelayout retrieval, unlike the CLIP [42] embedding for imagetext retrieval. We hypothesize that given an image-layout pair ( Ĩ, L), L is more likely to be useful when Ĩ is similar to I. From a large dataset of image-layout pairs, we retrieve top-K pairs based on image similarity between I and Ĩ, and extract layouts from these pairs. The choice of the image similarity measure influences the generation quality, as we will discuss in Sec. 4.7 in detail. We use DreamSim [12], which better aligns with human perception of image similarity in diverse aspects such as object appearance, viewing angles, camera poses, and overall layout. All samples from the training split serve as the retrieval source for both training and inference, excluding the query sample from the retrieval source during training to prevent ground-truth leakage. Encoding retrieved layouts.\nEach retrieved layout { L1 , . . . , LK } is encoded into representative features fL = { f1 , . . . , fK } ∈ R K×d , since each layout has a different number of elements. 
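The layout-retrieval step described above is essentially a nearest-neighbor lookup over pre-computed canvas embeddings. The sketch below assumes the DreamSim (or any other) embeddings of the training canvases have already been extracted offline into `db_embeds`; the names, the cosine-similarity choice, and the masking of the query index to avoid ground-truth leakage during training are our assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def retrieve_layouts(query_embed, db_embeds, db_layouts, K=16, exclude_idx=None):
    """query_embed: (d,) embedding of the input canvas;
    db_embeds: (N, d) embeddings of the retrieval database;
    db_layouts: list of N layouts paired with those canvases."""
    q = F.normalize(query_embed[None], dim=-1)   # (1, d)
    db = F.normalize(db_embeds, dim=-1)          # (N, d)
    sim = (q @ db.T).squeeze(0)                  # cosine similarity, (N,)
    if exclude_idx is not None:                  # drop the query itself during
        sim[exclude_idx] = float("-inf")         # training (no GT leakage)
    topk = sim.topk(min(K, sim.numel())).indices
    return [db_layouts[i] for i in topk.tolist()]
```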
A layout encoder F embeds each retrieved layout Lk into the representative feature, denoted as fk = F ( Lk ) ∈ R d . These extracted features are then concatenated into fL . Following [25], we pre-train F in a self-supervised manner and freeze F thereafter. Feature augmentation. The last step yields the final augmented feature f R by concatenating three features:\nf R = Concatenate(f I , fL , f C ) ∈ R (2H ′ W ′ +K)×d , (2)\nwhere f C is a cross-attended feature between f I and fL :\nf C = CrossAttn(f I , fL ) ∈ R H ′ W ′ ×d .\nIn the cross-attention mechanism, the image feature acts as the query, and the retrieved layout feature serves as both the key and value. This design facilitates an interaction between the input canvas and the reference layouts. We then feed the augmented feature f R into the layout generator. We will validate the design of the augmentation module in Sec. 4.7." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We evaluate our RALF in the unconstrained generation as well as in a variety of constrained generation tasks." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b51", "b16", "b16", "b51", "b44" ], "table_ref": [], "text": "We use two publicly available datasets, CGL [53] and PKU [18], which mainly cover e-commerce posters such as cosmetics and clothing. PKU includes three element categories: logo, text, and underlay, and CGL additionally contains embellishment elements. CGL comprises 60,548 annotated posters, i.e., layouts and corresponding images, and 1,000 unannotated canvases, i.e., images only. PKU contains 9,974 annotated posters and 905 unannotated canvases. To obtain canvas-layout pairs for the training, previous works [18,53] employ image inpainting to remove the visual elements. However, CGL does not provide inpainted posters, and PKU provides inpainted posters with undesirable artifacts. We inpaint the posters of both CGL and PKU using a state-of-the-art inpainting technique [46].\nThe original datasets do not provide validation and test splits for annotated posters. This limitation prevents fair hyper-parameter tuning, adopting evaluation metrics relying on ground-truth annotations, and the quantitative evaluation of constrained generation tasks since we cannot create constraints from the annotations. To overcome these issues, we create new dataset splits with a train/val/test ratio of roughly 8:1:1. For CGL, we allocate 48,544/6,002/6,002 annotated posters for train/val/test. For PKU, after excluding posters with more than 11 elements and those with elements occupying less than 0.1% of the canvas, we designate 7,735/1,000/1,000 posters for train/val/test. Both datasets have a maximum of 10 elements. For the evaluations, we use the annotated and unannotated test splits." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b16", "b51", "b25" ], "table_ref": [], "text": "Inspired by the previous works [18,53], we employ five metrics that evaluate the layout quality both in terms of graphic and content aspects.\nGraphic metrics. These metrics evaluate the quality of the generated layouts without considering the canvas. FID (↓) for layout [25,27] has been a primal metric in contentagnostic layout generation, and we adopt this metric in our content-aware scenario. Underlay effectiveness (Und ↑) calculates the proportion of valid underlay elements to the total underlay elements. 
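For the feature-augmentation step of Eq. (2) above, a minimal module sketch is given below: the canvas feature serves as the query of a cross-attention layer whose keys and values are the retrieved-layout features, and the three features are concatenated along the sequence axis. The embedding size and head count are placeholders, not the released configuration.

```python
import torch
from torch import nn

class RetrievalAugmentation(nn.Module):
    """Sketch of Eq. (2): f_R = Concat(f_I, f_L, CrossAttn(f_I, f_L))."""
    def __init__(self, d=256, num_heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d, num_heads, batch_first=True)

    def forward(self, f_img, f_layouts):
        # f_img: (B, H'W', d) canvas feature; f_layouts: (B, K, d) retrieved layouts.
        f_c, _ = self.cross_attn(query=f_img, key=f_layouts, value=f_layouts)
        return torch.cat([f_img, f_layouts, f_c], dim=1)  # (B, 2*H'W' + K, d)

# Shape check with placeholder sizes (H'W' = 165, K = 16).
f_r = RetrievalAugmentation()(torch.randn(2, 165, 256), torch.randn(2, 16, 256))
assert f_r.shape == (2, 2 * 165 + 16, 256)
```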
An underlay element is regarded as valid and scores 1 if it entirely covers a non-underlay element; otherwise, it scores 0. Overlay (Ove ↓) represents the average Intersection over Union of all element pairs, excluding underlay elements.\nContent metrics. These metrics evaluate whether the generated layouts harmonize with the canvas. Occlusion (Occ ↓) computes the average saliency value in the overlapping region between the saliency map S and the layout elements. Readability score (Rea ↓) evaluates the non-flatness of text elements by calculating gradients in the image space along both vertical and horizontal axes within these elements." }, { "figure_ref": [], "heading": "Baseline Methods", "publication_ref": [ "b51", "b16", "b5" ], "table_ref": [], "text": "We compare the following methods in the experiments. CGL-GAN [53] is a non-autoregressive encoder-decoder model employing a Transformer architecture; it takes an empty or user-specified layout constraint into the decoder. ICVT [7] is an autoregressive model that combines a Transformer with a conditional VAE." }, { "figure_ref": [], "heading": "DS-GAN", "publication_ref": [ "b17" ], "table_ref": [], "text": "DS-GAN [18] is a non-autoregressive model using a CNN-LSTM architecture; it is only applicable to the unconstrained task because of its internal sorting algorithm. LayoutDM † [19] is a discrete state-space diffusion model that can handle many constrained generation tasks. Since the model is originally designed for content-agnostic layout generation, we extend it to accept an input image.\n\nAutoreg Baseline is the model described in Sec. 3.2 and is equivalent to our RALF without retrieval augmentation. RALF is our model described in Sec. 3. Real Data is the ground truth, which can be considered the upper bound. Since we draw the samples from the test split, we calculate the FID score using the validation split.\n\nTop-1 Retrieval is a nearest-neighbor layout without any generator, which can be considered a retrieval-only baseline." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b16", "b51" ], "table_ref": [], "text": "We re-implement most of the baselines since there are few official implementations publicly available, except for DS-GAN [18]. In RALF, we retrieve K = 16 nearest neighbor layouts. Following CGL-GAN [53], the height and width of the input image are set to 350 and 240, respectively. We generate layouts over three independent trials and report the average of the metrics. We describe the details of the training and network configuration in the appendix." }, { "figure_ref": [ "fig_3" ], "heading": "Unconstrained Generation", "publication_ref": [ "b46" ], "table_ref": [ "tab_0", "tab_8", "tab_1", "tab_0", "tab_2", "tab_4", "tab_4" ], "text": "Baseline comparison. Table 1 presents the quantitative results on the annotated test split without user constraints, and Table 2 summarizes the results on the unannotated test split. Compared with Table 1, all the models exhibit slight performance degradation in PKU due to the domain gap problem [48] between inpainted canvases and clean canvases. We conjecture that the significant performance degradation in CGL comes from non-negligible spatial shifts in subject distributions, which we demonstrate in the appendix. Effectiveness of retrieval augmentation. Tables 1 and 2 demonstrate that retrieval augmentation significantly enhances the Autoreg Baseline. The only exception is the Occ metric on CGL in Table 1, where the Autoreg Baseline already closely matches the Real Data metrics. Qualitative results. We show the qualitative comparison in Fig. 3. The results demonstrate our RALF's ability to generate well-fitted, non-overlapping, and rational layouts.
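As a concrete reading of the two graphic metrics defined above, the sketch below scores underlay effectiveness (an underlay is valid only if it entirely covers some non-underlay element) and overlay (mean pairwise IoU over non-underlay elements). Boxes are assumed to be (x1, y1, x2, y2) in normalized coordinates; this is a simplified reference, not the official evaluation code.

```python
def iou(a, b):
    """a, b: (x1, y1, x2, y2) boxes in [0, 1]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda t: max(0.0, t[2] - t[0]) * max(0.0, t[3] - t[1])
    return inter / (area(a) + area(b) - inter + 1e-8)

def contains(under, other):
    """True if the underlay box entirely covers the other box."""
    return (under[0] <= other[0] and under[1] <= other[1]
            and under[2] >= other[2] and under[3] >= other[3])

def underlay_effectiveness(boxes, cats, underlay_id):
    unders = [b for b, c in zip(boxes, cats) if c == underlay_id]
    others = [b for b, c in zip(boxes, cats) if c != underlay_id]
    if not unders:
        return None  # undefined when a layout has no underlay elements
    return sum(any(contains(u, o) for o in others) for u in unders) / len(unders)

def overlay(boxes, cats, underlay_id):
    non = [b for b, c in zip(boxes, cats) if c != underlay_id]
    pairs = [(i, j) for i in range(len(non)) for j in range(i + 1, len(non))]
    return sum(iou(non[i], non[j]) for i, j in pairs) / len(pairs) if pairs else 0.0
```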
In contrast, the baseline methods often produce misaligned underlay embellishments and overlapped text elements, as indicated by red arrows. We also indicate undesirable elements that appear on a salient region by green arrows.\nTraining dataset size. Here, we show that retrieval augmentation is effective regardless of the training dataset size in Fig. 4. Notably, our RALF trained on just 3,000 samples outperforms the Autoreg Baseline trained on the full 7,734 samples in PKU.\nRetrieval size K. We show that retrieval augmentation is not highly sensitive to the number of retrieved layouts K. As we plot in Fig. 5, retrieval augmentation significantly enhances the performance even with a single retrieved layout compared to the baseline. The plot indicates that FID moderately improves as we increase the retrieval size K. We examine how different K affects the generated results in Fig. 6. The result of K = 1 shows that the generated layout is similar to the reference layouts, while the result of K = 16 shows that a variety of layouts are generated.\nRetrieval augmentation for other generators. While our RALF is an autoregressive generator, we show that retrieval augmentation also benefits other generative models for content-aware layout generation. Here, we adapt CGL-GAN and LayoutDM † with retrieval augmentation and evaluate the performance. Table 3 summarizes the results. CGL-GAN and LayoutDM † combined with our retrieval augmentation consistently improve many evaluation metrics. We provide additional results in the appendix.\nOut-of-domain generalization. Table 4 summarizes the results of a cross-evaluation setup where we use different datasets for training and testing. For example, we use the database and training data from CGL and evaluate PKU in the upper half of Table 4. Remarkably, even in this out-of-domain setting, retrieval augmentation shows notable improvement and robust generalizability." }, { "figure_ref": [ "fig_5" ], "heading": "Constrained Generation", "publication_ref": [ "b20", "b41" ], "table_ref": [ "tab_5" ], "text": "Following the task setup of content-agnostic generation [22], we evaluate several methods in the following constrained tasks in content-aware generation: Category → Size + Position (C → S + P) takes in element types and generates the sizes and positions for each element. Category + Size → Position (C + S → P) generates element positions based on given element categories and sizes.\nCompletion generates a complete layout using partially placed elements.\nRefinement corrects cluttered layouts where elements are perturbed from the ground truth based on a normal distribution with mean 0 and standard deviation 0.01, following [43].\nRelationship is conditioned on both element types and their spatial relationships, determined by the size and position of element pairs. We randomly use 10% of these relationships in our experiments, following [25]. Input constraints and generated examples for these tasks are illustrated in Fig. 7. Baseline comparison. Table 5 summarizes constrained generation results. The results indicate that RALF is effective even for constrained generation tasks. For tasks such as C + S → P and Refinement, RALF shows notable improvement in the FID metric. This suggests that referencing authentic examples to understand element relationships enhances position prediction accuracy. Overall, the results highlight RALF's capability to significantly augment the generative performance over the baseline approach."
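Two of the constrained tasks above are straightforward to reproduce as input constructions: Refinement perturbs the ground-truth boxes with zero-mean Gaussian noise of standard deviation 0.01, and C → S + P conditions only on element categories. The clamping to [0, 1] and the token scheme are our own assumptions for illustration.

```python
import torch

def make_refinement_input(boxes, std=0.01):
    """boxes: (T, 4) ground-truth boxes in [0, 1]; returns the noisy layout
    that the model must correct in the Refinement task."""
    return (boxes + torch.randn_like(boxes) * std).clamp(0.0, 1.0)

def make_category_constraint(categories):
    """Serialize a C -> S + P constraint: element types are given, while sizes
    and positions must be generated. The token strings are illustrative."""
    return ["bos"] + [f"cat:{int(c)}" for c in categories] + ["eos"]
```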
}, { "figure_ref": [ "fig_6", "fig_7" ], "heading": "Ablation Study", "publication_ref": [ "b49" ], "table_ref": [ "tab_6" ], "text": "We investigate our design choices in our retrieval augmentation proposed in Sec. 3.3. Layout retrieval. We employ an image feature extractor to compute the similarity between canvases. We provide a brief overview of possible choices. DreamSim [12] captures diverse aspects of the similarity simultaneously. LPIPS [51] focuses on low-level appearance similarity. CLIP [42] focuses on semantic similarity. Saliency focuses on spatial similarity using the saliency map. We obtain embeddings for similarity computation by down-sampling and flattening S. Random serves as a naïve baseline by randomly sampling layouts without focusing on image similarity.\nWe train our RALF with each choice and assess the performance. Figure 8 plots FID and Readability score for each retrieval method, and Fig. 9 presents some retrieved examples. DreamSim shows the best balance in the graphic and content metrics. Random achieves a reasonable balance, suggesting that referring to real layouts is crucial. In comparison, we conjecture that increasing the chances of retrieving a more suitable reference further boosts the generation quality. Feature augmentation. We explore the design of our feature augmentation module, as detailed in Table 6. What types of features to fuse? RALF combines three features in Eq. (2). We observe that dropping some of the features, as in scenarios (B) and (C), leads to a slight deterioration of the performance. We try adding features of the top-K retrieved images fI ∈ R KH ′ W ′ ×d that are encoded by the image encoder from the retrieved canvas. However, adding fI results in decreased performance, as shown in (D).\nWhere to apply? Our model first applies the Transformer encoder and then retrieval augmentation to the image feature (A). We try another design (E), which places the augmentation module before the Transformer encoder, however, this results in worse readability and underlay metrics in exchange for the slight improvement in FID." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b9" ], "table_ref": [ "tab_0" ], "text": "Limitations. We acknowledge two limitations as follows: 1) Evaluation of content metrics: The current content metrics assume that well-designed layouts avoid placing elements over salient or cluttered areas. If a counterexample exists, the content metrics may not adequately measure layout quality. Also, the graphic metrics can be easily fooled by a real Features include the input canvas feature (fI), retrieved layouts feature ( fL), cross-attended feature (fC), and retrieved images feature ( fI). The full setting of our model (A) is described in Eq. (2).\nexample, as evidenced by the FID score of the Top-1 baseline in Table 1. 2) Feature extraction of retrieved layouts: The layout encoder depends on the number of element categories in the dataset. For real-world creative scenarios, extending to an unlimited number of categories, i.e. an open-vocabulary setting [11], would be necessary. Future work. We outline two prospective directions to enhance retrieval augmentation for content-aware generation further: 1) Ensemble approaches: integrating multiple retrieval results could potentially improve the generation quality. 2) Diversifying retrieval modalities: exploring layout retrieval using alternative modalities, such as language, could widen the application scope. 
Yet, generating a whole poster beyond bounding boxes, such as image content, text copies, or styling attributes, remains challenging due to the limited training data for layered graphic designs. Even for such a task, we expect that the retrieval augmentation approach could alleviate the data scarcity problem. Potential societal impacts. As common in any generative models, our RALF may unintentionally produce counterfeit advertisements or magazine layouts, posing risks of deception and dissemination of misleading information. " }, { "figure_ref": [], "heading": "C. Dataset Preprocessing", "publication_ref": [ "b16", "b16", "b51" ], "table_ref": [], "text": "We demonstrate the importance of adequately preprocessing annotated poster images in Fig. A. Layout annotations in existing datasets sometimes exhibit inaccuracies for some underlying factors, including the semi-automatic collection process using object detection models [18] as shown in (a) and (b). The inaccuracy severely harms the image inpainting quality when we fully depend on the annotations, as shown in (c). To cope with the inaccuracy, we slightly dilate the target region for inpainting and get better results with fewer We observe that about 20% of the original inpainted images in PKU contain significant artifacts.\nWe plot the number of layout elements for each poster in While we follow previous works [18,53] to use a saliency map, we might be able to simplify our image encoder." }, { "figure_ref": [], "heading": "D. Additional Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Spatial distribution shift.", "publication_ref": [ "b29", "b29", "b16", "b33", "b33", "b51", "b43" ], "table_ref": [ "tab_10" ], "text": "Comprehensive quantitative comparison. We additionally adopt five metrics. Graphic metrics. Alignment (Align ↓) [25,31] computes how well the elements are aligned with each other. For detailed calculation, please refer to [25,31]. Loose underlay effectiveness (Und L ↑) [18] also calculates the proportion of the total area of valid underlay elements to the total of underlay and non-underlay elements. Note that we define this loose metric as Und L ↑ to distinguish it from the strict underlay effectiveness Und S ↑ introduced in the main manuscript. Density (Den ↑) and Coverage (Cov ↑) [35] compute fidelity and diversity aspects of the generated layouts against groundtruth layouts. Please refer to [35] for more details.\nContent metrics. Salient consistency (R shm ↓) [53] computes the Euclidean distance between the output logits of the canvases with or without layout regions masked using a pre-trained VGG16 [45].\nTables D and E present the quantitative result on the annotated test split without user constraints on the PKU and CGL datasets, respectively. RALF notably improves Density and Coverage metrics, indicating that RALF can generate better layouts in terms of both fidelity and diversity. RALF does not achieve the best score regarding R shm and Alignment. However, these metrics may not be very reliable since the best scores for these metrics largely deviate from the scores for Real-Data, unlike other metrics. Retrieval augmentation for baseline method. Table F shows the results of retrieval augmentation for CGL-GAN and LayoutDM † . Even for constrained generation tasks, retrieval augmentation achieves a better quality of generation for other generators on almost all metrics. Impact on changing #Dim in layout decoder. 
" }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "We would like to thank Mayu Otani, Xueting Wang, Seiji Kurokoshi, and Atsushi Honda for their insightful feedback. This research was partly supported by JSPS KAKENHI 22KJ1014." }, { "figure_ref": [], "heading": "Appendix Table of contents:", "publication_ref": [], "table_ref": [], "text": "• Section A: Code Availability • Section B: Implementation Details • Section C: Dataset Preprocessing • Section D: Additional Results" }, { "figure_ref": [], "heading": "A. Code Availability", "publication_ref": [], "table_ref": [], "text": "We will make our code publicly available on acceptance." }, { "figure_ref": [], "heading": "B. Implementation Details", "publication_ref": [ "b30", "b16", "b45", "b20", "b35", "b32" ], "table_ref": [], "text": "Architecture details. Our RALF consists of four modules: the image encoder, retrieval augmentation, layout decoder, and optional constraint encoder. Table A provides the number of parameters of these modules. Image encoder consists of ResNet-50-FPN [32] and the Transformer encoder. We obtain the saliency map following the approach in DS-GAN [18]. Retrieval augmentation. We implement the retrieval part using faiss [23]. The layout encoder for retrieved layouts consists of the Transformer encoder and a feed-forward network, which adapts the feature map size of retrieved layouts to the size of the layout decoder. Before training, we pretrain the layout encoder for each dataset and extract features over each training dataset to construct the retrieval database. We note that the parameters of the layout encoder (1.59M) are excluded from the total parameters of RALF since they are set with the retrieval database.\nTo calculate a cross-attended feature, the image feature acts as the query, and the retrieved layout feature serves as both the key and value. We use multi-head attention [47] as our cross-attention layer. The effectiveness of the crossattended feature is demonstrated in the comparison of scenarios (B) and (C) in Table 6 in the main paper. Layout decoder. We employ the Transformer decoder. The configurations of the Transformer layers are as follows: 6 layers, 8 attention heads, 256 embedding dimensions, 1,024 hidden dimensions, and 0.1 dropout rate. The size of bins for the layout tokenizer is set to 128. In the inference phase, for the relationship task, we use a decoding space restriction mechanism [22], which aims to prune the predicted tokens that violate a user-specified constraint. Training details. We implemente RALF in PyTorch [37] and train for 50 and 70 epochs with AdamW optimizer [34] for the PKU and CGL datasets, respectively. The training time is about 4 hours and 20 minutes for the PKU dataset and 18 hours for the CGL dataset on a single A100 GPU. We divide the learning rate by 10 after 70% of the total epoch elapsed. We set the batch size, learning rate, weight decay, and gradient norm to 32, 10 -4 , 10 -4 , and 10 -1 , respectively. Testing details. We generate layouts on three independent trials and report the average of the metrics. We use top-k sampling for all the models that rely on sampling in logit space. We set k and temperature to 5 and 1.0, respectively.\nOther baselines. For the training of baseline methods, we follow the original training setting referring to their papers as much as possible. There are some exceptions for a fair comparison. 
For example, the numbers of embedding dimensions and hidden dimensions in the Transformer are adjusted to roughly match the number of parameters across models. We use ResNet-50-FPN as the image encoder for all of our baseline methods." } ]
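The testing details above mention top-k sampling with k = 5 and temperature 1.0; a standalone helper in that spirit is sketched below. It is a generic implementation of top-k sampling, not the released decoding loop.

```python
import torch

@torch.no_grad()
def sample_next_token(logits, k=5, temperature=1.0):
    """logits: (vocab,) scores for the next layout token."""
    logits = logits / temperature
    top_vals, top_idx = logits.topk(k)          # keep only the k best candidates
    probs = torch.softmax(top_vals, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)
    return top_idx.gather(-1, choice)           # (1,) sampled token id
```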
Content-aware graphic layout generation aims to automatically arrange visual elements along with a given content, such as an e-commerce product image. In this paper, we argue that the current layout generation approaches suffer from the limited training data for the high-dimensional layout structure. We show that a simple retrieval augmentation can significantly improve the generation quality. Our model, which is named Retrieval-Augmented Layout Transformer (RALF), retrieves nearest neighbor layout examples based on an input image and feeds these results into an autoregressive generator. Our model can apply retrieval augmentation to various controllable generation tasks and yield high-quality layouts within a unified architecture. Our extensive experiments show that RALF successfully generates content-aware layouts in both constrained and unconstrained settings and significantly outperforms the baselines.
Retrieval-Augmented Layout Transformer for Content-Aware Layout Generation
[ { "figure_caption": "Figure 1 .1Figure 1. Retrieval-augmented content-aware layout generation. We retrieve nearest neighbor examples based on the input image and use them as a reference to augment the generation process.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Overview of Retrieval-Augmented Layout Transformer (RALF). RALF takes a canvas image and a saliency map as input, and then autoregressively generates a layout along with the input image. Our model uses (a) retrieval augmentation that incorporates useful examples to better capture the relationship between the image and the layout, and (b) constraint serialization, an optional module that encodes user-specified requirements, enabling the generation of layouts that adhere to specific requirements for controllable generation.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Visual comparison of unconstrained generation with baselines. Input canvases are selected from the unannotated split.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .Figure 6 .46Figure 4. FID over the training dataset size (#TrainingDataset), which has up to 7,734 samples.", "figure_data": "", "figure_id": "fig_4", "figure_label": "46", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Examples of input constraints and generated results for each constrained generation task. Quotation marks indicate the constraints.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Comparison across different retrieval methods on the PKU test split. We report FID as the representative graphic metric and Readability score as the content metric.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. Qualitative comparison of different retrieval methods. We show the query and the top-3 retrieved examples for each method.", "figure_data": "", "figure_id": "fig_7", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure C .CFigure B. Comparison of inpainting for the dataset preprocessing.", "figure_data": "", "figure_id": "fig_8", "figure_label": "C", "figure_type": "figure" }, { "figure_caption": "FigureFigure D. Visual comparison of canvases and saliency maps between the test and unannotated test split of the CGL dataset. Canvases are randomly selected from each split. The averaged saliency map is produced by computing the spatial average of all saliency maps of each split. Mean represents the spatial average of all saliency maps of each split.", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. C. Although we filter out posters with more than 11 layout elements, it only accounts for about 2% of the original dataset.", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure D shows the visual comparison of canvases and saliency maps between the test and unannotated test split of CGL. We see that the proportion of space occupied by the saliency map is different according to the different values of Mean. As a result, this difference causes the performance degradation in CGL. 
Inference speed.", "figure_data": "", "figure_id": "fig_11", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure E .EFigure E. Visual comparison of constrained generation with baselines on the PKU annotated test split.", "figure_data": "", "figure_id": "fig_12", "figure_label": "E", "figure_type": "figure" }, { "figure_caption": "Figure F .FFigure F. Visual comparison of constrained generation with baselines on the CGL annotated test split.", "figure_data": "", "figure_id": "fig_13", "figure_label": "F", "figure_type": "figure" }, { "figure_caption": "is a non-autoregressive encoder-decoder model employing a Transformer architecture. The model takes in the empty or layout constraint to the decoder. Rea ↓ Und ↑ Ove ↓ FID↓ Occ ↓ Rea ↓ Und ↑ Ove ↓ FID↓ Unconstrained generation results on the PKU and CGL test split. Our RALF outperforms the Autoreg Baseline and achieves the best score on almost all metrics. For reference, we show the Real Data and the Top-1 Retrieval baselines, which do not have a generator.", "figure_data": "PKUCGLMethod#ParamsContentGraphicContentGraphicOcc ↓ Real Data -0.112 0.01020.990.0009 1.580.125 0.01700.980.0002 0.79Top-1 Retrieval-0.212 0.02180.990.0021.430.214 0.02660.990.0005 0.93CGL-GAN [53]41M0.138 0.01640.410.074 34.51 0.157 0.02370.290.161 66.75DS-GAN [18]30M0.142 0.01690.630.027 11.80 0.141 0.02290.450.057 41.57ICVT [7]50M0.146 0.01850.490.318 39.13 0.124 0.02050.420.310 65.34LayoutDM † [19]43M0.150 0.01920.410.190 27.09 0.127 0.01920.820.0202.36Autoreg Baseline41M0.134 0.01640.430.019 13.59 0.125 0.01900.920.0112.89RALF (Ours)43M0.119 0.01280.920.0083.450.125 0.01800.980.0041.32PKU unannotatedCGL unannotatedMethodContentGraphicContentGraphicCGL-GAN0.191 0.03120.320.069 0.481 0.05680.260.269DS-GAN0.180 0.03010.520.026 0.435 0.05630.290.071ICVT0.189 0.03170.480.292 0.446 0.04250.670.301LayoutDM †0.165 0.02850.380.201 0.421 0.05060.490.069Autoreg Baseline 0.154 0.02740.350.022 0.384 0.04270.760.058RALF (Ours)0.133 0.02310.870.018 0.336 0.03970.930.027", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Unconstrained generation results on the PKU and CGL unannotated test split, which is real data without inpainting artifacts.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Retrieval augmentation for CGL-GAN and LayoutDM † on the PKU test split.", "figure_data": "MethodRetrieval Occ ↓ Rea ↓ Und ↑ Ove ↓ FID↓CGL-GAN0.138 0.01640.410.074 34.51CGL-GAN✓0.144 0.01640.630.039 13.28LayoutDM †0.150 0.01920.410.190 27.09LayoutDM †✓0.123 0.01440.510.091 10.03", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Table1presents the quantitative results on the annotated test split without user constraints. RALF achieves the best scores, except for the Occ metric of ICVT on CGL. Top-1 Retrieval, which almost disregards the given content, is unsuitable for the task, as we show deficient performance in content metrics.Table2summarizes results on the unannotated test split. RALF achieves the best scores in all the metrics. 
Compared with Table", "figure_data": "", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Generation", "figure_data": "CGL PKUAutoreg Baseline 0.176 0.0276 RALF (Ours) 0.144 0.02490.84 0.960.037 0.023PKU CGLAutoreg Baseline 0.341 0.0464 RALF (Ours) 0.286 0.03550.29 0.790.037 0.036", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "", "figure_data": "132 0.01580.480.038 11.47 0.140 0.02130.650.047 23.93LayoutDM †0.152 0.02010.460.172 20.56 0.127 0.01920.790.0263.39Autoreg Baseline 0.135 0.01670.430.028 10.48 0.124 0.01880.890.0151.36RALF (Ours)0.124 0.01380.900.0102.210.126 0.01800.970.0060.50C + S → PCGL-GAN0.129 0.01550.480.0439.110.129 0.02020.750.0276.96LayoutDM †0.143 0.01850.450.122 24.90 0.127 0.01900.820.0212.18Autoreg Baseline 0.137 0.01690.460.0285.460.127 0.01910.880.0130.47RALF (Ours)0.125 0.01380.870.0100.620.128 0.01850.960.0060.21CompletionCGL-GAN0.150 0.01740.430.061 25.67 0.174 0.02310.210.182 78.44LayoutDM †0.135 0.01750.350.134 21.70 0.127 0.01920.760.0203.19Autoreg Baseline 0.125 0.01610.420.0235.960.124 0.01850.910.0112.33RALF (Ours)0.120 0.01400.880.0121.580.126 0.01850.960.0051.04RefinementCGL-GAN0.122 0.01410.390.0906.400.124 0.01820.860.0241.20LayoutDM †0.115 0.01210.570.0082.860.127 0.01880.750.0181.98Autoreg Baseline 0.131 0.01710.410.0265.890.126 0.01830.890.0040.15RALF (Ours)0.113 0.01090.950.0040.130.126 0.01760.980.0020.14RelationshipAutoreg Baseline 0.140 0.01770.440.028 10.61 0.127 0.01890.880.0151.28RALF (Ours)0.122 0.01410.850.0092.230.126 0.01840.950.0060.55", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation study of RALF design on the PKU test split. The top two results are highlighted in bold and underline, respectively.", "figure_data": "0.134 0.01440.920.008 4.67", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Table B compares inference speeds. Compared to Autoreg Baseline, the total inference speed of RALF Inference time comparison on the PKU dataset. RALF consists of three components -feature extraction (DreamSim), layout retrieval (Retrieval), and layout generation (Network). The total inference time (Total) is the sum of these individual components.increases by about 35%. While the latency is produced, our RALF can enhance the quality of generation. Impact of a saliency map. We compare scenarios with and without a saliency map in TableCsince manually creating an inaccurate saliency map is unreasonable. The result shows that the presence of it has a negligible effect on performance.", "figure_data": "CGL-GAN LayoutDM † Autoreg BaselineRALF DreamSim Retrieval Network TotalTime [s]0.0120.4950.2250.0220.0310.2520.305", "figure_id": "tab_7", "figure_label": "B", "figure_type": "table" }, { "figure_caption": "Table G provides the results of RALF and Autoreg Baseline while changing the number of parameters in the layout decoder. We modify the number of features (#Dim) and hidden dim to four times the number of #Dim. RALF's performance Method Saliency map Occ ↓ Rea ↓ Und ↑ Ove ↓ FID ↓ Rea ↓ R shm ↓ Align ↓ Und L ↑ Und S ↑ Ove ↓ Den↑ Cov↑ FID↓ Unconstrained generation results on the PKU test split. Rea ↓ R shm ↓ Align ↓ Und L ↑ Und S Ove ↓ Den↑ Cov↑ FID↓", "figure_data": "Autoreg Baseline0.132 0.01690.450.021 11.78Autoreg Baseline✓0.134 0.01650.440.018 13.51RALF0.122 0.01290.900.0073.97RALF✓0.119 0.01290.920.0083.45Table C. 
Quantitative results without and with a saliency map.PKUMethodContentpeaks when #Dim is 256. Autoreg Baseline's performance GraphicOcc ↓ Real-Data 0.112 0.0102 Top1-Retrieval 0.212 0.0218 CGL-GAN [53] 0.138 0.0164 DS-GAN [18] 0.142 0.0169 ICVT [7] 0.146 0.0185 LayoutDM † [19] 0.150 0.0192 Autoreg Baseline 0.134 0.0164 RALF (Ours) 0.119 0.012913.94 16.33 14.32 14.95 13.92 13.06 14.43 14.110.00379 0.00371 0.00311 0.00347 0.00228 0.00298 0.00192 0.00267improves as #Dim increases, but the model with #Dim=768 still clearly underperforms RALF with #Dim=256. Thus, 0.99 0.99 0.0009 0.95 0.95 1.58 retrieval augmentation enables us to use a relatively compact 0.99 0.99 0.002 1.07 0.97 1.43 network for content-aware layout generation. This result 0.81 0.41 0.074 0.70 0.68 34.51 aligns with the trend observed in other domains, such as im-0.89 0.63 0.027 1.10 0.82 11.80 age generation [5]. We conjecture slight performance degra-0.63 0.49 0.318 0.35 0.40 39.13 dation as we increase #Dim over 256 in RALF is caused by 0.64 0.41 0.190 0.74 0.59 27.09 overfitting as we watch loss curves for training and valida-0.79 0.43 0.019 1.13 0.79 13.59 tion. Visual comparison on constrained generation. Figures E 0.98 0.92 0.008 1.25 0.97 3.45and F provide the qualitative comparisons of constrainedgeneration for the PKU and CGL datasets, respectively. Theresults demonstrate that our RALF successfully generateswell-fitted, non-overlapping, and rational layouts even inconstrained generation tasks.CGLMethodContentGraphicOcc ↓ Real-Data 0.125 0.017014.330.002400.990.980.0002 0.931.000.79Top1-Retrieval0.214 0.026616.020.002540.990.990.0005 1.010.900.93CGL-GAN [53]0.157 0.023714.120.003200.670.290.1610.310.28 66.75DS-GAN [18]0.141 0.022914.850.002570.710.450.0570.640.40 41.57ICVT [7]0.124 0.020513.400.003190.550.420.3100.160.22 65.34LayoutDM † [19] 0.127 0.019214.150.002420.920.820.0200.870.932.36Autoreg Baseline 0.125 0.019014.220.002340.970.920.0111.050.912.89RALF (Ours)0.125 0.018014.260.002360.990.980.0041.090.961.32", "figure_id": "tab_8", "figure_label": "D", "figure_type": "table" }, { "figure_caption": "Unconstrained generation results on the CGL test split. 
Rea ↓ Und ↑ Ove ↓ FID↓ Occ ↓ Rea ↓ Und ↑ Ove ↓ FID↓", "figure_data": "PKUCGLTaskMethodRetrievalContentGraphicContentGraphicReal Data Top-1 Retrieval CGL-GAN Occ ↓ Unconstraint 0.112 0.0102 0.212 0.0218 0.138 0.0164 CGL-GAN ✓ 0.144 0.01640.99 0.99 0.41 0.630.0009 1.58 0.002 1.43 0.074 34.51 0.157 0.0237 0.125 0.0170 0.214 0.0266 0.039 13.28 0.172 0.02450.98 0.99 0.29 0.420.0002 0.79 0.0005 0.93 0.161 66.75 0.157 60.67LayoutDM †0.150 0.01920.410.190 27.09 0.127 0.01920.820.0202.36LayoutDM †✓0.123 0.01440.510.091 10.03 0.126 0.01870.850.0191.97CGL-GAN0.132 0.01580.480.038 11.47 0.140 0.02130.650.047 23.93C → S + PCGL-GAN LayoutDM †✓0.140 0.0153 0.152 0.02010.66 0.460.030 10.23 0.138 0.0202 0.172 20.50 0.127 0.01920.82 0.790.021 10.01 0.026 3.39LayoutDM †✓0.121 0.01410.550.0889.020.127 0.01890.810.0263.36CGL-GAN0.129 0.01550.480.0439.110.129 0.02020.750.0276.96C + S → PCGL-GAN LayoutDM †✓0.146 0.0178 0.143 0.01850.57 0.450.036 0.122 24.90 0.127 0.0190 7.74 0.135 0.02070.78 0.820.020 0.0216.01 2.18LayoutDM †✓0.123 0.01440.590.071 10.68 0.127 0.01880.830.0201.77CGL-GAN0.146 0.01750.420.076 27.18 0.174 0.02310.210.182 78.44CompletionCGL-GAN LayoutDM †✓0.146 0.0169 0.135 0.01750.71 0.350.039 12.46 0.155 0.0230 0.134 21.70 0.127 0.01920.46 0.760.102 48.82 0.020 3.19LayoutDM †✓0.120 0.01430.450.071 12.96 0.126 0.01890.790.0182.55CGL-GAN0.122 0.01410.390.0906.400.124 0.01820.860.0241.20RefinementCGL-GAN LayoutDM †✓0.129 0.0157 0.115 0.01210.37 0.570.072 0.0084.91 2.860.133 0.0194 0.127 0.01880.85 0.750.013 0.0181.56 1.98LayoutDM †✓0.115 0.01210.570.0072.910.126 0.01860.760.0191.79", "figure_id": "tab_9", "figure_label": "E", "figure_type": "table" }, { "figure_caption": "Retrieval augmentation for CGL-GAN and LayoutDM † on the PKU and CGL test split for unconstrained and constrained generation. Rea ↓ Und ↑ Ove ↓ FID↓ Occ ↓ Rea ↓ Und ↑ Ove ↓ FID↓", "figure_data": "PKUCGLMethod#Dim #ParamsDecContentGraphicContentGraphicOcc ↓ Autoreg Baseline 128 0.146 0.0184 2.55M RALF 0.123 0.01410.41 0.710.030 18.86 0.127 0.0196 0.007 4.14 0.125 0.01800.86 0.970.013 3.60 0.005 1.27Autoreg Baseline ♢ RALF ♢2566.59M0.134 0.0165 0.119 0.01290.44 0.920.018 13.51 0.125 0.0190 0.008 3.45 0.125 0.01800.92 0.980.011 2.90 0.004 1.31Autoreg Baseline RALF51219.46M0.128 0.0150 0.122 0.01310.57 0.940.011 10.85 0.122 0.0184 0.010 3.61 0.128 0.01820.95 0.970.009 2.74 0.004 1.72Autoreg Baseline RALF76838.82M0.122 0.0150 0.126 0.01310.70 0.930.012 0.0088.46 3.190.124 0.0183 0.131 0.01870.95 0.970.008 2.26 0.004 1.72", "figure_id": "tab_10", "figure_label": "F", "figure_type": "table" }, { "figure_caption": "Qualitative result of varying network parameters on unconstrained generation metrics on the PKU and CGL test split. We modify the number of features (#Dim) in the input of cross-attention layers and the sequence to the decoder layer. #ParamsDec indicates the number of parameters of the layout decoder. ♢ represents the setting of our experiments in the main manuscript.", "figure_data": "AutoregRALF (Ours)Input imageConstraintGround truthCGL-GAN! LayoutDMBaselineOutput 1Output 2Output 3Category:Logo,C → S + PText,UnderlayCategory+ Size:C + S → PLogo, …,Underlaywith SizeCompletionRe/inementCategory+ Relation:Relationshipe.g. UnderlaybottomCanvas,…PKU DatasetLogoUnderlayText", "figure_id": "tab_11", "figure_label": "G", "figure_type": "table" } ]
Daichi Horita; Naoto Inoue; Kotaro Kikuchi; Kota Yamaguchi; Kiyoharu Aizawa
[ { "authors": "Maneesh Agrawala; Wilmot Li; Floraine Berthouzoz", "journal": "Communications of the ACM", "ref_id": "b0", "title": "Design Principles for Visual Communication", "year": "2011" }, { "authors": "Diego Martin Arroyo; Janis Postels; Federico Tombari", "journal": "", "ref_id": "b1", "title": "Variational Transformer Networks for Layout Generation", "year": "2021" }, { "authors": "Akari Asai; Sewon Min; Zexuan Zhong; Danqi Chen", "journal": "ACL", "ref_id": "b2", "title": "ACL 2023 Tutorial: Retrieval-based Language Models and Applications", "year": "2023" }, { "authors": "Andreas Blattmann; Robin Rombach; Kaan Oktay; Björn Ommer", "journal": "NeurIPS", "ref_id": "b3", "title": "Retrieval-Augmented Diffusion Models", "year": "2022" }, { "authors": "Sebastian Borgeaud; Arthur Mensch; Jordan Hoffmann; Trevor Cai; Eliza Rutherford; Katie Millican; George Van Den Driessche; Jean-Baptiste Lespiau; Bogdan Damoc; Aidan Clark; Diego De Las; Aurelia Casas; Jacob Guy; Roman Menick; Tom Ring; Saffron Hennigan; Loren Huang; Chris Maggiore; Albin Jones; Andy Cassirer; Michela Brock; Geoffrey Paganini; Oriol Irving; Simon Vinyals; Karen Osindero; Jack W Simonyan; Erich Rae; Laurent Elsen; Sifre", "journal": "", "ref_id": "b4", "title": "Improving language models by retrieving from trillions of tokens", "year": "2021" }, { "authors": "Yunning Cao; Ye Ma; Min Zhou; Chuanbin Liu; Hongtao Xie; Tiezheng Ge; Yuning Jiang", "journal": "ACM MM", "ref_id": "b5", "title": "Geometry Aligned Variational Transformer for Image-conditioned Layout Generation", "year": "2022" }, { "authors": "Nicolas Carion; Francisco Massa; Gabriel Synnaeve; Nicolas Usunier; Alexander Kirillov; Sergey Zagoruyko", "journal": "", "ref_id": "b6", "title": "End-to-End Object Detection with Transformers", "year": "2020" }, { "authors": "Shang Chai; Liansheng Zhuang; Fengying Yan", "journal": "", "ref_id": "b7", "title": "Lay-outDM: Transformer-based Diffusion Model for Layout Generation", "year": "2023" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b8", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019" }, { "authors": "Weixi Feng; Wanrong Zhu; Tsu-Jui Fu; Varun Jampani; Arjun Akula; Xuehai He; Sugato Basu; Xin ; Eric Wang; William Yang; Wang ", "journal": "NeurIPS", "ref_id": "b9", "title": "LayoutGPT: Compositional Visual Planning and Generation with Large Language Models", "year": "2023" }, { "authors": "Stephanie Fu; * ; Netanel Tamir; * ; Shobhita Sundaram; * ; Lucy Chai; Richard Zhang; Tali Dekel; Phillip Isola", "journal": "NeurIPS", "ref_id": "b10", "title": "Dream-Sim: Learning New Dimensions of Human Visual Similarity using Synthetic Data", "year": "2023" }, { "authors": "Shunan Guo; Zhuochen Jin; Fuling Sun; Jingwen Li; Zhaorui Li; Yang Shi; Nan Cao", "journal": "", "ref_id": "b11", "title": "Vinci: An Intelligent Graphic Design System for Generating Advertising Posters", "year": "2021" }, { "authors": "Kamal Gupta; Alessandro Achille; Justin Lazarow; Larry Davis; Vijay Mahadevan; Abhinav Shrivastava", "journal": "", "ref_id": "b12", "title": "Lay-outTransformer: Layout Generation and Completion with Self-attention", "year": "2021" }, { "authors": "Kelvin Guu; Kenton Lee; Zora Tung; Panupong Pasupat; Ming-Wei Chang", "journal": "", "ref_id": "b13", "title": "REALM: Retrieval-Augmented Language Model Pre-Training", "year": "2020" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", 
"journal": "", "ref_id": "b14", "title": "Deep Residual Learning for Image Recognition", "year": "2016" }, { "authors": "R Scarlett; Chia-Chen Herring; Jesse Chang; Brian P Krantzler; Bailey", "journal": "", "ref_id": "b15", "title": "Getting Inspired! Understanding How and Why Examples Are Used in Creative Design Practice", "year": "2009" }, { "authors": "Xiangteng Hsiao Yuan Hsu; Yuxin He; Hao Peng; Qing Kong; Zhang", "journal": "CVPR", "ref_id": "b16", "title": "PosterLayout: A New Benchmark and Approach for Content-Aware Visual-Textual Presentation Layout", "year": "2023" }, { "authors": "Naoto Inoue; Kotaro Kikuchi; Edgar Simo-Serra; Mayu Otani; Kota Yamaguchi", "journal": "", "ref_id": "b17", "title": "LayoutDM: Discrete Diffusion Model for Controllable Layout Generation", "year": "2023" }, { "authors": "Ali Jahanian; Jerry Liu; Qian Lin; Daniel Tretter; O' Eamonn; Seungyon Brien-Strain; Claire Lee; Nic Lyons; Jan Allebach", "journal": "", "ref_id": "b18", "title": "Recommendation System for Automatic Design of Magazine Covers", "year": "2013" }, { "authors": "Zhaoyun Jiang; Shizhao Sun; Jihua Zhu; Jian-Guang Lou; Dongmei Zhang", "journal": "", "ref_id": "b19", "title": "Coarse-to-fine generative modeling for graphic layouts", "year": "2022" }, { "authors": "Z Jiang; J Guo; S Sun; H Deng; Z Wu; V Mijovic; Z Yang; J Lou; D Zhang", "journal": "", "ref_id": "b20", "title": "LayoutFormer++: Conditional Graphic Layout Generation via Constraint Serialization and Decoding Space Restriction", "year": "2023" }, { "authors": "Jeff Johnson; Matthijs Douze; Hervé Jégou", "journal": "IEEE Transactions on Big Data", "ref_id": "b21", "title": "Billion-scale similarity search with GPUs", "year": "2019" }, { "authors": "Abdu Akash; Thibaut Jyothi; Jiawei Durand; Leonid He; Greg Sigal; Mori", "journal": "", "ref_id": "b22", "title": "LayoutVAE: Stochastic Scene Layout Generation from a Label Set", "year": "2019" }, { "authors": "Kotaro Kikuchi; Edgar Simo-Serra; Mayu Otani; Kota Yamaguchi", "journal": "ACM MM", "ref_id": "b23", "title": "Constrained Graphic Layout Generation via Latent Optimization", "year": "2021" }, { "authors": "Xiang Kong; Lu Jiang; Huiwen Chang; Han Zhang; Yuan Hao; Haifeng Gong; Irfan Essa", "journal": "", "ref_id": "b24", "title": "BLT: Bidirectional Layout Transformer for Controllable Layout Generation", "year": "2022" }, { "authors": "Hsin-Ying Lee; Weilong Yang; Lu Jiang; Madison Le; Irfan Essa; Haifeng Gong; Ming-Hsuan Yang", "journal": "", "ref_id": "b25", "title": "Neural Design Network: Graphic Layout Generation with Constraints", "year": "2019" }, { "authors": "Elad Levi; Eli Brosh; Mykola Mykhailych; Meir Perez", "journal": "", "ref_id": "b26", "title": "DLT: Conditioned Layout Generation with Joint Discrete-Continuous Diffusion Layout Transformer", "year": "2023" }, { "authors": "Fengheng Li; An Liu; Wei Feng; Honghe Zhu; Yaoyu Li; Zheng Zhang; Jingjing Lv; Xin Zhu; Junjie Shen; Zhangang Lin; Jingping Shao", "journal": "", "ref_id": "b27", "title": "Relation-Aware Diffusion Model for Controllable Poster Layout Generation", "year": "2023" }, { "authors": "Jianan Li; Jimei Yang; Aaron Hertzmann; Jianming Zhang; Tingfa Xu", "journal": "ICLR", "ref_id": "b28", "title": "LayoutGAN: Generating Graphic Layouts with Wireframe Discriminators", "year": "2019" }, { "authors": "Jianan Li; Jimei Yang; Jianming Zhang; Chang Liu; Christina Wang; Tingfa Xu", "journal": "IEEE TVCG", "ref_id": "b29", "title": "Attribute-Conditioned Layout GAN for Automatic Graphic Design", "year": 
"2021" }, { "authors": "Tsung-Yi Lin; Piotr Dollár; Ross Girshick; Kaiming He; Bharath Hariharan; Serge Belongie", "journal": "", "ref_id": "b30", "title": "Feature Pyramid Networks for Object Detection", "year": "2017" }, { "authors": "Simon Lok; Steven Feiner", "journal": "", "ref_id": "b31", "title": "A Survey of Automated Layout Techniques for Information Presentations", "year": "2001" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "ICLR", "ref_id": "b32", "title": "Fixing Weight Decay Regularization in Adam", "year": "2019" }, { "authors": "Muhammad Ferjad Naeem; Seong Joon Oh; Youngjung Yunjey Choi; Jaejun Yoo", "journal": "", "ref_id": "b33", "title": "Reliable Fidelity and Diversity Metrics for Generative Models", "year": "2020" }, { "authors": "O' Peter; Aseem Donovan; Aaron Agarwala; Hertzmann", "journal": "CHI", "ref_id": "b34", "title": "DesignScape: Design with Interactive Layout Suggestions", "year": "2015" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga; Alban Desmaison; Andreas Köpf; Edward Z Yang; Zach Devito; Martin Raison; Alykhan Tejani; Sasank Chilamkurthy; Benoit Steiner; Lu Fang; Junjie Bai; Soumith Chintala", "journal": "NeurIPS", "ref_id": "b35", "title": "PyTorch: An Imperative Style, High-Performance Deep Learning Library", "year": "2019" }, { "authors": "Chunyao Qian; Shizhao Sun; Weiwei Cui; Jian-Guang Lou; Haidong Zhang; Dongmei Zhang", "journal": "IEEE TVCG", "ref_id": "b36", "title": "Retrieve-Then-Adapt: Example-based Automatic Generation for Proportion-related Infographics", "year": "2021" }, { "authors": "Xuebin Qin; Zichen Zhang; Chenyang Huang; Chao Gao; Masood Dehghan; Martin Jagersand", "journal": "", "ref_id": "b37", "title": "BASNet: Boundary-Aware Salient Object Detection", "year": "2019" }, { "authors": "Xuebin Qin; Hang Dai; Xiaobin Hu; Deng-Ping Fan; Ling Shao; Luc Van Gool", "journal": "", "ref_id": "b38", "title": "Highly Accurate Dichotomous Image Segmentation", "year": "2022" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b39", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "", "ref_id": "b40", "title": "Learning Transferable Visual Models From Natural Language Supervision", "year": "2021" }, { "authors": "Soliha Rahman; Matthias Vinoth Pandian Sermuga Pandian; Jarke", "journal": "", "ref_id": "b41", "title": "RUITE: Refining UI Layout Aesthetics Using Transformer Encoder", "year": "2021" }, { "authors": "Shelly Sheynin; Oron Ashual; Adam Polyak; Uriel Singer; Oran Gafni; Eliya Nachmani; Yaniv Taigman", "journal": "ICLR", "ref_id": "b42", "title": "KNN-Diffusion: Image Generation via Large-Scale Retrieval", "year": "2023" }, { "authors": "Karen Simonyan; Andrew Zisserman", "journal": "ICLR", "ref_id": "b43", "title": "Very Deep Convolutional Networks for Large-Scale Image Recognition", "year": "2015" }, { "authors": "Roman Suvorov; Elizaveta Logacheva; Anton Mashikhin; Anastasia Remizova; Arsenii Ashukha; Aleksei Silvestrov; Naejin Kong; Harshith Goka; Kiwoong Park; Victor Lempitsky", "journal": "", "ref_id": "b44", "title": "Resolution-robust Large Mask Inpainting with Fourier 
Convolutions", "year": "2022" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "NeurIPS", "ref_id": "b45", "title": "Attention Is All You Need", "year": "2017" }, { "authors": "Chenchen Xu; Min Zhou; Tiezheng Ge; Yuning Jiang; Weiwei Xu", "journal": "", "ref_id": "b46", "title": "Unsupervised Domain Adaption With Pixel-Level Discriminator for Image-Aware Layout Generation", "year": "2023" }, { "authors": "Xuyong Yang; Tao Mei; Ying-Qing Xu; Yong Rui; Shipeng Li", "journal": "ACM TOMM", "ref_id": "b47", "title": "Automatic Generation of Visual-Textual Presentation Layout", "year": "2016" }, { "authors": "Junyi Zhang; Jiaqi Guo; Shizhao Sun; Jian-Guang Lou; Dongmei Zhang", "journal": "", "ref_id": "b48", "title": "LayoutDiffusion: Improving Graphic Layout Generation by Discrete Diffusion Probabilistic Models", "year": "2023" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b49", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "Xinru Zheng; Xiaotian Qiao; Ying Cao; Rynson W H Lau", "journal": "ACM TOG", "ref_id": "b50", "title": "Content-Aware Generative Modeling of Graphic Design Layouts", "year": "2019" }, { "authors": "Min Zhou; Chenchen Xu; Ye Ma; Tiezheng Ge; Yuning Jiang; Weiwei Xu", "journal": "", "ref_id": "b51", "title": "Composition-aware Graphic Layout GAN for Visual-textual Presentation Designs", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 308.86, 546.07, 236.25, 20.72 ], "formula_id": "formula_0", "formula_text": "L = \{l_1, \ldots, l_T\} = \{(c_1, b_1), \ldots, (c_T, b_T)\}, \text{ where } b \in [0, 1]" }, { "formula_coordinates": [ 3, 92.38, 422.25, 190.78, 30.2 ], "formula_id": "formula_1", "formula_text": "P_\theta(Z \mid I, S) = \prod_{t=2}^{5T+2} P_\theta(Z_t \mid Z_{<t}, I, S) \quad (1)" }, { "formula_coordinates": [ 3, 50.11, 566.63, 111.23, 12.87 ], "formula_id": "formula_3", "formula_text": "f_I = E(I, S) \in \mathbb{R}^{H'W' \times d}" }, { "formula_coordinates": [ 4, 61.71, 436.62, 225.32, 13.37 ], "formula_id": "formula_4", "formula_text": "f_R = \mathrm{Concatenate}(f_I, f_L, f_C) \in \mathbb{R}^{(2H'W'+K) \times d} \quad (2)" }, { "formula_coordinates": [ 4, 91.48, 480.23, 153.52, 13.37 ], "formula_id": "formula_5", "formula_text": "f_C = \mathrm{CrossAttn}(f_I, f_L) \in \mathbb{R}^{H'W' \times d}" } ]
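The last three formulas describe how image features, retrieved-layout features, and their cross-attended fusion are concatenated before being passed to the layout decoder. A minimal PyTorch-style sketch is given below; the module name, head count, and tensor shapes are illustrative assumptions, not the reference implementation.

import torch
import torch.nn as nn

class RetrievalFusion(nn.Module):
    def __init__(self, d: int = 256, n_heads: int = 8):
        super().__init__()
        # f_C = CrossAttn(f_I, f_L): image tokens attend to retrieved-layout tokens.
        self.cross_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)

    def forward(self, f_I: torch.Tensor, f_L: torch.Tensor) -> torch.Tensor:
        # f_I: (B, H'W', d) image features; f_L: (B, K, d) retrieved layout features.
        f_C, _ = self.cross_attn(query=f_I, key=f_L, value=f_L)   # (B, H'W', d)
        # f_R = Concatenate(f_I, f_L, f_C): (B, 2*H'W' + K, d) tokens for the decoder.
        return torch.cat([f_I, f_L, f_C], dim=1)

# Example with dummy tensors.
fusion = RetrievalFusion(d=256)
f_I = torch.randn(2, 64, 256)   # 64 = H'W' image tokens (assumed)
f_L = torch.randn(2, 16, 256)   # K = 16 retrieved layout tokens (assumed)
print(fusion(f_I, f_L).shape)   # torch.Size([2, 144, 256]) = 2*64 + 16 tokens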
2024-03-29
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b22", "b67", "b69", "b19", "b36", "b69", "b24", "b41", "b0" ], "table_ref": [ "tab_12" ], "text": "Problem Setting There is extensive interest from the computer vision community for training classifiers that are robust to distribution shifts. Pioneering works in this area [23,47,67] focused on optimizing for simple shifts in the image distribution, such as sketch-to-real adaptation. As the topic evolved, the community proposed increasingly harder adaptation problems by eliminating some restrictive assumptions. For the domain generalization (DG) problem [55, 68],\nwe do not assume access to unlabeled target data; for the cross-dataset generalization (XD) problem [70], we allow source and target label spaces to be different; and for the parameter efficient learning (PEFT) problem [20,37,57], we impose a tight budget on the number of parameters that can be tuned. Our work lies at the confluence of these three topics. Similar to CoOp [70] and MaPLe [25], we do assume access to labeled few-shot generic source data, such as ImageNet. Since we assume nothing about the relationship between source and target datasets, this setting can be more useful in practice than strict zero-shot learning. In this paper, we propose two parameter efficient few-shot methods, called word and descriptor soups, that finetune vision-language (VL) models to generalize to target datasets which may contain unseen labels and/or shifts in the image distribution. Our methods achieve state-of-the-art on some benchmarks without additional gradient-based tuning, but can also improve state-of-the-art gradient-based finetuning methods with an additional diversity loss.\nMotivation Our work is motivated by the recent success of classification by description methods [24,35,42] in both zero-shot (ZS) classification and open-vocabulary object detection. These methods ask an LLM like GPT to generate a list of short descriptions for each class, then aggregate predictions from the descriptions to improve ZS accuracy, see Fig. 1(a). It is often claimed that the impressive gain in ZS accuracy comes from additional information given by the GPT descriptions. However, a recent study called WaffleCLIP [45] observed that random descriptors or even strings of random words can achieve similar ZS accuracy to GPT descriptors, when ensembled together (see Fig. 5). Therefore, gains in ZS accuracy achieved by descriptor methods are mostly driven by ensembling rather than the content of the descriptors themselves. Inspired by this observation, we propose descriptor and word soups, two methods which outperform WaffleCLIP by selecting descriptors or chains of words that maximize few-shot accuracy. Word soup has 3 advantages: (1) it outperforms existing descriptor-inspired ZS methods in the few-shot OOD setting since it directly maxi-Figure 1. Illustration of word and descriptor soups. We conceptually position our two soup methods along the tradeoff between parameter efficiency and flexibility; we then list the pros and cons of our soups compared to prior work. Firstly, word soup is more parameter efficient than soft prompt tuning, because it uses discrete tokens (see Fig. 2). Secondly, word soup does not require an LLM or handcrafted prompts. Lastly, word soup attains higher target accuracy than prior descriptor methods by allowing a descriptor to be any permutation of words and explicitly maximizing its accuracy on training data (see Fig. 3). 
However, word soup achieves this flexibility by sacrificing the explainability of descriptors. On the other hand, descriptor soup is interpretable (see Table 1), but less flexible than word soup, since it is limited to selecting from the pool of GPT descriptors. mizes classification accuracy (see Fig. 1(c)); (2) it is more parameter efficient than existing few-shot methods since the model is frozen and only the discrete descriptor tokens need to be stored; and (3) it does not require an LLM. The pros and cons of both descriptor and word soups are concisely stated in Figure 1 and discussed more in the method section.\nMethod Overview According to the above motivation, we design a progression of three methods: descriptor soup, word soup, and word soup training with diversity loss. These methods build upon each other but can be used independently and in combination with prior methods. We opted for this style of presentation, since there are motivating empirical insights at each stage, and each method achieves state-of-theart depending on resource constraints (such as availability of an LLM at training time or parameter storage budget). Descriptor soup is loosely inspired by model soups [58]; \"soup\" refers to a set of descriptors. We calculate an aggregate prediction based on the centroid of descriptors in the soup. We start with the most accurate descriptor on the training data and greedily add descriptors to the soup if training accuracy increases, see Fig. 1(b). Similarly, for word soups, we assemble a chain of words by greedily appending a word if it increases the training accuracy of the word chain, see Fig. 1(c). Finally, we present a diversity loss that can be used to optimize the CLIP model, using the word soup as an initialization. This loss is required to maintain the initial diversity among word soup members throughout finetuning." }, { "figure_ref": [], "heading": "Contributions", "publication_ref": [], "table_ref": [], "text": "We make the following contributions to the computer vision literature:\n• We present word soup, which improves SoTA on fewshot cross-dataset (XD) and domain-generalization (DG) benchmarks by 1% and 0.8% resp. • Our word soup uses fewer parameters than SoTA parameter efficient methods while achieving higher accuracy than parameter-free ZS methods in both few-shot settings. • We propose a diversity loss to train VL models initialized with word soup. This allows our method to seamlessly combine with prior few-shot finetuning methods. • We present qualitative results (e.g. Tab. 1) to understand what is means for a descriptor to be \"good\", and analyze the generalizability of these descriptors (Fig. 3). These results extend the current understanding of how descriptor and prompting methods work." }, { "figure_ref": [ "fig_0" ], "heading": "Related Work", "publication_ref": [ "b69", "b68", "b24", "b42", "b70", "b11", "b48", "b29", "b64", "b60", "b41", "b58", "b37", "b0", "b9", "b32", "b47", "b16", "b19", "b69", "b70", "b19", "b38", "b16", "b52", "b17", "b20", "b52" ], "table_ref": [], "text": "Few-shot CLIP finetuning We follow the problem settings of CoOp [70], CoCoOp [69], MaPLe [25], and Clipood [51], which finetune a CLIP-like model [43] on few-shot ImageNet in a manner that generalizes to OOD target datasets. 
Many prompt tuning methods build on top of CoOp by using different loss functions [3, 5, 9, 41, 62], using clever optimization techniques [71], ensembling multiple prompts [6, 31], leveraging different sources of information [12,22,49], leveraging synergy between modalities [26,30,65], or using different network architectures [8,61]. We take a fundamentally different approach from these prior methods, drawing inspiration from classification by description [35]. Specifically, prior methods tune a soft prompt while our method tunes a sequence of discrete tokens. Zero-shot CLIP Many recent papers use LLM descriptors to aid ZS or open-vocabulary visual tasks, including classification [35,42] and detection [24]. WaffleCLIP [45] observed that the impressive gains in accuracy reported by these works are mostly driven by ensembling and datasetlevel concepts. WaffleCLIP ensembles random descriptors and uses an LLM to discover dataset-level concepts, while we design an optimization procedure to learn good descriptors from data. Our algorithm is loosely related to model averaging methods [58,59]. However, unlike model soups [58], we do not generate multiple training trajectories, since all descriptors share the same model weights. ZS accuracy can also be improved with hierarchical label sets [38] or handcrafted prompts [1]. Test-time prompt tuning methods [10,33,48,50] train a sample-specific prompt that maximizes agreement between predictions based on a set of image augmentations. These methods suffer from long inference times due to test-time optimization.\nParameter efficient finetuning (PEFT) Our word soup can be considered a PEFT [17,20] method, but specialised to finetuning VL models in the OOD setting. Prior PEFT methods include shallow text prompt tuning [22,31,70,71], visual prompt tuning [20], bias tuning [64], adapters [11,16,39,54,66], LoRA [17], SSF [29], side-tuning [53], and others [18,21,32,63]. Unlike the above works, our word soup tunes fewer parameters by leveraging discrete text tokens. Similar to LST [53], we use minimal GPU memory, since no backpropagation is required. We empirically compare with a representative subset of PEFT methods in the OOD settings in Fig. 2. Clearly, our word soup establishes a better tradeoff between parameter efficiency and OOD accuracy, compared to prior work." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "This section is organized into 4 parts. Section 3.1 reviews the classification by description [35] and WaffleCLIP [45] methods, which motivate our soup methods. Section 3.2 presents descriptor soup, a novel intermediary method which still uses GPT descriptors at training time but not at test time. Section 3.3 presents word soup, which is similarly motivated but only requires a list of English words at training time. Section 3.4 describes the diversity loss used to finetune the CLIP model using word soup as the initialization. Please use Fig. 1 as a reference. We organize the methods in this section in order of increasing flexibility, since it is more natural to motivate word soups this way. However, word soups can also be motivated in the opposite direction by shortcomings of soft prompt tuning, as noted in Fig. 1; this motivation is included in Appendix B. We also propose a token offset trick in Appendix C to augment descriptor soups." 
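The methods in this section all build on one primitive: zero-shot classification with a frozen CLIP model, where each class name is combined with one or more descriptors and an image is assigned to the class whose aggregated text embedding has the highest cosine similarity. A minimal sketch of this primitive is given below; encode_text, encode_image, the tokenizer, and the prompt template are stand-ins for the usual CLIP components (e.g., from open_clip) and are assumptions rather than the paper's exact code.

import torch

@torch.no_grad()
def class_embeddings(classnames, descriptors, encode_text, tokenizer):
    # Return one L2-normalized text centroid per class, averaged over descriptors.
    all_feats = []
    for name in classnames:
        prompts = [f"a photo of a {name}, {d}" for d in descriptors]  # assumed template
        feats = encode_text(tokenizer(prompts))                       # (len(descriptors), dim)
        feats = feats / feats.norm(dim=-1, keepdim=True)
        centroid = feats.mean(dim=0)
        all_feats.append(centroid / centroid.norm())
    return torch.stack(all_feats)                                     # (num_classes, dim)

@torch.no_grad()
def predict(images, text_feats, encode_image):
    img = encode_image(images)
    img = img / img.norm(dim=-1, keepdim=True)
    return (img @ text_feats.t()).argmax(dim=-1)                      # predicted class indices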
}, { "figure_ref": [], "heading": "LLM Descriptors and WaffleCLIP", "publication_ref": [ "b41" ], "table_ref": [], "text": "Several works use LLM descriptors to supplement class names in VL models [24,35,42]. These methods ask an LLM to describe the object being classified and incorporate this information into the textual input by forming sentences such as \"a photo of a tench, which is a freshwater fish\" or \"a photo of a goldfish, which has small black eyes\". The LLM generates on average 5.8 such descriptors per label, and the centroids of the resulting text embeddings are used for zero-shot classification of images. The improvement in zero-shot accuracy can be attributed to (1) additional information coming from the LLM and (2) ensembling. In WaffleCLIP, Roth et al. [45] claim that most of the gain in accuracy reported by Menon and Vondrick [35] can be attributed to ensembling. They showed that appending a similar number of randomly selected descriptors to the class names can achieve similar zero-shot accuracies as the GPT descriptors. We confirm this result in Fig. 5. Observe in this figure that both random descriptors (labeled as \"random soup\") and chains of random nonsensical words (labeled as \"waffle CLIP\") perform better than classification by description (\"GPT centroids\") for the same number of descriptors per label (m). This is a surprising result. We reason that selecting descriptors which maximize few-shot training accuracy would achieve higher accuracy than random descriptors; this motivates descriptor soup." }, { "figure_ref": [], "heading": "Descriptor Soup", "publication_ref": [ "b24", "b69" ], "table_ref": [ "tab_8", "tab_12", "tab_12", "tab_12" ], "text": "We reference Alg. 1 in this sub-section. We wish to select, from the pool of GPT descriptors D, a set of m descriptors that maximizes accuracy on few-shot training data. Define the loss function ℓ(S_train, T_train(d)) to be the 0-1 loss of the model using descriptor d over the entire training dataset S_train, where T_train(d) denotes the label text embeddings calculated by the text encoder by appending descriptor d to all class names. Since all parameters of the vision model remain constant, we ignore vision model parameters in the notation. We aim to find a set of m descriptors whose centroids in the text embedding space minimize the 0-1 loss:

D*_m = {d*_1, ..., d*_m} = arg min_{d_{1:m} ∈ D} ℓ( S_train , (1/m) Σ_{i=1}^{m} T_train(d_i) )   (1)

Note that (1/m) Σ_{i=1}^{m} T_train(d_i) denotes the L2-normalized centroid of text embeddings for each class. We always normalize the centroid so it can be used to calculate the cosine similarity with image embeddings; this is omitted from the math to avoid clutter.

Eq. 1 is an intractable combinatorial problem, but we can approximately solve it via a greedy approach or by solving the continuous version of the problem using gradient descent. We use a greedy approach, inspired by Wortsman et al. [58]. The algorithm can be summarized as (reference Alg. 1): 1. Calculate ℓ(S_train, T_train(d)) for all d ∈ D. Sort the descriptors by increasing loss / decreasing accuracy. With slight abuse of notation, denote the sorted list as D = [d_0, ..., d_n]. 2. Initialize the \"descriptor soup\" D* = {d_0} with the best descriptor. 3. For i in 1 : n, add d_i to D* if it decreases the loss of D*. 4. Return the first m descriptors in D*. (A minimal code sketch of this selection procedure is given below.) Please find ZS results for descriptor soup in Tab. 3. Note that the soup methods are not truly zero-shot because they require some training data; however, we do compare against all baselines in the few-shot setting in Table 6. In Tab. 3, we use the ViT/B-16 CLIP model trained by Open-AI; all non-deterministic numbers are an average of 3 random seeds; m indicates the number of descriptors used; \"Ensemble\" refers to the set of 80 handcrafted prompts created by Open-AI; and GPT score mean corresponds to the classification by description method. We use centroid evaluation unless \"score mean\" is explicitly stated. We achieve substantial gains over GPT descriptors and waffle CLIP, as indicated in the bottom two rows of Tab. 3.

Building Intuition A natural question to ask is: descriptor soup members no longer describe individual classes, so why does Alg. 1 work? The answer has two parts. (1) Alg. 1 finds descriptors which describe the dataset as a whole, rather than individual labels; these descriptors are orthogonal to the classification problem and increase classification accuracy by increasing alignment between corresponding image and text embeddings. (2) Descriptor soups generalize when the target classification problem has a narrower scope than the source classification problem. Prior work (e.g. [25,45,70]) suggests that handcrafted dataset-specific descriptors such as \"a type of aircraft\" or \"a type of pet\" improve ZS accuracy.
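To make the greedy selection of Alg. 1 concrete, a minimal sketch is given below. The helper soup_accuracy, which scores a list of descriptors by the few-shot training accuracy of the resulting per-class text centroids under the frozen CLIP model, is an assumed function for illustration and not part of any released API.

def greedy_descriptor_soup(candidates, soup_accuracy, m=16):
    # candidates: list of GPT descriptor strings.
    # soup_accuracy(list_of_descriptors) -> float is an assumed scoring helper.
    # Step 1: sort candidate descriptors by their individual training accuracy.
    ranked = sorted(candidates, key=lambda d: soup_accuracy([d]), reverse=True)
    # Step 2: initialize the soup with the single best descriptor.
    soup = [ranked[0]]
    best = soup_accuracy(soup)
    # Step 3: greedily keep a descriptor only if it improves the soup's training accuracy.
    for d in ranked[1:]:
        acc = soup_accuracy(soup + [d])
        if acc > best:
            soup.append(d)
            best = acc
    # Step 4: return the first m members of the soup.
    return soup[:m]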
Dataset-level descriptors like these are easier to design than label-level descriptors, so using dataset-level descriptors is currently standard practice. We hypothesize that these descriptors improve accuracy by increasing alignment between corresponding image and text embeddings; we demonstrate this in Tab. 1. e.g. \"a type of pet\" improves pet classification accuracy by 0.6% and alignment by 0.01.\nWe further hypothesize that descriptor soup members learn to mimic the behavior of handcrafted dataset-level descriptors. We display examples of descriptor soups trained on three different datasets in Table 1 in support of this intuition. Descriptors trained on pets (in pink) mention \"claws\", \"eyes\", and \"hair\", which are concepts common to most pets. In a similar vein, descriptors trained on textures/DTD (in yellow) mention \"pattern\", \"logo\", and \"design\". Meanwhile, ImageNet is a broader dataset, so descriptors trained on Ima-geNet (in blue) are generally non-specific (e.g. \"which could be brown or grey\"). This is intuitive, since ImageNet is a dataset with diverse classes. A descriptor such as \"which is a type of dog\" would be detrimental to the zero-shot accuracy, since it would bias the classifier toward labels that are types of dogs. Table 1 shows that individual descriptor soup members increase both the alignment and classification accuracy, when the source and target datasets are the same. The next paragraph addresses the issue of generalizability when source and target datasets are different.\nGeneralizability Descriptor soups trained on ImageNet generalize to target datasets with narrower scopes, but not vice versa. This is because ImageNet concepts are a superset of narrower target datasets; e.g. ImageNet classes contain types of cars and pets. Table 1 shows that descriptors trained on ImageNet (blue) improve both the alignment and accuracy on Pets and Textures; but descriptors trained on the latter two datasets (pink and yellow) decrease the same metrics on ImageNet. To further support the generalizability of descriptor soups, we show a positive correlation between ImageNet accuracy and average target dataset accuracy in Fig. 3 (right). Finally, we train a descriptor soup on test data to maximize average accuracy of 10 datasets; we call this the \"descriptor soup upper bound\" in the middle of Tab. 6. The upper bound only achieves marginal improvement over the descriptor soup trained on ImageNet (three rows above the upper bound in Tab. 6). This suggests that greedily maximizing the descriptor soup accuracy on ImageNet training data is a good approximation of maximizing the target accuracy; i.e. the generalization gap is small." }, { "figure_ref": [], "heading": "Word Soup", "publication_ref": [], "table_ref": [], "text": "Descriptor soup achieves impressive state-of-the-art performance, but it is still reliant on an LLM at training time to generate a list of candidate descriptors and is limited to this fixed descriptor list. In order to remove the reliance on LLMs and make the optimization process more flexible, we propose to generate descriptors in a greedy fashion using individual words selected from a dictionary. We use the list of 10,000 most commonly-used words on the web 1 as the candidate pool of words. Given a list of n words W = {w 1 , ..., w n } (we abuse some notations slightly, since the word soup is a separate method). Descriptors are allowed to be any sequence of words, as long as the length does not exceed p. 
Concretely,

D*_m = {d*_1, ..., d*_m} = arg min_{d_{1:m} ∈ D′} ℓ( S_train , (1/m) Σ_{i=1}^{m} T_train(d_i) ),  where  D′ := {all q-permutations of W, ∀ q ≤ p}   (2)

The word soup problem described by Eq. 2 is again intractable, so we propose an approximate greedy solution using the following steps (see Alg. 2 in the Appendix): 1. Select the top-k_0 and top-k_1 words, denoted as W_topk0 and W_topk1, resp., with k_0 < k_1. 2. Randomly select a word w from W_topk0 and initialize the descriptor d = w. 3. Shuffle W_topk1; then, for w′ ∈ W_topk1, append w′ to d only if it increases the accuracy of d. 4. Return d. We obtain a total of m independent (in a loose sense) descriptors by repeating steps 2-4. In these steps, we randomly select from W_topk0 and shuffle W_topk1 to encourage diversity among the m selected descriptors. Instead of truncating all descriptors to a pre-determined length p, we introduce a patience parameter in Alg. 2, which implicitly controls the average descriptor length. We now motivate word soup." }, { "figure_ref": [], "heading": "Motivation from descriptor soup", "publication_ref": [], "table_ref": [], "text": "The descriptor soup method has some intuitive properties covered in the previous sub-section, but is limited by the small number of good descriptors. Fig. 3 left shows that only about 1,200 descriptors (green line) in D are better than no descriptor (vanilla ZS; red line). The descriptor soup is limited to various combinations of these 1,200 \"good\" descriptors. On the contrary, when we expand the hypothesis space to be D′, any permutation of a set of words, there are many more good descriptors to choose from, as indicated by the orange line in Fig. 3 left. In other words, word soup improves classification accuracy by increasing the size of the hypothesis class. Tab. 2 supports this assertion by showing that individual word soup descriptors achieve higher accuracies on ImageNet than descriptor soup members." },
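The greedy construction of a single word-soup descriptor (steps 1-4 above) can be sketched in a few lines. The helper few_shot_accuracy, which scores a candidate descriptor string on the few-shot training set with the frozen CLIP model, and the default values of k0, k1, and patience are assumptions for illustration; in particular, reading patience as the number of consecutive non-improving candidates before stopping is our interpretation of Alg. 2, not a quote of it.

import random

def build_word_soup(words, few_shot_accuracy, m=8, k0=250, k1=1000, patience=25, seed=0):
    # words: candidate vocabulary (e.g., a list of common English words).
    # few_shot_accuracy(descriptor_string) -> float is an assumed scoring helper.
    rng = random.Random(seed)
    # Step 1: rank single words by few-shot accuracy; keep the top-k0 and top-k1.
    ranked = sorted(words, key=few_shot_accuracy, reverse=True)
    top_k0, top_k1 = ranked[:k0], ranked[:k1]
    soup = []
    for _ in range(m):
        # Step 2: initialize the descriptor with a random high-scoring word.
        descriptor = [rng.choice(top_k0)]
        best = few_shot_accuracy(" ".join(descriptor))
        # Step 3: scan a shuffled copy of the top-k1 words, appending words that help.
        candidates = top_k1[:]
        rng.shuffle(candidates)
        misses = 0
        for w in candidates:
            acc = few_shot_accuracy(" ".join(descriptor + [w]))
            if acc > best:
                descriptor, best, misses = descriptor + [w], acc, 0
            else:
                misses += 1
                if misses >= patience:  # assumed early-stopping semantics of "patience"
                    break
        # Step 4: keep the finished word chain; repeat to obtain m descriptors.
        soup.append(" ".join(descriptor))
    return soup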
{ "figure_ref": [ "fig_3" ], "heading": "Diversity loss", "publication_ref": [ "b70" ], "table_ref": [], "text": "Word soup already achieves competitive performance on most benchmarks. A reasonable next step would be to finetune using the word soup descriptors as an initialization. A variety of methods exist for few-shot finetuning of CLIP, e.g. CoOp, Clipood, and MaPLe. However, in many cases we actually see a slight decline in target accuracy after finetuning in Tab. 4 (λ = 0). This is because finetuning all descriptors on the same few-shot data forces text-prototypes to converge to the same locations in the embedding space, eliminating the initial diversity. Given fixed word soup descriptors D* = {d*_1, ..., d*_m}, our training loss is:

ℓ_train = E_{d*_i ∼ D*} [ CE( ŷ_{d*_i} , (1 − λ) y_truth + λ ŷ_{d*_i,0} ) ]   (3)

where CE denotes the cross entropy loss, ŷ_{d*_i} ∈ Δ^c (c is the number of classes) denotes the soft prediction of the model with descriptor d*_i; y_truth denotes the one-hot encoding of the true label; and ŷ_{d*_i,0} ∈ Δ^c denotes the soft prediction of the initial model with descriptor d*_i. λ ∈ [0, 1] is a hyperparameter controlling the amount of regularization. ŷ_{d*_i} is the quantity being optimized. ŷ_{d*_i,0} is the output of a softmax with temperature τ_0 (the teacher temperature). As in classical knowledge distillation, it is often useful to set the teacher temperature to be different than the training temperature. Training the expectation directly in Eq. 3 requires storing mc forward and backward passes of the text encoder in memory, which is not scalable. In practice, we use one descriptor per mini-batch and rotate among the m descriptors in a round-robin fashion, but we train for the same number of iterations as finetuning with one descriptor. Our training loss biases the model prediction toward the initial prediction of the model using each description, thereby maintaining the diversity of predictions present at initialization. Fig. 4 verifies this interpretation by showing that training with λ = 0.25 results in a higher average KL divergence between descriptor predictions ŷ_{d*_i} and a higher average target accuracy than training with lower λs. Additionally, Tab. 4 displays results for a naive CoOp ensemble and CoOp trained with regularization towards the initialization. These results show that our diversity loss results cannot be obtained by simply ensembling or regularizing predictions towards the initialization as in [4,71]. The training does not take longer than standard cross entropy training, since only one model is trained for all descriptors. Descriptor tokens are fixed." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b45", "b6", "b35", "b33", "b59", "b12", "b51", "b43", "b55", "b14" ], "table_ref": [], "text": "We present the main few-shot results in Tab. 6. The goal of this section is to demonstrate the following in the OOD setting: our word soup outperforms state-of-the-art ZS methods with only 1 or 2 descriptors; therefore, unlike some prior methods, our method is not primarily driven by ensembling (Fig. 5).

Datasets We train on random 16-shot splits of ImageNet-1K [46] and test on 14 unseen target datasets: Caltech-101 [28], Oxford-Pets [40], Stanford-Cars [27], Flowers-102 [36], Food-101 [2], FGVC-Aircraft [34], SUN-397 [60], Describable-Textures (DTD) [7], EuroSAT [13], UCF-101 (an action recognition dataset) [52], ImageNet-V2 [44], ImageNet-Sketch [56], ImageNet-A (natural adversarial examples) [15], and ImageNet-R [14]. The last four datasets are domain-shifted versions of ImageNet containing images from the ImageNet-1K label space.

Experimental Setting All baselines and methods are trained on 16-shot ImageNet-1K data and tested on the indicated target datasets. Hyperparameters: We tune parameters on a withheld validation set. Word soup (Alg. 2) has three parameters: k_0, k_1 and patience. The diversity loss has two parameters: λ and τ_0. These 5 parameters are constant across all experiments. We tune the learning rate separately for each baseline, but keep all other training parameters consistent across methods. We report temperature, batch size, optimizer, EMA setting, token length, initialization and other training details in Appendix A. We discuss the difference between centroid and score mean evaluation in Appendix D.

Discussion In Tab. 6, we first observe that stacking our word soup method on top of CE, CoOp, MaPLe, or CLIPood achieves approximately 0.8-1.0% increase in average target accuracy for both XD and DG benchmarks. Due to the space limitation, we only compare word soup with other ZS methods when combined with CE, since CE achieves the highest XD accuracy out of the 4 finetuning methods. m indicates the number of descriptors for each label, on average. The greedy descriptor soup can be augmented using our token offset trick, which uses 6 augmented copies of each descriptor. The token offset trick improves accuracy by 0.4% and 0.3% on XD and DG, resp., but at a significant computational cost.
The greedy word soup matches the performance of the augmented descriptor soup without the additional computational cost. Overall, the best OOD accuracy is achieved by either the descriptor soup with token offsets or word soup.\nAblation Study An ablation study on our soup methods with varying m is presented in Fig. 5. On both benchmarks, our word soup performs best for all m. We note that the word soup with m = 2 already outperforms all ZS baselines for all values of m up to 64. This result indicates that, unlike stateof-the-art ZS methods, ensembling is not the main ingredient Parameter Efficiency and Computational Efficiency A discussion regarding efficiency of our methods is deferred to Appendix E." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we proposed descriptor and word soups to tackle the cross-dataset and domain generalization problems.\nDescriptor soup greedily selects a set of descriptors by maximizing training accuracy on a source dataset. Word soup builds a chain of words using a similar greedy procedure. These greedy soup methods achieve higher target classification accuracy than prior descriptor-based methods by explicitly maximizing training accuracy. We further proposed a loss function to preserve word soup diversity throughout finetuning. When using word soup for initialization and finetuning with the diversity loss, we can significantly improve the accuracy of existing few-shot OOD finetuning methods. Compared to all baselines, word soup achieves the best trade-off between parameter efficiency and target accuracy.\nDescriptor " }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [ "tab_14" ], "text": "Similar to many related works, the main limitation of our work is that we require the source dataset to cover a broad range of classes (e.g. ImageNet). As a counter example, we cannot hope to train on pets classification and generalize to ImageNet. We highlighted this limitation in based on finetuned model weights would be sub-optimal, since the pretrained text encoder captures a richer set of textual information. Remaining details are organized in Table 7. Mini-batches are randomly sampled, but with exactly one sample per label per batch. Cross entropy and CLIPood both tune the last three layers of the image and text encoders, in addition to a shallow text prompt (like CoOp) at a higher learning rate. The only difference between Cross entropy and CLIPood is the loss function; the latter method uses an adaptive margin. We use cross entropy loss for all baselines except ProDA and ProGrad. ProDA and ProGrad consume more GPU memory during training, so we were unable to fit them onto a single A40 GPU when training with cross entropy. Consequently, we were forced to use a CLIP-like contrastive loss for these two methods to reduce the number of text encoder evaluations." }, { "figure_ref": [], "heading": "B. Additional Word Soup Motivation", "publication_ref": [ "b69", "b24", "b70", "b19", "b16" ], "table_ref": [], "text": "A natural baseline for word soup is soft prompt tuning (CoOp), since the former method can be thought of as \"discrete\" prompt tuning. 
Soft prompt tuning optimizes over a continuous parameter space using gradient descent, whereas 2e-5 CoOp [70] 8e-5 MaPLe [25] 0.025 KgCoOp [22] 4e-5 ProDA [31] 3.2e-4 ProGrad [71] 1.28e-3 VPT [20] 0.8 bitfit [64] 1.25e-4 CLIP-adapter [11] 6e-3 SSF [29] 1e-4 adapter [16] 2.5e-3 LoRA [17] 1e-5 The plots on the right indicate the loss value at the corresponding locations in the contour plots on the left, for better visualization. We observe that the word soup initialization lies in a lower and flatter region, compared to the random initialization. Consequently, finetuning from the word soup initialization results in lower training and test errors compared to finetuning from the random initialization.\n(orange star) and the finetuned soft descriptor. The resulting soft descriptor lies at the bottom of a sharp loss basin. On the other hand, the word soup initialized descriptor (blue star) lies at an equally low but much flatter region of the loss landscape. Finetuning from this initialization leads to a lower error on both source and target data, as indicated in blue. This visualization suggests that our word soup algorithm finds robust flat minima, since it is not limited to a narrow loss basin like gradient descent methods." }, { "figure_ref": [], "heading": "C. Token offset trick (for Descriptor Soup)", "publication_ref": [], "table_ref": [], "text": "We propose a novel trick to augment/diversify the descriptors at test time to further increase the target accuracy of descriptor soups. This trick does not improve the performance of word soups significantly. Unlike the vision encoder, which has a cls token at a fixed position (either prepended or appended to the image tokens), the CLIP text encoder does not have a separate cls token. Instead, CLIP uses the output embedding which corresponds to the position of the end-ofsentence token in the input. In classification problems, the text inputs are generally short compared to the context size (number of total tokens). Consequently, the end-of-sentence token is always near the beginning of the sequence, with the remainder padded by null tokens. In this regime, there is never any information at the end of the input token sequence to attend to, so a large portion of the information in the pretrained model is not used. We remedy this inefficient use of pretrained parameters by shifting the description toward the end of the sequence by t tokens. For example, if t = 5, we have:\n• original: a photo of a dog, which may be large or small. • augmented: a photo of a dog, ! ! ! ! ! which may be large or small. (\"!\" denotes the null token) For all experiments with token offsets, we set t = {0, 5, 10, 15, 20, 25} for a total of 6 augmented copies per descriptor. This diversifies the text embeddings at the expense of increasing the text centroid evaluation time 6-folds." }, { "figure_ref": [], "heading": "D. Centroid vs. Score Mean Evaluation", "publication_ref": [], "table_ref": [], "text": "In this work, we presented both centroid and score mean results for both our soup methods and ensemble baselines. Centroid evaluation refers to averaging the text features among descriptors before calculating the cosine similarity between image and text features. Score mean evaluation refers to calculating the cosine similarity between image and text features and then averaging the similarity scores among descriptors.\nConcretely, let there be m descriptors and c classes. 
Let x_I denote a normalized image feature and x^j_{T,k} denote the normalized text feature corresponding to class k and descriptor j; k ∈ [1 : c] and j ∈ [1 : m]. The predicted score for class k using centroid evaluation, s_k, is defined as:

x_{T,k} = (1/m) Σ_{j=1}^{m} x^j_{T,k},    s_k = ⟨ x_I , x_{T,k} / ∥x_{T,k}∥ ⟩

The predicted score for class k using score mean evaluation is defined as:

s_k = (1/m) Σ_{j=1}^{m} ⟨ x_I , x^j_{T,k} ⟩

Empirically, we found that score mean evaluation usually leads to small numerical improvements. However, in large scale applications where retrieval speed is crucial, centroid evaluation can be more efficiently implemented than score mean evaluation, due to the existence of fast nearest neighbor retrieval frameworks." }, { "figure_ref": [ "fig_0", "fig_3" ], "heading": "E. Additional Ablation Studies", "publication_ref": [ "b70" ], "table_ref": [ "tab_15", "tab_15" ], "text": "We present additional ablation studies in Table 8 and Figure 7. Table 8 presents OOD generalization results with a different source data set. Figure 7 presents results with different numbers of shots.

Parameter Efficiency Fig. 2 compares the parameter efficiency of our word soups against PEFT baselines. We observe that word soup can achieve the maximal CoOp accuracy using 25× and 70× fewer parameters on the XD and DG benchmarks, resp. This impressive reduction in parameter storage requirements is due to the discrete nature of word soup parameters. A discrete token requires only one integer parameter, while a soft token requires 512 floating-point parameters. 4. We then attempt to simply minimize the KL divergence between the training prediction and the initial prediction; this shows that the diversity loss is not simply a form of regularization towards the initialization as in MIRO [4] and ProGrad [71]. Finally, we train using our diversity loss with λ = 0.25, which achieves a 1% increase in accuracy on average. Average of 3 trials. This is an expanded version of " }, { "figure_ref": [], "heading": "Computational Efficiency", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgements DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited. This material is based upon work supported by the Under Secretary of Defense for Research and Engineering under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Under Secretary of Defense for Research and Engineering." } ]
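To make the two evaluation modes of Appendix D concrete, a minimal sketch is given below. The tensor layout (m descriptors × c classes × d dimensions) and the variable names are illustrative assumptions; the snippet only assumes L2-normalized CLIP features.

import torch

def centroid_scores(x_img, x_txt):
    # x_img: (B, d) normalized image features; x_txt: (m, c, d) normalized text features.
    # Average text features over the m descriptors, re-normalize, then take cosine similarity.
    centroid = x_txt.mean(dim=0)                                # (c, d)
    centroid = centroid / centroid.norm(dim=-1, keepdim=True)   # L2-normalize per class
    return x_img @ centroid.t()                                 # (B, c) class scores

def score_mean_scores(x_img, x_txt):
    # Cosine similarity with each descriptor's text feature first, then average the scores.
    per_descriptor = torch.einsum("bd,mcd->bmc", x_img, x_txt)  # (B, m, c)
    return per_descriptor.mean(dim=1)                           # (B, c) class scores

# Dummy example: 4 images, m = 8 descriptors, c = 10 classes, 512-dim features.
x_img = torch.nn.functional.normalize(torch.randn(4, 512), dim=-1)
x_txt = torch.nn.functional.normalize(torch.randn(8, 10, 512), dim=-1)
print(centroid_scores(x_img, x_txt).shape, score_mean_scores(x_img, x_txt).shape)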
Over the past year, a large body of multimodal research has emerged around zero-shot evaluation using GPT descriptors. These studies boost the zero-shot accuracy of pretrained VL models with an ensemble of label-specific text generated by GPT. A recent study, WaffleCLIP, demonstrated that similar zero-shot accuracy can be achieved with an ensemble of random descriptors. However, both zeroshot methods are un-trainable and consequently sub-optimal when some few-shot out-of-distribution (OOD) training data is available. Inspired by these prior works, we present two more flexible methods called descriptor and word soups, which do not require an LLM at test time and can leverage training data to increase OOD target accuracy. Descriptor soup greedily selects a small set of textual descriptors using generic few-shot training data, then calculates robust class embeddings using the selected descriptors. Word soup greedily assembles a chain of words in a similar manner. Compared to existing few-shot soft prompt tuning methods, word soup requires fewer parameters by construction and less GPU memory, since it does not require backpropagation. Both soups outperform current published few-shot methods, even when combined with SoTA zero-shot methods, on crossdataset and domain generalization benchmarks. Compared with SoTA prompt and descriptor ensembling methods, such as ProDA and WaffleCLIP, word soup achieves higher OOD accuracy with fewer ensemble members.
Descriptor and Word Soups: Overcoming the Parameter Efficiency Accuracy Tradeoff for Out-of-Distribution Few-shot Learning
[ { "figure_caption": "Figure 2 .2Figure2. Comparison with PEFT and ZS methods. We vary m for word soup as in Fig.5. We vary the number of prompt tokens for CoOp, VPT and MaPLe, the number of prompts for ProDA, the rank for LoRA and adapters, and the number of layers tuned for SSF and bitfit. CoOp stores 512 parameters per soft token, while word soup stores 1 parameter per discrete token. Average of 3 runs. Word soup achieves the maximal CoOp accuracy with only 1/25 of the parameters on the XD benchmark and 1/70 of the parameters on the DG benchmark. Detailed results see Tab. 11 in the Appendix.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "1 ): 1 .11Calculate ℓ(S train , T train (d)) for all d ∈ D. Sort the descriptors by increasing loss / decreasing accuracy. With slight abuse of notation, denote the sorted list as D = [d 0 , ..., d n ]. 2. Initialize the \"descriptor soup\" D * = {d 0 } with the best descriptor. 3. For i in 1 : n: Add d i to D * if it decreases the loss of D * . 4. Return the first m descriptors in D * . Please find ZS results for descriptor soup in Tab. 3. Building Intuition A natural question to ask is: descriptor soup members no longer describe individual classes, so why does Alg. 1 work? The answer has two parts (1) Alg. 1 finds", "figure_data": "", "figure_id": "fig_1", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure3. (Left) Plot of ImageNet accuracy when the same descriptor is appended to every class label. Observe that there are more than 1,000 GPT descriptors and single-word descriptors that are better than standard ZS (in red). When we further consider word chains of length 4, the number of accurate descriptors increases dramatically (orange). (Right) Scatter plot of average target accuracy vs. ImageNet accuracy of GPT descriptors. We observe a positive correlation, so descriptors trained on ImageNet are likely to generalize to other datasets.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Varying λ in the diversity loss. λ = 0 corresponds to the standard CE loss. The left plot displays the average KL divergence between predicted class probabilities of word soup descriptors over the course of training. The right plot displays the cross-dataset accuracy for the same training runs.We observe that a larger λ leads to higher diversity among descriptors; this results in a higher test accuracy.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Contour plot of the 0-1 loss over the 2D parameter space spanned by two initializations (indicated by stars) and the finetuned parameters. The orange and blue stars indicate the random initialization and word soup initialization, resp. The top and bottom rows plot the 0-1 loss on the training and test data (average of 10 test datasets), resp. For this figure, we train 10 descriptor tokens. The plots on the right indicate the loss value at the corresponding locations in the contour plots on the left, for better visualization.We observe that the word soup initialization lies in a lower and flatter region, compared to the random initialization. 
Consequently, finetuning from the word soup initialization results in lower training and test errors compared to finetuning from the random initialization.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure7. Different number of shots. We experiment with the same 14 datasets as the main paper and report average of 3 random trials. We report average target accuracies over 10 diverse datasets (left) and 4 ImageNet shifts (right). Here we verify that the improvements of both word soup and CoOp + word soup over CoOp are resilient to the number of shots. Indeed, we emphasize that word soup is very resilient in extreme low shot scenarios due to the low number of parameters.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": ". Example of a 5 member word soup trained on ImageNet (inblue) along with random chains of words (in gray) for comparison.Comparing with Tab. 1, we observe that the word soup descriptorsachieve higher accuracy than descriptor soups, since word soup ismore flexible from an optimization perspective. Here, we includeuniformity scores, since chains of random words improve alignmentat the expense of increasing uniformity. Uniformity is the averagecosine similarity between image and text embeddings with differentlabels.inal classes. We wish to select a set of m descriptors thatmaximizes accuracy on few-shot training data. Let's definethe loss function ℓ(S train , T train (d)) to be the 0-1 loss of themodel using descriptor d over the entire training dataset S train .T train (d) denotes the label text embeddings calculated by thetext encoder by appending descriptor d to all class names.Since all parameters of the vision model remain constant,we ignore vision model parameters in the notation. We aimto find a set of m descriptors whose centroids in the textembedding space minimize the 0-1 loss:D * m = {d * 1 , ..., d * m } = arg min d1:m∈Dℓ S train ,1 mm i=1", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison with ZS methods. 
All baseline methods in this table use prompts/descriptors on top of the pretrained model in a ZS manner.", "figure_data": "SourceCross-dataset (XD) Evaluation TargetsDomain Generalization TargetsmINetCaltechPetsCarsFlowersFoodAircraftSUNDTDEuroSATUCFMeanINet-V2SketchINet-AINet-RMeanCLIP ZS [70]167.193.3 89.0 65.4 71.0 85.7 25.0 63.2 43.6 46.7 67.4 65.02 61.0 46.6 47.2 74.1 57.22Ensemble [43]8068.493.5 88.8 66.0 71.1 86.0 24.8 66.0 43.9 45.0 68.0 65.31 61.9 48.5 49.2 77.9 59.36GPT centroids [35]5.868.294.1 88.4 65.8 71.5 85.7 24.7 67.5 44.7 46.6 67.4 65.63 61.5 48.2 48.9 75.1 58.40GPT score mean [35]5.868.693.7 89.0 65.1 72.1 85.7 23.9 67.4 44.0 46.4 66.8 65.42 61.8 48.1 48.6 75.2 58.42Random descriptors1667.994.1 87.6 65.6 71.5 85.6 24.9 66.1 44.7 49.1 67.2 65.65 61.6 48.7 50.0 76.7 59.22+ offset trick (ours)9668.593.5 89.2 65.8 72.0 85.7 25.2 66.1 44.4 53.0 68.2 66.29 61.9 48.9 50.6 77.5 59.76Waffle CLIP [45]1668.193.5 88.4 65.4 72.0 85.9 25.9 66.2 44.1 46.3 68.0 65.58 61.8 48.6 49.8 76.2 59.08+ offset trick (ours)9668.693.1 89.5 65.9 72.1 86.1 26.3 66.2 44.2 52.5 68.8 66.49 62.1 48.9 50.2 77.1 59.59Descriptor soup (ours)16.768.994.7 89.4 66.2 72.2 86.2 25.5 67.3 45.1 46.6 68.7 66.18 62.1 48.7 49.7 76.4 59.25+ offset trick (ours)10069.193.8 89.8 66.0 72.9 86.2 25.4 66.8 45.0 51.6 69.1 66.67 62.6 49.0 50.5 77.2 59.82Word soup (ours)869.294.4 89.5 65.4 72.3 85.8 25.8 67.4 44.7 53.5 68.4 66.72 62.9 48.7 50.2 77.0 59.69Word soup score mean (ours)869.494.3 89.6 65.4 72.4 85.9 25.9 67.3 45.2 55.8 68.5 67.03 63.0 49.0 50.4 77.2 59.90gain over GPT+0.8+0.6 +0.6 +0.3 +0.3 +0.2 +2.0 -0.1 +1.2 +9.4 +1.7+1.6+1.2 +0.9 +1.8 +2.0+1.5gain over Waffle+1.3+0.8 +1.2 +0.0 +0.4 +0.0 -0.0 +1.1 +1.1 +9.5 +0.5+1.5+1.2 +0.4 +0.6 +1.0+0.8", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "same descriptor is appended to every class label. Observe that there are more than 1,000 GPT descriptors and single-word descriptors that are better than standard ZS (in red). When we further consider word chains of length 4, the number of accurate descriptors increases dramatically (orange). (Right) Scatter plot of average target accuracy vs. ImageNet accuracy of GPT descriptors. We observe a positive correlation, so descriptors trained on ImageNet are likely to generalize to other datasets.", "figure_data": "", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Select the top-k 0 and top-k 1 words, denoted as W topk0 and W topk1 , resp. k 0 < k 1 . 2. Randomly select a word w from W topk0 and initialize the descriptor d = w. 3. Shuffle W topk1 . Then, for w ′ ∈ W topk1 , append w ′ to d, only if it increases the accuracy of d. 4. return d.", "figure_data": "", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Ablation results to support the diversity loss. \"Vanilla CoOp + word soup\" refers to appending the word soup descriptors directly to soft CoOp prompts. \"CoOp ensemble\" refers to ensembling m randomly-initialized soft descriptors trained with CoOp. Observe that the model trained with our diversity loss (λ = 0.25) achieves a 1% increase in accuracy on average. This increase in accuracy cannot be achieved with label smoothing or regularization towards the initialization as inMIRO [4] and ProGrad[71]. Detailed results see Tab. 
9 in the Appendix.", "figure_data": "m Source XD MeanDG MeanINet (10 datasets) (4 datasets)", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison with ZS baselines at different model scales. † indicates a model trained by Open-AI [43]; ‡ indicates a model trained by Open-CLIP [19]. Detailed results see Tab. 12 in the Appendix.", "figure_data": "Cross-dataset Evaluation Target Meanm B/32 † B/16 † L/14 ‡ CoCa L/14 ‡g/14 ‡ZS161.32 65.02 73.1174.8277.58GPT score mean5.8 61.22 65.42 73.0875.4877.14Waffle CLIP1662.13 65.58 73.2575.3777.72Desc. soup + offsets 100 62.79 66.67 73.1975.9578.04Word soup (ours)862.24 67.03 73.5676.0878.09Domain Generalization Evaluation Target Meanm B/32 † B/16 † L/14 ‡ CoCa L/14 ‡g/14 ‡ZS147.68 57.22 64.8867.9471.37GPT score mean5.8 47.95 58.42 64.9667.6771.26Waffle CLIP1649.07 59.08 64.4767.8570.99Desc. soup + offsets 100 50.05 59.82 65.8168.3272.21Word soup (ours)850.00 59.90 65.7368.7372.05KL divergence between soft predictions of descriptor pairs on training data0.01 0.02 0.03 0.04 0.05 0.06 0.07200400 Training Iterations 600800 1000 =0.0 =0.1 =0.25 Cross-dataset Evaluation (average accuracy of 10 datasets)66.2 66.4 66.6 66.8 67.0 67.2200400 Training Iterations 600800 1000", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "verifies this interpretation by showing that train-Comparison with few-shot methods and few-shot methods stacked with ZS methods. † indicates author-reported numbers on the same datasets with the same train-test splits. Other numbers are our reproductions. All methods except the upper bound were trained on 3 random 16-shot splits of ImageNet. m indicates number of descriptors used. Either our descriptor soup with the offset trick or our word soup achieves the best accuracy on average. We use the ViT/B-16 CLIP model. Detailed results see Tab. 10 in the Appendix.", "figure_data": "mSourceXD MeanDG MeanINet(10 datasets) (4 datasets)CLIP ZS [43]167.165.0257.22CoOp [70] †71.563.8859.3Co-CoOp [69] †71.065.7459.9MaPLe [25] †70.766.3060.3CLIPood [51] †71.660.5Cross Entropy (CE)172.366.8060.39+ GPT score mean [35]5.871.766.8659.92+ Random descriptors3271.666.8960.69+ Waffle CLIP [45]3271.666.5860.65+ Descriptor soup (ours)16.772.167.1060.70+ offset trick (ours)10072.167.5161.01+ Word soup centroids (ours)871.867.1661.22+ Word soup score mean (ours)871.767.4361.32+ Descriptor soup upper bound1171.767.6261.01ProGrad [71]169.866.4858.96KgCoOp [22]169.266.1658.64ProDA [31]3270.066.2358.83Vanilla CoOp [70]170.066.5259.25+ Word soup score mean (ours)870.267.3060.25Vanilla MaPLe [25]170.766.4459.32+ Word soup score mean (ours)870.866.6560.20Vanilla CLIPood [51]172.966.5060.47+ Word soup score mean (ours)872.067.4261.23", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "", "figure_data": "of the", "figure_id": "tab_12", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Miscellaneous training details for training on 16-shotImageNet-1K in the OOD setting.", "figure_data": "word soup optimizes over a discrete parameter space using agreedy algorithm. Many prior works (e.g. [58, 59]) observethat gradient descent is limited to a narrow convex basinaround the initialization, when finetuning a pretrained deep", "figure_id": "tab_14", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Experiments using a different source dataset (a 16-shot subset of LAION-2B queried using ImageNet label names). 
Settings are identical to Table10(the expanded form of Table6in the main paper).", "figure_data": "SourceCross-dataset Evaluation TargetsDomain Generalization TargetsINetCaltechPetsCarsFlowersFoodAircraftSUNDTDEuroSATUCFMeanINet-V2SketchINet-AINet-RMeanCLIP ZS67.1 93.3 89.0 65.4 71.0 85.7 25.0 63.2 43.6 46.7 67.4 65.02 61.0 46.6 47.2 74.1 57.22Word soup68.8 94.1 89.5 65.9 72.6 86.3 26.1 67.2 45.3 53.9 67.8 66.87 62.6 49.0 50.4 77.0 59.73Vanilla CoOp68.7 94.4 90.2 66.1 70.9 85.8 26.0 66.7 47.4 50.1 68.9 66.63 61.9 48.6 49.8 76.7 59.26+ Word soup 69.1 94.6 91.1 65.2 71.8 86.0 25.1 67.4 46.0 51.9 69.1 66.82 62.7 49.4 50.3 78.0 60.09", "figure_id": "tab_15", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "We emphasize that our method adds negligible test time computation, despite requiring m text encoder evaluations per label. For classification tasks, more time is spent processing image data compared to text data. For example, the evaluation of the m = 8 word soup in Table 6 took 239 seconds, of which 234 seconds were spent evaluating image embeddings and only 4.6 seconds were spent evaluating text embeddings.", "figure_data": "SourceCross-dataset Evaluation TargetsDomain Generalization TargetsmINetCaltechPetsCarsFlowersFoodAircraftSUNDTDEuroSATUCFMeanINet-V2SketchINet-AINet-RMeanCLIP ZS167.193.3 89.0 65.4 71.0 85.7 25.0 63.2 43.6 46.7 67.4 65.02 61.0 46.6 47.2 74.1 57.22Vanilla CoOp170.094.6 91.2 65.4 71.2 86.3 24.6 66.9 48.0 48.3 68.7 66.52 63.2 48.4 49.2 76.2 59.25+ word soup869.694.6 90.8 65.2 70.3 86.0 24.8 66.9 47.6 50.7 69.0 66.59 62.9 48.2 49.6 76.3 59.26CoOp ensemble869.894.4 91.5 66.2 72.6 86.6 25.7 67.7 46.4 47.9 67.8 66.68 63.0 48.4 49.6 75.8 59.18CoOp regularized towards initialization 170.294.8 91.1 65.4 72.1 86.2 24.8 67.6 46.2 52.7 69.0 66.97 63.6 49.1 49.6 77.5 59.94+ word soup869.994.7 90.1 64.7 71.8 85.5 25.0 67.4 45.5 53.6 68.7 66.69 63.4 49.2 49.9 77.7 60.05CoOp with label smoothing170.194.5 90.6 64.9 72.0 85.8 24.6 67.3 45.4 50.0 68.6 66.37 63.4 49.1 50.2 77.6 60.09+ word soup869.994.5 89.9 64.9 71.7 85.2 25.0 66.8 44.8 50.0 68.3 66.13 63.6 49.3 50.1 77.7 60.16CoOp + word soup (λ = 0)869.894.3 90.8 64.8 71.1 86.0 24.1 67.2 46.8 48.4 68.8 66.21 63.2 48.3 49.0 76.1 59.15+ our diversity loss (λ = 0.25)870.294.7 91.0 65.4 72.3 86.0 24.8 67.8 45.9 55.2 69.2 67.23 63.6 49.3 50.1 77.9 60.20", "figure_id": "tab_16", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Ablation results to support the diversity loss. \"Vanilla CoOp + word soup\" refers to naively appending the word soup descriptors trained on the pretrained model to the separately trained soft CoOp prompts. \"CoOp ensemble\" refers to ensembling m randomly-initialized soft descriptors. This requires running CoOp m times, but offers negligible gains in accuracy. In the second half of the table, we fix the descriptor tokens and train the prompt tokens only. 
We first run CoOp with standard CE training (λ = 0) and observe a decrease in accuracy compared to the naive \"Vanilla CoOp + word soup\" baseline, caused by the diversity collapse issue observed in Figure", "figure_data": "", "figure_id": "tab_17", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Table 4 in the main paper.", "figure_data": "SourceCross-dataset Evaluation TargetsDomain Generalization TargetsmINetCaltechPetsCarsFlowersFoodAircraftSUNDTDEuroSATUCFMeanINet-V2SketchINet-AINet-RMeanCLIP ZS [43]167.193.3 89.0 65.4 71.0 85.7 25.0 63.2 43.6 46.7 67.4 65.02 61.0 46.6 47.2 74.1 57.22CoOp [70] †71.51 93.70 89.14 64.51 68.71 85.30 18.47 64.15 41.92 46.39 66.55 63.88 64.20 47.99 49.71 75.21 59.3Co-CoOp [69] †71.02 94.43 90.14 65.32 71.88 86.06 22.94 67.36 45.73 45.37 68.21 65.74 64.07 48.75 50.63 76.18 59.9MaPLe [25] †70.72 93.53 90.49 65.57 72.23 86.20 24.74 67.01 46.49 48.06 68.69 66.30 64.07 49.15 50.90 76.98 60.3CLIPood [51] †71.664.9 49.3 50.4 77.2 60.5Cross Entropy (CE)172.394.6 89.8 64.9 72.4 86.3 25.3 68.1 45.7 51.5 69.4 66.80 65.4 49.4 49.8 77.0 60.39+ GPT score mean [35]5.871.794.3 89.9 64.5 72.1 86.0 24.5 68.6 46.6 53.8 68.4 66.86 64.9 49.4 48.8 76.6 59.92+ Random descriptors3271.694.6 89.3 64.7 72.1 86.0 25.3 67.5 45.4 55.2 68.8 66.89 64.8 49.9 50.2 77.9 60.69+ Waffle CLIP [45]3271.694.1 89.8 65.0 72.6 86.1 26.1 67.7 45.0 50.9 68.4 66.58 65.1 49.7 50.3 77.4 60.65+ Descriptor soup (ours)16.7 72.194.7 89.9 65.0 72.4 86.3 25.6 68.0 45.6 53.9 69.5 67.10 65.3 49.7 50.1 77.7 60.70+ offset trick (ours)10072.194.1 90.4 66.3 73.3 86.3 26.1 67.8 46.4 55.0 69.4 67.51 65.3 49.8 50.8 78.2 61.01+ Word soup centroids (ours)871.894.4 90.4 65.0 72.3 86.1 25.3 68.2 45.5 55.4 69.1 67.16 65.2 50.2 50.7 78.7 61.22+ Word soup score mean (ours)871.794.5 90.2 65.1 72.4 86.2 25.6 68.1 45.6 57.3 69.3 67.43 65.3 50.3 50.9 78.7 61.32+ Descriptor soup upper bound 1171.794.4 90.2 66.5 72.9 86.1 26.3 67.4 46.4 57.2 68.6 67.62 64.9 49.7 50.9 78.6 61.01ProGrad [71]169.894.4 91.5 65.8 72.4 86.4 25.3 66.6 47.2 46.3 69.0 66.48 63.2 48.2 48.6 75.9 58.96KgCoOp [22]169.294.3 89.9 63.9 71.0 85.7 23.7 66.2 44.4 54.4 68.3 66.16 62.3 48.0 48.8 75.5 58.64ProDA [31]3270.094.2 90.2 64.7 70.8 85.7 23.1 67.0 45.8 51.4 69.4 66.23 63.0 48.1 48.4 75.7 58.83Vanilla CoOp [70]170.094.6 91.2 65.4 71.2 86.3 24.6 66.9 48.0 48.3 68.7 66.52 63.2 48.4 49.2 76.2 59.25+ Word soup score mean (ours)870.294.7 90.9 65.4 72.0 86.0 25.0 67.7 45.9 56.2 69.2 67.30 63.6 49.3 50.1 77.9 60.25Vanilla MaPLe [25]170.793.7 91.2 65.4 71.9 86.2 25.0 67.2 46.2 48.6 68.9 66.44 63.9 48.6 48.4 76.3 59.32+ Word soup score mean (ours)870.894.1 91.2 65.2 71.8 85.8 24.0 67.0 46.0 53.5 68.0 66.65 64.0 49.6 49.2 77.9 60.20Vanilla CLIPood [51]172.994.8 89.8 64.9 72.2 85.9 25.8 67.8 46.4 48.7 68.7 66.50 66.0 49.5 49.5 76.9 60.47+ Word soup score mean (ours)872.094.4 90.8 64.8 72.4 86.0 25.4 67.9 46.0 57.6 68.9 67.42 65.5 50.2 50.8 78.5 61.23", "figure_id": "tab_18", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison with few-shot methods and few-shot methods stacked with ZS methods. † indicates author-reported numbers on the same datasets with the same train-test splits. Other numbers are from our reproductions using our github code. We tune all baselines on a withheld validation set, so our numbers are different from published numbers. 
The descriptor soup upper bound was trained to maximize average cross-dataset accuracy (on test data); this loosely approximates the maximally achievable accuracy on these benchmarks without using extra information. All other methods were trained on 3 random 16-shot splits of ImageNet. m indicates number of descriptors used. All methods are evaluated on top of 3 models finetuned with different random seeds. Due to space limitations, we only compare with ZS baselines stacked on top of the CE-finetuned few-shot model, since this is the best finetuned model. Either our descriptor soup with the offset trick or our word soup achieves the best accuracy on most datasets. Finally, we stack our word soup method on top of CoOp, MaPLe, and CLIPood finetuned models to show that word soup is complementary to most existing robust finetuning methods. Average of 3 trials. This is an expanded version of Table6in the main paper. .0 65.7 72.6 86.1 24.9 67.8 45.6 55.5 69.2 67.30 63.7 49.5 50.5 77.9 60.39", "figure_data": "SourceCross-dataset Evaluation TargetsDomain Generalization Targetsparameters(thousands)INetCaltechPetsCarsFlowersFoodAircraftSUNDTDEuroSATUCFAverageINet-V2INet-SketchINet-AINet-RAverageVPT shallow 1 token0.76868.793.8 90.0 65.1 69.5 85.3 24.2 66.0 44.7 41.9 67.8 64.84 62.1 47.9 47.9 76.7 58.67VPT shallow 2 tokens268.793.8 90.0 65.2 69.5 85.2 24.2 66.2 44.8 42.3 67.1 64.84 62.2 48.0 47.3 76.7 58.54VPT shallow 3 tokens268.793.9 90.0 65.6 70.2 85.3 24.8 66.2 44.7 43.8 67.5 65.20 62.4 48.1 47.0 76.6 58.52VPT shallow 3 tokens268.693.8 89.5 64.8 70.1 85.3 24.1 66.1 44.5 45.4 67.7 65.12 62.1 48.0 47.1 76.4 58.41VPT deep 2 layers568.893.5 89.7 65.0 70.3 85.4 24.0 65.9 44.7 49.3 67.6 65.54 62.2 48.2 46.9 76.6 58.47VPT deep 3 layers768.793.5 89.4 65.3 70.4 85.3 24.2 66.2 44.8 45.0 67.5 65.16 62.3 48.2 46.8 76.4 58.42MaPLe 1 layer39670.194.2 91.1 64.3 71.1 86.1 24.5 67.0 47.3 51.8 68.6 66.61 63.4 48.4 48.8 76.3 59.22MaPLe 2 layers39770.493.6 91.8 64.3 71.3 85.9 24.7 67.0 46.9 48.1 68.5 66.21 63.7 48.3 49.2 76.1 59.34MaPLe 3 layers39970.793.7 91.2 65.4 71.9 86.2 25.0 67.2 46.2 48.6 68.9 66.44 63.9 48.6 48.4 76.3 59.32bitfit last layer1768.394.1 89.5 65.2 71.4 85.9 24.9 65.7 44.7 46.9 67.9 65.62 61.7 48.0 48.5 75.9 58.51bitfit last 2 layers3468.893.9 89.9 65.3 71.4 85.9 25.1 66.4 45.1 47.4 68.4 65.88 62.1 48.6 48.5 76.6 58.93bitfit last 3 layers5169.193.9 90.0 65.3 71.7 85.8 25.0 66.7 45.4 48.3 68.4 66.05 62.6 48.7 48.5 76.8 59.12CoOp 1 token0.51269.494.3 91.4 64.4 71.7 86.3 24.6 67.2 47.3 49.1 68.5 66.49 63.1 48.2 49.0 76.1 59.08CoOp 2 tokens169.994.6 91.6 65.5 72.0 86.1 25.0 66.8 48.2 49.6 69.4 66.89 63.2 48.5 48.8 76.3 59.20CoOp 3 tokens270.294.5 91.0 66.0 71.6 86.3 24.6 66.8 47.6 49.0 68.9 66.63 63.4 48.5 49.5 76.3 59.45ProGrad 1 token0.51269.494.2 91.0 65.6 72.7 86.4 25.1 66.2 46.0 48.2 68.5 66.39 62.8 48.1 48.5 75.7 58.77ProGrad 2 tokens169.594.1 90.8 65.7 72.6 86.3 24.8 66.5 45.5 47.7 68.7 66.28 62.8 48.0 48.5 75.7 58.75ProGrad 3 tokens269.894.4 91.5 65.8 72.4 86.4 25.3 66.6 47.2 46.3 69.0 66.48 63.2 48.2 48.6 75.9 58.96KgCoOp 1 token0.51268.693.4 89.4 63.4 70.9 85.9 23.8 65.6 44.9 52.5 68.1 65.80 62.0 47.8 49.1 75.7 58.63KgCoOp 2 tokens169.093.3 89.3 62.8 70.2 85.8 23.8 66.0 45.4 53.0 69.0 65.85 62.4 48.0 49.1 75.9 58.85KgCoOp 3 tokens269.294.3 89.9 63.9 71.0 85.7 23.7 66.2 44.4 54.4 68.3 66.16 62.3 48.0 48.8 75.5 58.64ProDA ensemble size 42070.594.3 90.4 65.3 71.2 86.1 24.9 67.2 46.4 50.4 69.4 66.54 63.6 48.6 49.4 76.0 59.43ProDA ensemble size 84170.193.8 90.3 65.1 71.0 85.8 24.9 67.4 45.5 49.4 
68.4 66.15 63.3 48.8 49.5 76.6 59.55ProDA ensemble size 168269.994.3 90.5 64.5 70.8 85.6 24.3 66.6 45.2 48.4 68.8 65.90 63.1 48.4 48.9 76.1 59.13ProDA ensemble size 3216470.094.2 90.2 64.7 70.8 85.7 23.1 67.0 45.8 51.4 69.4 66.23 63.0 48.1 48.4 75.7 58.83ProDA ensemble size 6432869.494.4 90.0 64.5 69.5 85.1 22.7 66.4 44.9 49.6 67.8 65.49 62.7 48.0 48.7 76.2 58.91CLIP-adapter reduction=128467.193.3 89.0 65.3 70.9 85.7 25.1 63.3 43.5 46.6 67.4 65.00 60.9 46.6 47.2 74.1 57.18CLIP-adapter reduction=64867.193.3 88.8 65.4 71.1 85.7 24.9 63.3 43.5 46.5 67.2 64.97 60.9 46.5 47.2 74.0 57.17CLIP-adapter reduction=321667.493.2 88.4 65.2 70.1 85.6 24.9 64.1 44.0 46.3 66.8 64.84 60.9 46.9 47.9 74.5 57.55CLIP-adapter reduction=163367.693.3 88.3 64.9 70.1 85.6 24.5 64.4 43.9 46.7 66.8 64.86 61.2 47.2 48.4 75.1 57.98CLIP-adapter reduction=86667.993.4 88.7 65.4 70.2 85.7 24.8 65.1 44.3 46.6 66.7 65.09 61.5 47.5 48.5 75.3 58.21CLIP-adapter reduction=413167.893.4 89.0 65.2 70.2 85.7 24.5 65.2 44.2 46.0 66.8 65.02 61.5 47.5 48.3 75.1 58.12SSF last layer1268.194.0 89.5 65.4 71.0 85.7 24.7 65.6 45.3 51.6 68.5 66.13 61.6 47.8 46.4 75.7 57.87SSF last 2 layers2568.594.1 89.9 65.1 71.2 85.8 24.8 66.3 45.9 49.1 68.2 66.04 62.1 48.3 47.2 76.3 58.46SSF last 3 layers3768.594.2 89.5 64.9 71.2 85.3 24.4 66.2 45.8 49.3 67.8 65.86 62.1 48.1 47.2 76.3 58.44LoRA rank=11867.393.5 89.3 65.4 71.3 85.7 25.1 64.2 44.4 47.9 67.6 65.43 61.4 47.1 46.9 74.9 57.59LoRA rank=23767.693.7 90.0 65.7 71.2 85.7 25.3 65.6 45.9 49.6 67.8 66.05 61.9 47.7 45.3 75.6 57.62LoRA rank=47467.693.8 90.1 65.7 71.5 85.7 25.2 65.4 46.0 50.9 67.7 66.19 61.8 47.7 46.2 76.0 57.93LoRA rank=814768.093.9 90.0 65.7 71.4 85.4 25.5 65.9 46.3 52.6 67.2 66.39 61.9 47.1 42.2 74.4 56.40ResBlock-adapter reduction=1285568.093.8 89.2 64.0 71.1 84.7 23.3 65.1 45.3 46.0 67.6 65.01 61.2 47.4 47.2 75.5 57.81ResBlock-adapter reduction=6411168.894.0 89.7 64.2 70.8 85.0 23.5 65.8 45.5 46.9 68.0 65.35 61.8 48.0 48.0 76.3 58.52ResBlock-adapter reduction=3222169.194.2 90.0 64.4 71.4 85.3 23.2 66.1 45.2 46.8 67.4 65.41 62.5 48.1 48.3 76.8 58.94ResBlock-adapter reduction=1644269.394.2 89.9 64.2 71.3 85.3 23.8 66.4 45.6 47.5 67.9 65.60 62.8 48.4 48.4 76.9 59.12ResBlock-adapter reduction=888569.594.1 89.5 64.6 71.3 85.6 23.6 66.6 44.8 45.3 67.9 65.33 63.0 48.6 48.8 77.0 59.36ResBlock-adapter reduction=4176969.794.1 89.5 64.8 71.2 85.5 24.0 66.8 44.9 46.8 67.8 65.55 63.1 48.7 49.0 77.1 59.48Word Soup m = 10.01268.693.9 89.2 64.6 71.8 86.0 24.7 65.9 44.2 48.0 67.7 65.61 62.1 47.9 49.7 76.3 59.01Word Soup m = 20.02469.094.1 90.3 65.6 72.5 86.0 25.5 66.9 45.0 52.0 68.6 66.64 62.4 48.8 50.2 76.6 59.50Word Soup m = 40.04869.394.1 89.9 65.9 72.4 86.5 25.7 67.1 45.8 53.6 68.7 66.96 62.9 48.9 50.3 77.2 59.80Word Soup m = 80.09669.494.1 89.9 65.7 72.5 86.4 25.9 67.0 44.9 54.6 68.8 66.99 63.1 49.0 50.5 77.3 59.95Word Soup m = 160.19269.594.0 89.9 65.9 72.5 86.3 26.1 67.4 45.2 54.8 68.8 67.08 63.2 49.0 50.7 77.2 60.02Word Soup m = 320.38469.694.2 89.9 65.9 72.4 86.5 26.2 67.4 45.1 54.7 69.0 67.12 63.2 49.0 50.6 77.3 60.04Word Soup m = 640.76769.594.1 90.0 65.9 72.5 86.4 26.2 67.4 45.2 55.1 69.0 67.17 63.3 49.1 50.7 77.4 60.11Word Soup + CoOp m = 4270.294.5 91.0 65.6 72.3 86.0 25.1 67.7 45.7 56.1 68.6 67.26 63.7 49.3 50.1 77.9 60.26Word Soup + CoOp m = 8270.294.4 91.0 65.3 72.1 86.1 25.2 67.7 45.5 55.5 68.7 67.15 63.5 49.3 50.2 78.0 60.25Word Soup + CoOp m = 16270.294.5 91", "figure_id": "tab_19", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Detailed numerical results for 
PEFT comparison in Fig.2. Average of 3 trials. These results are plotted in Figure2of the main paper. Also reference Section 7 (Results) for a discussion.", "figure_data": "SourceCross-dataset Evaluation TargetsDomain Generalization TargetsmINetCaltechPetsCarsFlowersFoodAircraftSUNDTDEuroSATUCFMeanINet-V2SketchINet-AINet-RMeanOpen-AI CLIP ViT-B/32ZS161.991.587.460.366.480.219.162.242.340.363.561.3254.640.729.166.347.68GPT score mean5.863.091.888.160.066.680.219.164.443.136.262.761.2255.441.029.465.947.95Waffle CLIP1663.391.888.060.967.480.419.663.841.744.863.062.1355.841.631.167.849.07Desc. soup + offsets10064.191.587.760.766.980.419.964.443.648.364.562.7956.542.631.869.350.05Word soup864.591.588.060.467.080.919.364.642.045.563.262.2456.942.532.068.750.00Open CLIP ViT-L/14ZS173.396.492.992.075.885.734.172.757.352.172.173.1165.661.047.285.764.88GPT score mean5.873.696.792.891.276.585.333.772.758.651.671.773.0866.161.247.585.164.96Waffle CLIP1672.796.192.491.776.485.834.472.458.652.272.573.2565.360.746.585.464.47Desc. soup + offsets10074.096.692.892.076.385.534.572.759.150.072.373.1966.061.948.786.665.81Word soup874.396.592.192.276.086.035.073.658.552.973.073.5666.861.648.286.365.73Open CLIP CoCa-L/14ZS175.197.693.892.777.387.536.673.657.258.573.474.8267.563.553.887.067.94GPT score mean5.874.997.693.792.476.287.336.373.958.964.973.675.4867.663.552.886.867.67Waffle CLIP1675.097.593.992.777.387.537.473.157.563.073.975.3767.563.852.887.367.85Desc. soup + offsets10075.597.593.992.677.587.337.273.861.163.675.075.9568.064.253.287.968.32Word soup875.997.593.877.887.738.474.160.563.574.776.0868.864.054.387.968.73Open CLIP ViT-g/14ZS177.797.793.693.581.690.044.174.365.355.880.077.5870.466.459.789.071.37GPT score mean5.877.697.293.793.681.489.643.174.763.158.776.377.1471.066.358.888.971.26Waffle CLIP1677.397.893.593.781.389.844.174.165.858.078.977.7270.165.959.088.970.99Desc. soup + offsets10078.097.894.193.980.789.243.175.067.060.479.278.0471.567.260.290.072.21Word soup878.497.693.793.981.489.844.075.066.060.079.578.0971.667.160.089.672.05", "figure_id": "tab_20", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Detailed numerical results for different model scales. This is an expanded version of Table5. Average of 3 trials.", "figure_data": "", "figure_id": "tab_21", "figure_label": "12", "figure_type": "table" } ]
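The greedy descriptor-construction steps listed in the captions above (take the top-k0 and top-k1 words, initialise the chain with a random top-k0 word, shuffle the top-k1 list, and append a word only when it raises few-shot training accuracy) are compact enough to sketch directly. The snippet below is only an illustration of those four steps: greedy_word_descriptor, train_accuracy, and the candidate vocabulary are hypothetical names, and the actual procedure scores candidates with a frozen CLIP model on the few-shot ImageNet split.

```python
import random

def greedy_word_descriptor(words, train_accuracy, k0=250, k1=1000, seed=0):
    """Greedy word-chain construction (sketch of steps 1-4 above).

    words: candidate words, assumed pre-sorted by their individual accuracy
           when appended to every class name (best first).
    train_accuracy: callable mapping a descriptor string to few-shot training
           accuracy; in the paper this wraps a frozen CLIP model, here it is
           just an assumed black box.
    """
    rng = random.Random(seed)
    top_k0, top_k1 = words[:k0], list(words[:k1])

    # Step 2: initialize the descriptor with a random high-scoring word.
    descriptor = rng.choice(top_k0)
    best_acc = train_accuracy(descriptor)

    # Step 3: greedily append shuffled words that improve training accuracy.
    rng.shuffle(top_k1)
    for w in top_k1:
        candidate = f"{descriptor} {w}"
        acc = train_accuracy(candidate)
        if acc > best_acc:
            descriptor, best_acc = candidate, acc
    return descriptor  # Step 4

# A word soup is m such descriptors built with different random seeds, e.g.:
# soup = [greedy_word_descriptor(vocab, acc_fn, seed=i) for i in range(8)]
```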
Christopher Liao; Theodoros Tsiligkaridis; Brian Kulis
[ { "authors": "James Urquhart Allingham; Jie Ren; Xiuye Michael W Dusenberry; Yin Gu; Dustin Cui; Jeremiah Zhe Tran; Balaji Liu; Lakshminarayanan", "journal": "PMLR", "ref_id": "b0", "title": "A simple zero-shot prompt weighting technique to improve prompt ensembling in text-image models", "year": "2023" }, { "authors": "Lukas Bossard; Matthieu Guillaumin; Luc Van Gool", "journal": "", "ref_id": "b1", "title": "Food-101 -mining discriminative components with random forests", "year": "2014" }, { "authors": "Adrian Bulat; Georgios Tzimiropoulos", "journal": "", "ref_id": "b2", "title": "Lasp: Text-totext optimization for language-aware soft prompting of vision & language models", "year": "2023" }, { "authors": "Junbum Cha; Kyungjae Lee; Sungrae Park; Sanghyuk Chun", "journal": "Springer", "ref_id": "b3", "title": "Domain generalization by mutual-information regularization with pre-trained models", "year": "2022" }, { "authors": "Guangyi Chen; Weiran Yao; Xiangchen Song; Xinyue Li; Yongming Rao; Kun Zhang", "journal": "", "ref_id": "b4", "title": "Prompt learning with optimal transport for vision-language models", "year": "2022" }, { "authors": "Junhyeong Cho; Gilhyun Nam; Sungyeon Kim; Hunmin Yang; Suha Kwak", "journal": "", "ref_id": "b5", "title": "Promptstyler: Prompt-driven style generation for source-free domain generalization", "year": "2023" }, { "authors": "Mircea Cimpoi; Subhransu Maji; Iasonas Kokkinos; Sammy Mohamed; Andrea Vedaldi", "journal": "", "ref_id": "b6", "title": "Describing textures in the wild", "year": "2013" }, { "authors": "Rajshekhar Das; Yonatan Dukler; Avinash Ravichandran; Ashwin Swaminathan", "journal": "", "ref_id": "b7", "title": "Learning expressive prompting with residuals for vision transformers", "year": "2023" }, { "authors": "Mohammad Mahdi Derakhshani; Enrique Sanchez; Adrian Bulat; G Turrisi Victor; Da Costa; G M Cees; Georgios Snoek; Brais Tzimiropoulos; Martinez", "journal": "", "ref_id": "b8", "title": "Bayesian prompt learning for image-language model generalization", "year": "2023" }, { "authors": "Chun-Mei Feng; Kai Yu; Yong Liu; Salman Khan; Wangmeng Zuo", "journal": "", "ref_id": "b9", "title": "Diverse data augmentation with diffusions for effective test-time prompt tuning", "year": "2023" }, { "authors": "Peng Gao; Shijie Geng; Renrui Zhang; Teli Ma; Rongyao Fang; Yongfeng Zhang; Hongsheng Li; Yu Qiao", "journal": "International Journal of Computer Vision", "ref_id": "b10", "title": "Clipadapter: Better vision-language models with feature adapters", "year": "2023" }, { "authors": "Xuehai He; Diji Yang; Weixi Feng; Tsu-Jui Fu; Arjun Akula; Varun Jampani; Pradyumna Narayana; Sugato Basu; William Yang; Wang ; Xin Eric; Wang ", "journal": "", "ref_id": "b11", "title": "Cpl: Counterfactual prompt learning for vision and language models", "year": "2022" }, { "authors": "Patrick Helber; Benjamin Bischke; Andreas Dengel; Damian Borth", "journal": "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing", "ref_id": "b12", "title": "Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification", "year": "2019" }, { "authors": "Dan Hendrycks; Steven Basart; Norman Mu; Saurav Kadavath; Frank Wang; Evan Dorundo; Rahul Desai; Tyler Zhu; Samyak Parajuli; Mike Guo", "journal": "", "ref_id": "b13", "title": "The many faces of robustness: A critical analysis of out-of-distribution generalization", "year": "2021" }, { "authors": "Dan Hendrycks; Kevin Zhao; Steven Basart; Jacob Steinhardt; Dawn Song", 
"journal": "", "ref_id": "b14", "title": "Natural adversarial examples", "year": "2021" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "PMLR", "ref_id": "b15", "title": "Parameter-efficient transfer learning for nlp", "year": "2019" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b16", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "Zi-Yuan Hu; Yanyang Li; Liwei Michael R Lyu; Wang", "journal": "", "ref_id": "b17", "title": "Vl-pet: Vision-and-language parameter-efficient tuning via granularity control", "year": "2023" }, { "authors": "Gabriel Ilharco; Mitchell Wortsman; Ross Wightman; Cade Gordon; Nicholas Carlini; Rohan Taori; Achal Dave; Vaishaal Shankar; Hongseok Namkoong; John Miller; Hannaneh Hajishirzi; Ali Farhadi; Ludwig Schmidt; Openclip", "journal": "", "ref_id": "b18", "title": "If you use this software", "year": "2021" }, { "authors": "Menglin Jia; Luming Tang; Bor-Chun Chen; Claire Cardie; Serge Belongie; Bharath Hariharan; Ser-Nam Lim", "journal": "Springer", "ref_id": "b19", "title": "Visual prompt tuning", "year": "2022" }, { "authors": "Shibo Jie; Zhi-Hong Deng", "journal": "", "ref_id": "b20", "title": "Convolutional bypasses are better vision transformer adapters", "year": "2022" }, { "authors": "Baoshuo Kan; Teng Wang; Wenpeng Lu; Xiantong Zhen; Weili Guan; Feng Zheng", "journal": "", "ref_id": "b21", "title": "Knowledge-aware prompt tuning for generalizable vision-language models", "year": "2023" }, { "authors": "Guoliang Kang; Lu Jiang; Yi Yang; Alexander G Hauptmann", "journal": "", "ref_id": "b22", "title": "Contrastive adaptation network for unsupervised domain adaptation", "year": "2019" }, { "authors": "Prannay Kaul; Weidi Xie; Andrew Zisserman", "journal": "", "ref_id": "b23", "title": "Multimodal classifiers for open-vocabulary object detection", "year": "2023" }, { "authors": "Muhammad Uzair Khattak; Hanoona Rasheed; Muhammad Maaz; Salman Khan; Fahad Shahbaz Khan", "journal": "", "ref_id": "b24", "title": "Maple: Multimodal prompt learning", "year": "2023" }, { "authors": "Muhammad Uzair; Khattak ; Syed Talal Wasim; Muzammal Naseer; Salman Khan; Ming-Hsuan Yang; Fahad Shahbaz Khan", "journal": "", "ref_id": "b25", "title": "Self-regulating prompts: Foundational model adaptation without forgetting", "year": "2023" }, { "authors": "Jonathan Krause; Michael Stark; Jia Deng; Li Fei-Fei", "journal": "", "ref_id": "b26", "title": "3d object representations for fine-grained categorization", "year": "2013" }, { "authors": "Fei-Fei Li; Marco Andreeto; Marc'aurelio Ranzato; Pietro Perona", "journal": "Caltech", "ref_id": "b27", "title": "", "year": "2022" }, { "authors": "Dongze Lian; Daquan Zhou; Jiashi Feng; Xinchao Wang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b28", "title": "Scaling & shifting your features: A new baseline for efficient model tuning", "year": "2022" }, { "authors": "Zhiqiu Lin; Samuel Yu; Zhiyi Kuang; Deepak Pathak; Deva Ramanan", "journal": "", "ref_id": "b29", "title": "Multimodality helps unimodality: Crossmodal few-shot learning with multimodal models", "year": "2023" }, { "authors": "Yuning Lu; Jianzhuang Liu; Yonggang Zhang; Yajing Liu; Xinmei Tian", "journal": "", "ref_id": "b30", "title": "Prompt distribution learning", "year": "2022" }, 
{ "authors": "Gen Luo; Minglang Huang; Yiyi Zhou; Xiaoshuai Sun; Guannan Jiang; Zhiyu Wang; Rongrong Ji", "journal": "", "ref_id": "b31", "title": "Towards efficient visual adaption via structural re-parameterization", "year": "" }, { "authors": "Xiaosong Ma; Jie Zhang; Song Guo; Wenchao Xu", "journal": "", "ref_id": "b32", "title": "Swapprompt: Test-time prompt adaptation for vision-language models", "year": "2023" }, { "authors": "Subhransu Maji; Esa Rahtu; Juho Kannala; Matthew B Blaschko; Andrea Vedaldi", "journal": "", "ref_id": "b33", "title": "Fine-grained visual classification of aircraft", "year": "2013" }, { "authors": "Sachit Menon; Carl Vondrick", "journal": "", "ref_id": "b34", "title": "Visual classification via description from large language models", "year": "2007" }, { "authors": "Maria-Elena Nilsback; Andrew Zisserman", "journal": "", "ref_id": "b35", "title": "Automated flower classification over a large number of classes", "year": "2008" }, { "authors": "Laura Niss; Kevin Vogt-Lowell; Theodoros Tsiligkaridis", "journal": "", "ref_id": "b36", "title": "Quantified task misalignment to inform PEFT: An exploration of domain generalization and catastrophic forgetting in CLIP", "year": "2024" }, { "authors": "Zachary Novack; Julian Mcauley; Zachary Chase Lipton; Saurabh Garg", "journal": "PMLR", "ref_id": "b37", "title": "Chils: Zero-shot image classification with hierarchical label sets", "year": "2023" }, { "authors": "Omiros Pantazis; Gabriel Brostow; Kate Jones; Oisin Mac; Aodha ", "journal": "", "ref_id": "b38", "title": "Svl-adapter: Self-supervised adapter for vision-language pretrained models", "year": "2022" }, { "authors": "Andrea Omkar M Parkhi; Andrew Vedaldi; Zisserman; Jawahar", "journal": "IEEE", "ref_id": "b39", "title": "Cats and dogs", "year": "2012" }, { "authors": "Fang Peng; Xiaoshan Yang; Linhui Xiao; Yaowei Wang; Changsheng Xu", "journal": "IEEE Transactions on Multimedia", "ref_id": "b40", "title": "Sgva-clip: Semantic-guided visual adapting of vision-language models for few-shot image classification", "year": "2023" }, { "authors": "Sarah Pratt; Ian Covert; Rosanne Liu; Ali Farhadi", "journal": "", "ref_id": "b41", "title": "What does a platypus look like? 
generating customized prompts for zero-shot image classification", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "", "ref_id": "b42", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Benjamin Recht; Rebecca Roelofs; Ludwig Schmidt; Vaishaal Shankar", "journal": "", "ref_id": "b43", "title": "Do imagenet classifiers generalize to imagenet?", "year": "2019" }, { "authors": "Karsten Roth; Jae ; Myung Kim; A Koepke; Oriol Vinyals; Cordelia Schmid; Zeynep Akata", "journal": "", "ref_id": "b44", "title": "Waffling around for performance: Visual classification with random words and broad concepts", "year": "2023" }, { "authors": "Olga Russakovsky; Jia Deng; Hao Su; Jonathan Krause; Sanjeev Satheesh; Sean Ma; Zhiheng Huang; Aditya Karpathy; Michael Khosla; Alexander C Bernstein; Li Berg; Fei-Fei", "journal": "International Journal of Computer Vision (IJCV)", "ref_id": "b45", "title": "ImageNet Large Scale Visual Recognition Challenge", "year": "2015" }, { "authors": "Kuniaki Saito; Donghyun Kim; Stan Sclaroff; Trevor Darrell; Kate Saenko", "journal": "", "ref_id": "b46", "title": "Semi-supervised domain adaptation via minimax entropy", "year": "2019" }, { "authors": "Jameel Hassan; Abdul Samadh; Hanan Gani; Noor Hazim Hussein; Muhammad Uzair Khattak; Muzammal Naseer; Fahad Khan; Salman Khan", "journal": "", "ref_id": "b47", "title": "Align your prompts: Test-time prompting with distribution alignment for zero-shot generalization", "year": "2023" }, { "authors": "Cheng Shi; Sibei Yang", "journal": "", "ref_id": "b48", "title": "Logoprompt: Synthetic text images can be good visual prompts for vision-language models", "year": "2023" }, { "authors": "Manli Shu; Weili Nie; De-An Huang; Zhiding Yu; Tom Goldstein; Anima Anandkumar; Chaowei Xiao", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b49", "title": "Test-time prompt tuning for zero-shot generalization in vision-language models", "year": "2022" }, { "authors": "Yang Shu; Xingzhuo Guo; Jialong Wu; Ximei Wang; Jianmin Wang; Mingsheng Long", "journal": "", "ref_id": "b50", "title": "Clipood: Generalizing clip to out-of-distributions", "year": "2023" }, { "authors": "Khurram Soomro; Mubarak Amir Roshan Zamir; Shah", "journal": "", "ref_id": "b51", "title": "UCF101: A dataset of 101 human actions classes from videos in the wild", "year": "2012" }, { "authors": "Yi-Lin Sung; Jaemin Cho; Mohit Bansal", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b52", "title": "Lst: Ladder side-tuning for parameter and memory efficient transfer learning", "year": "2022" }, { "authors": "Yi-Lin Sung; Jaemin Cho; Mohit Bansal", "journal": "", "ref_id": "b53", "title": "Vl-adapter: Parameter-efficient transfer learning for vision-and-language tasks", "year": "2022" }, { "authors": "Kevin Vogt-Lowell; Noah Lee; Theodoros Tsiligkaridis; Marc Vaillant", "journal": "", "ref_id": "b54", "title": "Robust fine-tuning of vision-language models for domain generalization", "year": "2023" }, { "authors": "Haohan Wang; Songwei Ge; Eric P Xing; Zachary C Lipton", "journal": "", "ref_id": "b55", "title": "Learning robust global representations by penalizing local predictive power", "year": "2019" }, { "authors": "Zifeng Wang; Zizhao Zhang; Chen-Yu Lee; Han Zhang; Ruoxi Sun; Xiaoqi Ren; 
Guolong Su; Vincent Perot; Jennifer Dy; Tomas Pfister", "journal": "", "ref_id": "b56", "title": "Learning to prompt for continual learning", "year": "2022" }, { "authors": "Mitchell Wortsman; Gabriel Ilharco; Ya Samir; Rebecca Gadre; Raphael Roelofs; Ari S Gontijo-Lopes; Hongseok Morcos; Ali Namkoong; Yair Farhadi; Simon Carmon; Kornblith", "journal": "PMLR", "ref_id": "b57", "title": "Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time", "year": "2022" }, { "authors": "Mitchell Wortsman; Gabriel Ilharco; Jong Wook Kim; Mike Li; Simon Kornblith; Rebecca Roelofs; Raphael Gontijo Lopes; Hannaneh Hajishirzi; Ali Farhadi; Hongseok Namkoong", "journal": "", "ref_id": "b58", "title": "Robust fine-tuning of zero-shot models", "year": "2022" }, { "authors": "Jianxiong Xiao; James Hays; Krista A Ehinger; Aude Oliva; Antonio Torralba", "journal": "", "ref_id": "b59", "title": "Sun database: Large-scale scene recognition from abbey to zoo", "year": "2010" }, { "authors": "Yinghui Xing; Qirui Wu; De Cheng; Shizhou Zhang; Guoqiang Liang; Yanning Zhang", "journal": "", "ref_id": "b60", "title": "Class-aware visual prompt tuning for vision-language pre-trained model", "year": "2022" }, { "authors": "Hantao Yao; Rui Zhang; Changsheng Xu", "journal": "", "ref_id": "b61", "title": "Visuallanguage prompt tuning with knowledge-guided context optimization", "year": "2023" }, { "authors": "Tao Yu; Zhihe Lu; Xin Jin; Zhibo Chen; Xinchao Wang", "journal": "", "ref_id": "b62", "title": "Task residual for tuning vision-language models", "year": "2023" }, { "authors": "Elad Ben Zaken; Shauli Ravfogel; Yoav Goldberg", "journal": "", "ref_id": "b63", "title": "Bitfit: Simple parameter-efficient fine-tuning for transformer-based masked language-models", "year": "2021" }, { "authors": "Yuhang Zang; Wei Li; Kaiyang Zhou; Chen Huang; Chen Change Loy", "journal": "", "ref_id": "b64", "title": "Unified vision and language prompt learning", "year": "2022" }, { "authors": "Renrui Zhang; Rongyao Fang; Wei Zhang; Peng Gao; Kunchang Li; Jifeng Dai; Yu Qiao; Hongsheng Li", "journal": "", "ref_id": "b65", "title": "Tipadapter: Training-free clip-adapter for better vision-language modeling", "year": "2021" }, { "authors": "Yuchen Zhang; Tianle Liu; Mingsheng Long; Michael Jordan", "journal": "PMLR", "ref_id": "b66", "title": "Bridging theory and algorithm for domain adaptation", "year": "2019" }, { "authors": "Kaiyang Zhou; Ziwei Liu; Yu Qiao; Tao Xiang; Chen Change Loy", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b67", "title": "Domain generalization: A survey", "year": "2022" }, { "authors": "Kaiyang Zhou; Jingkang Yang; Chen Change Loy; Ziwei Liu", "journal": "", "ref_id": "b68", "title": "Conditional prompt learning for vision-language models", "year": "2022" }, { "authors": "Kaiyang Zhou; Jingkang Yang; Chen Change Loy; Ziwei Liu", "journal": "International Journal of Computer Vision", "ref_id": "b69", "title": "Learning to prompt for vision-language models", "year": "2007" }, { "authors": "Beier Zhu; Yulei Niu; Yucheng Han; Yue Wu; Hanwang Zhang", "journal": "", "ref_id": "b70", "title": "Prompt-aligned gradient for prompt tuning", "year": "2023" } ]
[ { "formula_coordinates": [ 6, 50.98, 412.82, 189.27, 30.32 ], "formula_id": "formula_0", "formula_text": "D * m = {d * 1 , ..., d * m } = arg min d1:m∈D ′ ℓ S train , 1 m m i=1" }, { "formula_coordinates": [ 6, 50.98, 446.26, 236.05, 23.98 ], "formula_id": "formula_1", "formula_text": "D ′ := {all q permutations of W, ∀q ≤ p} (2)" }, { "formula_coordinates": [ 6, 322.26, 704.2, 223.52, 11.54 ], "formula_id": "formula_2", "formula_text": "ℓ train = E d * i ∼D * CE(ŷ d * i , (1 -λ)y truth + λŷ d * i ,0 )(3)" }, { "formula_coordinates": [ 14, 126.31, 538.18, 80.99, 65.65 ], "formula_id": "formula_3", "formula_text": "x T,k = 1 m m j=1 x j T,k s k = x I , x T,k ∥x T,k ∥" }, { "formula_coordinates": [ 14, 119.44, 638.77, 90.83, 30.32 ], "formula_id": "formula_4", "formula_text": "s k = 1 m m j=1 x I , x j T,k" } ]
2023-11-22
[ { "figure_ref": [ "fig_0", "fig_0", "fig_2", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b8", "b25", "b29", "b35", "b18", "b29", "b35", "b4", "b45", "b13", "b29", "b34", "b35", "b34", "b13", "b35", "b24", "b42", "b43", "b35", "b24", "b42", "b43", "b42", "b43" ], "table_ref": [], "text": "Human pose estimation (HPE) is a fundamental research topic in the field of computer vision and plays a crucial role in human-centred vision applications. The goal of HPE is to localize the exact pixel positions of keypoints of body parts from the images by detection and estimation methods. However, the development of HPE faces significant challenges when dealing with complicated situations, including viewpoint and appearance variations, occlusion, multiple persons, and imaging artefacts, etc. With further research, HPE has a wide range of applications in action recognition [9], intelligent surveillance [17], intention recognition [11], and automated driving [26], as one of the fundamental tasks of understanding human behaviour.\nDevelopments in recent years have shown that deep learning-based methods have achieved state-of-the-art results in solving the Human pose estimation problem. There are currently two mainstream methods: (i) first predicting keypoints heatmap and then converting them to position [4, 30,36,47], and (ii) directly regressing keypoints position [19,37,38]. In this paper, we study heatmap-based methods, which typically consist of a backbone network for feature extraction and a regressor for heatmap estimation.\nAccording to the form of the different combinations of the backbone networks and the regressors, different types of network architectures are extended. A common used network framework is to connect from high resolution to low resolution and then from low resolution to high resolution, such as SimpleBaseline [47], Hourglass [30]. Another network framework maintains high resolution throughout the process by connecting multi-resolution subnets in parallel, such as HRNet [36] and its variants [50,52]. In addition, multi-scale fusion [4, 5,22] and multi-stage supervision [46] can be integrated into both types of network frameworks. For intensive prediction tasks such as HPE, the ability of the backbone network to extract features often determines the performance of the model. Therefore, this paper adopts the SimpleBaseline [47] configuration in its combinatorial form, using transposed convolution to accomplish the low-to-high process, and concentrates on the design of the backbone network.\nThe improvement of the performance of backbone networks mainly relies on the development of feature extraction techniques. For a long time, Convolutional Neural Network (CNN) have achieved remarkable success in computer vision due to their excellent feature extraction capability, becoming the dominant method in this field, such as [14,29,30,35,36]. However, the feature extraction capability of CNN is restricted by the receptive field. In order to extract long-range features, the receptive field has to be expanded by increasing the depth of the network, even if the feature contains only a small amount of information, which results in a larger network size and a higher computational overhead. As a result, the CNN-based network model shows a significant increase in parameters and GFLOPs with gradual improvement in performance, such as Mo-bileNetV2 [35], ResNet-50 [14], and HRNet-W32 [36] in Fig. 1. 
Recently, there has been a number of Transformerbased backbone networks proposed, which have received a lot of attention in the field of computer vision due to their excellent long-distance modelling capability and outstanding performance, such as [8,13,24,25,43]. The Transformer-based model outperforms the classical CNNbased model on large datasets. However, when the amount of data is insufficient, the Transformer-based model will fall behind due to the difficulty in exploiting its powerful feature extraction capability. As shown in Fig. 1, PVTv2-B2 [44] slightly underperforms HRNet-W32 [36] of the same size in terms of performance on the MPII test set. It is worth noting that by adopting the Transformer-based approach we should take advantage of its long-range modelling capability rather than relying on a large number of block stacks.\nIn this study, we design the network architecture of HEViTPose for HPE tasks by taking inspiration from established networks (such as EfficientViT [24], Swin [25], PVT [43], and PVTv2 [44]) to ensure a balance between model performance, size and computational overhead, as shown in Fig. 3. The HEViTPose shows a well-balanced performance in all aspects and surpasses all models shown in Fig. 1.\nThe main contributions of this work are summarized as:\n• We propose a CGSR-MHA module that combines the benefits of CGA [24], SRA [43], and MHA [40]. This module significantly decreases computational costs by incorporating feature grouping and spatial degradation mechanisms, while maintaining feature diversity with multiple low-dimensional attention heads. • We introduce the concept of PEOW based on OPE [44], further reveal the relationship between the number of overlapping edges and local continuity through experiments, and gradually improve the indicators of the model through the optimisation of PEOW." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b26", "b30", "b40", "b0", "b24", "b42", "b42" ], "table_ref": [], "text": "Hunman Pose Estimation. The algorithmic frameworks for 2D multi-person pose estimation are classified into top-down [4, 10, 36, 47] and button-up [12,27,31,33]. The top-down algorithmic framework has been decomposing the multi-person pose estimation task into two subtasks, multi-person detection [2, 23,41] and single-person pose estimation, which is considered to be high in accuracy, high in computation and slow in inference. While button-up algorithmic framework decomposes the task into two subtasks of keypoint detection and keypoint grouping for multiple people, which is considered to be computationally fast and less accurate. In recent years, the continuous advancement of object detection algorithms [1,28,42] has led to the promotion of the top-down algorithm framework, making a significant breakthrough in inference speed. As a result, it has gradually emerged in the task of real-time human pose estimation. This paper conducts research on HPE via a top-down algorithmic framework, primarily concentrating on the architectural design of the backbone network.\nTransformer based vision backbones. The study of backbone architectures in this paper is an extension of ViT [8] and its related studies [13,25,39,43]. ViT [8] divides images into medium-sized image blocks and converts them into a series of fixed-length patch embeddings, and performs image classification through the Transformer architecture, achieving a balance between speed and accuracy. 
However, the excellent performance of ViT relies heavily on the support of large-scale training datasets. In order to address this issue, DeiT [39] outlines various training approaches and distillation techniques that enhance data efficiency, thus rendering ViT more efficient when dealing with smaller datasets. Several studies in the same period provided ideas to improve the performance of Transformerbased network architectures. PVT [43] introduces a pyramid structure to construct a multi-resolution feature map, which achieves better accuracy in dense prediction tasks." }, { "figure_ref": [], "heading": "LocalViT [20] incorporates depth-wise convolution into", "publication_ref": [ "b24", "b5", "b29", "b35", "b4" ], "table_ref": [], "text": "ViT to improve the local continuity of features. Swin [25] adopts local window self-attention instead of global selfattention, reducing the quadratic relationship between network complexity and image size to a linear relationship and achieving a speed-accuracy balance. In addition, MHSA [40] embeds the input features into multiple subspaces by attention head number and computes the attention maps separately, which has been shown to help improve model performance. However, improving performance simply by increasing the number of heads of attention is inefficient and creates significant computational redundancy. The work of EfficientViT [24] shows that assigning different splits of the complete feature to different attention heads can effectively reduce attentional computational redundancy. This problem-solving approach follows the same line of thought as grouping convolution [6,54]. In order to prevent the performance loss caused by excessive grouping, this work has employed two strategies. Firstly, it has appropriately increased the number of attention heads within the group. Secondly, it has controlled the dimensions of the Q, K, and V projections to correspond with the number of heads. This approach can significantly reduce the computational overhead while ensuring the network performance.\nHigh resolution feature maps. The programme of highresolution feature maps has been a great success on the HPE mission. In the development of high-resolution feature maps, four main approaches have emerged, including: (i) Dilated convolutions [3, 51] maintain the high resolution of the feature map by removing some downsampling layers, preventing the loss of spatial information but incurring more computational cost. (ii) Stacked Hourglass [30], CPN [4] utilise a decoder to restore high-resolution representations from low-resolution representations. (iii) The highresolution representation of the HRNet [36] model consists of different subnetworks with different resolutions, ensuring that the network retains its high resolution, and generates high-resolution feature maps with rich information through multi-scale fusion between branches. (iv) Transposed convolution [5,47] improves the resolution of the feature maps at the end of the network. SimpleBaselines [47] demonstrates that transposed convolution can generate high-quality feature maps for heatmap prediction. Our proposed HEViTPose follows the SimpleBaselines [47] approach to generate high-resolution feature maps from lowresolution feature maps extracted from the backbone network by transposed convolution." 
}, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2" ], "heading": "Overall Architecture", "publication_ref": [ "b43", "b42" ], "table_ref": [], "text": "For the HPE task, a network model called HEViTPose is designed in this paper, as shown in Fig. 3(a). In the patch embedding part of the input image, we are inspired by the Overlapping Patch Embedding (OPE) [44] to propose a concept of Patch Embedding Overlap Width (PEOW), and design an optimisation experiment of PEOW in Sec. 4 to help readers further understand the relationship between the amount of overlap and local continuity. In the backbone network part, we implement the design concept of PVT [43], which involves the incorporation of the progressive pyramid structure [22] into the Transformer framework. This enhances the performance of the HPE task by generating multi-scale feature maps. We have meticulously designed the Transformer-based backbone network also called HEViTPose. The network consists of three stages, each with a similar architecture, including a patch embedding layer, a downsampling layer (except for the first stage), and a transformer coding layer. When presented with an input image of size H × W × 3, the network generates three feature maps sequentially, producing a feature pyramid of\nH 4 × W 4 × C 1 , H 8 × W 8 × C 2 , and H 16 × W 16 × C 3 .\nIn the head network part, we directly perform two up-sampling (transposed convolution) operations on the feature maps extracted from the backbone network, and the generated highresolution feature maps ( H 4 × W 4 × 16)[47]. In the regressor section, we simply regress the 16 keypoint heatmaps with H 4 × W 4 × 16 feature maps generated by the head network, while defining the loss function as the mean square error of the predicted heatmap and the groundtruth heatmap. Here the groundtruth heatmap is generated by a 2D Gaussian algorithm with a standard deviation of 1 pixel centred on the groundtruth position of each keypoint." }, { "figure_ref": [ "fig_1", "fig_1", "fig_3", "fig_3", "fig_5", "fig_5", "fig_8", "fig_8" ], "heading": "Patch Embedding Overlap Width", "publication_ref": [ "b43" ], "table_ref": [], "text": "ViT [8] directly divides the image into non-overlapping patches, and then extracts embedding features for different patches separately through the Transformer network architecture, as shown in the left part of Fig. 2. However, truncating the image causes a loss of continuity information be- tween patches, making it challenging to reconstruct all continuity information even when combining various embedding features. PVTv2 [44] preserves the local continuity of the image by extracting features through OPE, resulting in an improvement in the performance of the network, as shown in the right part of Fig. 2. Since the work of PVTv2 did not provide a detailed analysis of image patch overlap and local continuity, this section builds on OPE to portray the relationship between the amount of image patch overlap and local continuity through the relationship between the amount of image patch overlap and network performance.\nDefinition of PEOW. In order to quantitatively describe the amount of overlap of image patches, the concept of patch embedding overlap width (PEOW) is proposed, i.e., the width of the \"grid lines\" formed by repeated computations of the convolution kernel and image pixels. Fig. 
4 shows the case of PEOW = 1, where the number in the circle indicates the number of times an image pixel is used for computation when it is convolved.
Analysis. By observing the sliding process of the convolution kernel in Fig. 4, we find that pixels marked 1 are associated with elements in only one region covered by the convolution kernel, pixels marked 2 can be associated with elements in two regions, and pixels marked 4 can be associated with four regions. The modelling process of a convolution kernel is shown in Fig. 6. The continuity between two adjacent image patches can be modelled from three pixel samples, whereas the continuity between four pairwise-adjacent image patches can only be modelled from a single pixel sample.
Assuming that the modelling function $\mathcal{F}(\cdot)$ in this layer contains operations such as convolution and an activation function $\mathrm{act}(\cdot)$, the predicted values of the four outputs in Fig. 6 are $\hat{y}_1, \hat{y}_2, \hat{y}_4, \hat{y}_5$, as in Eqs. (1) to (4). Assuming that the overall modelling function is $\mathcal{G}(\cdot)$, the final output for any of the predicted values is $\hat{z}_i$, as in Eq. (5).
$\hat{y}_1 = \mathrm{act}(x_{11}w_{11} + \cdots + x_{33}w_{33}) = \mathcal{F}(x_{13}, x_{23}, x_{31}, x_{32}, x_{33})$ (1)
$\hat{y}_2 = \mathcal{F}(x_{13}, x_{23}, x_{33}, x_{34}, x_{35})$ (2)
$\hat{y}_4 = \mathcal{F}(x_{31}, x_{32}, x_{33}, x_{43}, x_{53})$ (3)
$\hat{y}_5 = \mathcal{F}(x_{33}, x_{34}, x_{35}, x_{43}, x_{53})$ (4)
$\hat{z}_i = \mathcal{G}(\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_4, \hat{y}_5, \ldots)$ (5)
We can observe from Eqs. (1) to (5) that, although the deep model $\mathcal{G}$ has access to all the input variables, it is relatively difficult for it to establish continuity through variables nested in functions across different layers. However, we can provide a wealth of information for building local continuity by controlling PEOW in the shallow model $\mathcal{F}$. Two further points follow from this analysis. It is natural to suggest expanding PEOW to enrich the continuity information that is passed on. We did not analyse the stride-reduction method for increasing PEOW, because this method introduces multi-layer overlapping, which easily produces information redundancy, as shown in Fig. 5 (a). Therefore, we adopt the method of increasing the size of the convolution kernel to increase PEOW, as shown in Fig. 5 (b). In the experimental section, we select different PEOWs and conduct experiments to verify the soundness of this reasoning." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "HEViTPose Building Block", "publication_ref": [ "b23", "b43", "b12", "b39", "b23", "b15", "b42" ], "table_ref": [], "text": "In order to achieve a balance between model performance and feature extraction efficiency, we first introduce a sandwich layout from EfficientViT [24] into the HEViTPose building blocks to reduce the memory-bound time cost of the self-attention layers in the model and to enhance communication between channels. Secondly, we follow the PVTv2 [44] approach of replacing the fixed-size positional embedding with the positional encoding introduced by a zero-padded convolutional layer (DWConv), so that the model adapts to input images of arbitrary size, as shown in Fig. 3 (b). Thirdly, we draw on the successful practices of TNT [13], MHSA [40], CGA [24], ISSA [16], SRA [43], and other attention mechanisms to propose Cascaded Group Spatial Reduction Multiple Head Attention (CGSR-MHA), which is more accurate, more computationally efficient, and better suited to dense tasks such as human pose estimation: each subgroup is given a different split of the full feature, so the attention computation is explicitly decomposed between subgroups, as in Fig. 3(c). 
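The formal definition of CGSR-MHA, with its cascaded groups, spatial reduction, and output projection, is given in the next paragraphs (Eqs. (6) to (8)). As a preview, the sketch below reconstructs that description in PyTorch; it is an assumption-laden illustration, not the released HEViTPose code, and in particular the strided-convolution spatial reduction, the per-group projections, and the default group/head/ratio values are guesses consistent with the text rather than verified settings.

```python
import torch
import torch.nn as nn

class CGSRMHA(nn.Module):
    """Sketch of Cascaded Group Spatial-Reduction Multi-Head Attention:
    split the feature into G cascade groups, add the previous group's output
    to the next group's input, spatially reduce keys/values inside each group,
    then concatenate the group outputs and apply a linear projection."""

    def __init__(self, dim, groups=4, heads=2, sr_ratio=2):
        super().__init__()
        assert dim % groups == 0 and (dim // groups) % heads == 0
        self.groups, self.heads = groups, heads
        self.gd = dim // groups                      # channels per cascade group
        self.q = nn.ModuleList(nn.Linear(self.gd, self.gd) for _ in range(groups))
        self.kv = nn.ModuleList(nn.Linear(self.gd, 2 * self.gd) for _ in range(groups))
        # Assumed spatial reduction of the K/V token grid (strided conv, PVT/SRA style).
        self.sr = nn.ModuleList(
            nn.Conv2d(self.gd, self.gd, kernel_size=sr_ratio, stride=sr_ratio)
            for _ in range(groups))
        self.proj = nn.Linear(dim, dim)              # output projection over the concatenation

    def _split_heads(self, t, B):                    # (B, T, gd) -> (B, h, T, gd/h)
        return t.reshape(B, -1, self.heads, self.gd // self.heads).transpose(1, 2)

    def forward(self, x):                            # x: (B, C, H, W)
        B, C, H, W = x.shape
        chunks, outs, prev = x.chunk(self.groups, dim=1), [], 0
        for g in range(self.groups):
            xg = chunks[g] + prev                    # cascade: add previous group's output
            q = self._split_heads(self.q[g](xg.flatten(2).transpose(1, 2)), B)
            red = self.sr[g](xg).flatten(2).transpose(1, 2)          # reduced K/V tokens
            k, v = (self._split_heads(t, B) for t in self.kv[g](red).chunk(2, dim=-1))
            scale = (self.gd // self.heads) ** -0.5
            attn = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
            yg = (attn @ v).transpose(1, 2).reshape(B, H * W, self.gd)
            yg = yg.transpose(1, 2).reshape(B, self.gd, H, W)
            outs.append(yg)
            prev = yg
        y = torch.cat(outs, dim=1).flatten(2).transpose(1, 2)        # (B, HW, C)
        return self.proj(y).transpose(1, 2).reshape(B, C, H, W)

# Smoke test on a stage-1-sized map (C=128 at H/4 x W/4 = 64 x 64 for a 256 x 256 input):
y = CGSRMHA(dim=128, groups=4, heads=2, sr_ratio=8)(torch.randn(1, 128, 64, 64))
print(y.shape)  # torch.Size([1, 128, 64, 64])
```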
CGSR-MHA. The Cascaded Group Attention (CGA) proposed by EfficientViT [24] alleviates the attention head redundancy problem of Multiple Head Self-Attention (MHSA), resulting in an improvement in feature extraction efficiency. However, the CGA method exclusively divides each cascade group feature differently into Q, K, and V matrices for ablation experiments, which is more conservative in solving the information redundancy problem. Therefore, this paper adopts the idea of PVT [43] to reduce the dimensionality of the features within the cascade group, and solves the information redundancy problem by directly controlling the dimensionality of the Q, K, V matrices. Formally, CGSR-MHA can be formulated as:\nWe divide the input feature map\nX ∈ R C×H×W into G groups: X = [X 1 , X 2 , ..., X G ], where X g ∈ R C G ×H×W\ndenotes the feature map for each cascade grouping and g ∈ Table 1. Comparison on the MPII test set(PCKh@0.5). The performance, parameters, and GFLOPs for the pose estimation network are measured w/o considering human detection. All results come from retraining under the same conditions, and none uses any pre-training. We compute the percentages in terms of parameters and GFLOPs reduction between models marked with the same symbol. " }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Xg = L(X g , r), g = 1 L(X g + Xg-1 , r), g ∈ (2, 3, • • • , G)(6)\nwhere L(•) denotes the attention function within the cascade group, and X g ∈ R \nC G × L to C G × H × W . X = Linear(Concat( X1 , X2 , ..., XG )),(8)\nwhere Linear(•) denotes the linear projection operation and Concat(•) denotes the concatenation operation. X denotes the output feature map, obtained by the CGSR-MHA(•) operation.\nModel families of HEViTPose. In order to compare the network models of different scale, we build a model family containing three models and show the details of the architecture of each model in Tab. 2. C i , L i , G i , R i are the width, depth, number of groupings and spatial reduction rate of the ith stage, respectively. " }, { "figure_ref": [], "heading": "Model", "publication_ref": [ "b31", "b31", "b35", "b43", "b34" ], "table_ref": [], "text": "HEViTPose-T HEViTPose-S HEViTPose-B Training details. In this paper, we follow the common training strategy of the mmpose codebase [7], setting up different data pipelines for the training set and the validation set. The model was trained on NVIDIA RTX3060 GPU (12GB). We use the Adam [18] optimizer. The learning schedule follows the setting [32]. The base learning rate is set to 1e-3, and dropped to 1e-4 and 1e-5 at the 170th and 200th epochs, respectively. The training process is terminated within 210 epochs. We set the input size to 256×256 and the training batch size to 32. Note that all models are trained from scratch without any pre-training.\n{C 1 , C 2 , C 3 } {64,128,192} {128,192,224}{128,256,384} {G 1 , G 2 , G 3 } {1,2,3} {1,2,3} {1,2,3} {L 1 , L 2 , L 3 } {4,4,4} {4,3,2} {4,4,4} {R 1 , R 2 , R 3 } {8,4,2}{8\nTesting details. We follow the two-stage top-down multiple human pose estimation paradigm similar as [4,32], which consists of using object detectors to detect human instances and using pose estimation networks to generate keypoint predictions for the instances. We use the same person detectors provided by SimpleBaseline [47] for both the val-Table 3. Comparison on the COCO test-dev2017 set. The performance, parameters, and GFLOPs for the pose estimation network are measured w/o considering human detection. 
All results come from retraining under the same conditions, and none uses any pre-training. We compute the percentages in terms of parameters and GFLOPs reduction between models marked with the same symbol. Results on the test set. Tab. 1 reports the human pose estimation performance of our method and existing state-of-the-art methods on the MPII test set. Our proposed HEViTPose family achieves competitive performance compared with the state-of-the-art methods in terms of fewer model size (Params) and computation complexity (GFLOPs). For example, our HEViTPose-B achieves a score of 90.7 PCKh@0.5, which is the same as HRNet-W32 [36] with 62.1% fewer parameters and 43.4% fewer GFLOPs. Compared to PVTv2 [44], our HEViTPose-S reduces only 0.3 AP, but results in 79.8% fewer parameters and 37.0% fewer GFLOPs. Compared to MobileNetV2 [35], our HEViTPose-T improves by 2.1 AP with 66.5% fewer parameters and 17.5% fewer GFLOPs." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "COCO Keypoints Detection", "publication_ref": [ "b20", "b24", "b43" ], "table_ref": [], "text": "Dataset. The COCO dataset [21] was divided into train2017, val2017 and test-dev2017 sets with 57k, 5k and 20k images, respectively.\nTraining and testing details. In training and testing, we set the data enhancement pipeline, training strategy, and human pose estimation paradigm the same as MPII. But in order to speed up the training process of the model on the COCO 2017 dataset, we set the batch size to 64 and distributed the model training on 2 NVIDIA RTX4090 GPUs (2×24GB).\nResults on the test-dev2017 set. As shown in Tab. 3, we compared our HEViTPose-B with several of the most representative networks on COCO test-dev2017, and our network showed competitive results on parameters and GLOPs. For example, compared to the excellent Swin-S [25], our HEViTPose-B is only 0.1 AP lower, while the parameters are 80.4% lower and the GFLOPs are 63.8% lower. Compared with the most competitive PVTv2-B2 [44] in re- " }, { "figure_ref": [], "heading": "Ablation Experiments", "publication_ref": [], "table_ref": [], "text": "In this subsection, we investigate the effect of each component in HEViTPose on the MPII human pose estimation dataset. All settings follow the MPII experiments.\nInfluence of PEOW. In Tab. 4, we investigate the effect of different PEOW on HEViTPose. We observe that proper control of PEOW can significantly improve the model performance while reducing parameters and GFLOPs. According to Tab. 4, we have the following findings: (i) We can obtain the highest PCKh@0.5 score of 88.1 when we control PEOW = 3 and when stride = 4, while obtaining the lowest parameters (9.82M) and the lowest GFLOPs (5.65G). (ii) For double-layer overlap, the PCKh@0.5 score increases by 0.5 when PEOW is increased from 1 to 3. When PEOW is increased from 3 to 7, the PCKh@0.5 score decreases by 3.1. This suggests that there is a limit to improving the network performance by increasing PEOW alone, and that choosing the right PEOW will allow the model to maintain a balanced performance, parameters, and GFLOPs. (iii) Controlling PEOW to be 3, comparing stride to be 2 and 4, we find that the multi-layer overlap created by stride to be 2 does not improve the PCKh@0.5 score, but rather reduces it by 0.6, and greatly increases parameters and GFLOPs. Influence of HEViTPose. In Tab. 
5, we adjust the parameter configurations of the HEViTPose model and observe the variation of the model over the MPII val set. As described in #1, when {r 1 , r 2 , r 3 } and {h 1 , h 2 , h 3 } are unchanged, the larger {g 1 , g 2 , g 3 } is, the lower the parameters and GFLOPs of HEViTPose-B. As described in #2, when {g 1 , g 2 , g 3 } and {h 1 , h 2 , h 3 } are unchanged, the larger {r 1 , r 2 , r 3 } is, the larger the parameters and GFLOPs of HEViTPose-B are, but the training time of the model is also drastically reduced. As described in #3, when {g 1 , g 2 , g 3 } and {r 1 , r 2 , r 3 } are constant, the training time increases substantially as {h 1 , h 2 , h 3 } increases. Due to limited training resources, we adjust the number of feature heads in the group to keep normal operation. After experiments, we obtain a set of parameter values that are balanced in performance, parameters, GFLOPs and training efficiency: {g 1 , g 2 , g 3 } = {4, 4, 4}, {r 1 , r 2 , r 3 } = {8, 4, 2}, {h 1 , h 2 , h 3 } = {2, 4, 8}. At this point EViTPose-B has the best performance at 89.4 PCKh@0.5 while keeping low parameters, GFLOPs and training time.\nAblation of HEViTPose on MPII validation set. In Tab. 6, we compare the optimization effects of different components on the model on the MPII validation set. Finally, under the condition that the number of parameters is maintained, the PCKh@0.5 score is increased by 1.8, and the computation amount is decreased by 5.6%. The implementation details of the different methods in the table are as follows: (a) The patch embedding part follows the OPE configuration of EfficientViT [24], which corresponds to the case of PEOW of 1 as proposed in this paper, with EfficientViT-M4 for the backbone network, and other configurations follow the basic Top-Down paradigm adopted. (b) Adjusting PEOW=3 enhances the PCKh@0.5 score of this network model by 0.5 to 88.1. It also lowers GFLOPs by 4% and slightly reduces the number of parameters. (c) The backbone network replaces EfficientViT-M4 with the HEViTPose-B proposed in this paper, which further im-Table 6. Ablation experiments for HEViTPose-B on the MPII validation set(PCKh@0.5). " }, { "figure_ref": [], "heading": "Method PEOW=3 HEViTPose Params FLOPs Total", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "The paper first presents the concept of Patch Embedding Overlap Width (PEOW), which can help readers to further understand the role of Overlapping Patch Embedding (OPE) and provides an effective tool for adjusting the amount of overlap to re-establish local continuity. Then, the text proposes the High-Efficiency Vision Transformer for Human pose estimation (HEViTPose), which is a highperformance and efficient transformer architecture. The key idea is to reduce the computational redundancy through feature grouping and in-group feature dimensionality reduction, while retaining high performance through cascading of grouping and MHA of in-group features, which improves the efficiency of feature extraction. Finally, our HEViTPose benefits from the information provided by the early convolution containing local continuum features, and also benefits from the remote information interaction of the transformer in the cascade group. We experimentally validate the effectiveness of HEViTPose on a pose estimation task." 
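To make the cascaded group attention with in-group spatial reduction summarized above (Eqs. 6-8) concrete, a minimal PyTorch sketch is given below. It is not the released implementation: the PVT-style choice of reducing only the key/value tokens, the strided-convolution form of SR(.), and the layer-normalization placement are assumptions made for illustration, while the group count G, the reduction rate r and the per-group head count follow the paper's notation.

```python
import torch
import torch.nn as nn

class CGSRMHA(nn.Module):
    """Sketch of Cascaded Group Spatial-Reduction Multi-Head Attention.

    The input channels are split into G cascaded groups (Eq. 6); inside each
    group the key/value tokens are spatially reduced by a strided convolution
    (one reading of SR(.) in Eq. 7) before multi-head attention, and each group
    also receives the output of the previous group. The concatenated group
    outputs go through a final linear projection (Eq. 8).
    """

    def __init__(self, dim, num_groups=4, reduction=4, heads_per_group=2):
        super().__init__()
        assert dim % num_groups == 0
        gd = dim // num_groups                      # C/G channels per group
        self.G = num_groups
        self.attn = nn.ModuleList(
            [nn.MultiheadAttention(gd, heads_per_group, batch_first=True)
             for _ in range(num_groups)])
        self.sr = nn.ModuleList(                    # SR(.): shrink H x W by r
            [nn.Conv2d(gd, gd, kernel_size=reduction, stride=reduction)
             for _ in range(num_groups)])
        self.norm_q = nn.ModuleList([nn.LayerNorm(gd) for _ in range(num_groups)])
        self.norm_kv = nn.ModuleList([nn.LayerNorm(gd) for _ in range(num_groups)])
        self.proj = nn.Linear(dim, dim)             # linear projection of Eq. 8

    def forward(self, x):                           # x: (B, C, H, W)
        B, C, H, W = x.shape
        outs, prev = [], None
        for g, xg in enumerate(x.chunk(self.G, dim=1)):
            if prev is not None:                    # cascade: reuse previous group output
                xg = xg + prev
            q = self.norm_q[g](xg.flatten(2).transpose(1, 2))             # (B, HW, C/G)
            kv = self.norm_kv[g](self.sr[g](xg).flatten(2).transpose(1, 2))
            out, _ = self.attn[g](q, kv, kv)        # attend over the reduced tokens
            prev = out.transpose(1, 2).reshape(B, -1, H, W)
            outs.append(prev)
        y = torch.cat(outs, dim=1).flatten(2).transpose(1, 2)             # (B, HW, C)
        return self.proj(y).transpose(1, 2).reshape(B, C, H, W)
```

For example, CGSRMHA(dim=128, num_groups=4, reduction=4, heads_per_group=2) maps a (1, 128, 32, 32) feature map to a tensor of the same shape while each group attends over only 8x8 = 64 key/value tokens, which is where the computational saving comes from.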
}, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This work was partially supported by National Natural Science Foundation of China under grants 61563005." } ]
Human pose estimation in complicated situations has always been a challenging task. Many Transformer-based pose networks have been proposed recently, achieving encouraging progress in improving performance. However, the remarkable performance of pose networks is always accompanied by heavy computation costs and a large network scale. To deal with this problem, this paper proposes a High-Efficiency Vision Transformer for Human Pose Estimation (HEViTPose). In HEViTPose, a Cascaded Group Spatial Reduction Multi-Head Attention Module (CGSR-MHA) is proposed, which reduces the computational cost through feature grouping and spatial reduction mechanisms, while preserving feature diversity through multiple low-dimensional attention heads. Moreover, the concept of Patch Embedding Overlap Width (PEOW) is defined to help understand the relationship between the amount of overlap and local continuity. By optimising PEOW, our model gains improvements in performance, parameters and GFLOPs. Comprehensive experiments on two benchmark datasets (MPII and COCO) demonstrate that the small and large HEViTPose models are on par with state-of-the-art models while being more lightweight. Specifically, HEViTPose-B achieves 90.7 PCKh@0.5 on the MPII test set and 72.6 AP on the COCO test-dev2017 set. Compared with HRNet-W32 and Swin-S, our HEViTPose-B significantly reduces Params (↓62.1%; ↓80.4%) and GFLOPs (↓43.4%; ↓63.8%). Code and models are available here.
HEViTPose: High-Efficiency Vision Transformer for Human Pose Estimation
[ { "figure_caption": "Figure 1 .1Figure 1. Comparison of HEViTPose and SOTA network models on the MPII test set regarding performance, parameters, and GFLOPs. The size of each bubble represents parameters.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. left: Patch Embedding of ViT; right: Overlapping Patch Embedding of PVTv2. The image block under the pink mask represents the first patch of each of these two methods.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Overview of HEViTPose. (a) Network Architecture of HEViTPose; (b) HEViTPose Block; (c) CGSR-MHA.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Set the convolution operation Conv2d(kernel size=3, stride=2, padding=1) corresponding to PEOW=1. Where solid circles indicate image pixels, dashed circles indicate padding pixels, and the 3×3 mask indicates the convolution kernel.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Two ways to add PEOW. (a) Setting the multilayer overlap corresponding to the convolution operation Conv2d(3,1,1); (b) Setting PEOW=3 corresponding to the convolution operation Conv2d(7,4,3).", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Modelling process for convolution with PEOW=1. For ease of understanding, here let the bias b = 0.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "CG×H×W denotes the feature map within the cascade group after L(•) processing. L(X g , r) = Rep hw l (M HA(Rep l hw (SR(X g , r)))), (7) where SR(•) denotes the spatial reduction operation and r denotes the spatial reduction rate. Rep l hw (•) denotes the operation of reshaping the features in the group of size C G × H × W to C G × L first, and then the LN (•) operation is performed on the feature map, where L = H × W denotes the encoding length and LN (•) denotes the layer normalisation operation. Rep hw l (•) denotes the operation of performing LN (•) on the feature map first, and then the operation of reshaping the features of size", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "1 .1MPII Human Pose Estimation Dataset. The MPII Human Pose dataset [7] has around 28k person instances used as training samples and around 12k person instances used as test samples.", "figure_data": "", "figure_id": "fig_7", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "5 .558G 89.4 proves the model PCKh@0.5 score by 1.3 and decreases GFLOPs by 1.2%.", "figure_data": "", "figure_id": "fig_8", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Architectural details of the HEViTPose model family.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Influence of adjusting the PEOW value on the HEViTPose network on the MPII validation set(PCKh@0.5). 
The input image size is 3×256×256 and the output image size is 128×64×64.", "figure_data": "PEOW(kernel size, stride)Params FLOPs Total1conv(3,2), conv(3, 2) 9.87M 5.91G 87.63conv(7, 4)9.82 5.65G 88.17conv(15,8), deconv(4,2) 10.5M 6.29G 85.03conv(7, 2), conv (7, 2) 10.21M 7.38G 87.5cent times, our HEViTPose-B is only 0.1 AP lower, whilethe parameters are 63.4% lower and the GFLOPs are 3.3%lower.", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results of different group numbers in each stage of HEViTPose on MPII validation set(PCKh@0.5). {g1, g2, g3}: the number of cascade groups in each stages; {r1, r2, r3}: the spatial reduction ratio of the features within the group at each stage; {h1, h2, h3}: the number of attention heads for spatial reduction features within the group for each stage.#{g 1 , g 2 , g 3 } {r 1 , r 2 , r 3 } {h 1 , h 2 , h 3 }", "figure_data": "ParamsFLOPsTraining Time(day)Total(PCKh@0.5)", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" } ]
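The tables above report PCKh@0.5; for reference, a simplified version of the metric is sketched below: a predicted joint counts as correct when its distance to the ground truth is within 0.5 of the annotated head size. The exact head-size definition and the handling of unannotated joints follow the official MPII evaluation code and are simplified here.

```python
import numpy as np

def pckh(pred, gt, head_size, visible, thr=0.5):
    """Simplified PCKh@thr.

    pred, gt:   (N, K, 2) keypoint coordinates in pixels
    head_size:  (N,) per-person head segment length from the MPII annotation
    visible:    (N, K) boolean mask of annotated joints
    Returns the percentage of annotated joints whose prediction lies within
    thr * head_size of the ground truth.
    """
    dist = np.linalg.norm(pred - gt, axis=-1)           # (N, K) pixel distances
    norm_dist = dist / head_size[:, None]               # normalise by head size
    correct = (norm_dist <= thr) & visible
    return 100.0 * correct.sum() / max(visible.sum(), 1)

# Toy usage: two people, three joints each.
pred = np.array([[[10, 10], [20, 22], [31, 30]],
                 [[5, 5], [50, 50], [8, 9]]], dtype=float)
gt   = np.array([[[10, 10], [20, 20], [30, 30]],
                 [[5, 6], [10, 10], [8, 8]]], dtype=float)
print(pckh(pred, gt, head_size=np.array([10.0, 10.0]),
           visible=np.ones((2, 3), dtype=bool)))        # 83.33: 5 of 6 joints correct
```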
Chengpeng Wu; Guangxing Tan; Chunyu Li
[ { "authors": "Elahe Arani; Shruthi Gowda; Ratnajit Mukherjee; Omar Magdy; Senthilkumar Kathiresan; Bahram Zonooz", "journal": "", "ref_id": "b0", "title": "A comprehensive study of real-time object detection networks across multiple domains: A survey", "year": "2022" }, { "authors": "Nicolas Carion; Francisco Massa; Gabriel Synnaeve; Nicolas Usunier; Alexander Kirillov; Sergey Zagoruyko", "journal": "Springer", "ref_id": "b1", "title": "End-toend object detection with transformers", "year": "2020" }, { "authors": "Liang-Chieh Chen; Maxwell Collins; Yukun Zhu; George Papandreou; Barret Zoph; Florian Schroff; Hartwig Adam; Jon Shlens", "journal": "Advances in neural information processing systems", "ref_id": "b2", "title": "Searching for efficient multi-scale architectures for dense image prediction", "year": "2018" }, { "authors": "Yilun Chen; Zhicheng Wang; Yuxiang Peng; Zhiqiang Zhang; Gang Yu; Jian Sun", "journal": "", "ref_id": "b3", "title": "Cascaded pyramid network for multi-person pose estimation", "year": "2018" }, { "authors": "Bowen Cheng; Bin Xiao; Jingdong Wang; Honghui Shi; Thomas S Huang; Lei Zhang", "journal": "", "ref_id": "b4", "title": "Higherhrnet: Scaleaware representation learning for bottom-up human pose estimation", "year": "2020" }, { "authors": "Chollet Franc", "journal": "", "ref_id": "b5", "title": "Xception: Deep learning with depthwise separable convolutions", "year": "2017" }, { "authors": "", "journal": "MMPose Contributors", "ref_id": "b6", "title": "Openmmlab pose estimation toolbox and benchmark", "year": "2020" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b7", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Haodong Duan; Yue Zhao; Kai Chen; Dahua Lin; Bo Dai", "journal": "", "ref_id": "b8", "title": "Revisiting skeleton-based action recognition", "year": "2022" }, { "authors": "Shuqin Hao-Shu Fang; Yu-Wing Xie; Cewu Tai; Lu", "journal": "", "ref_id": "b9", "title": "Rmpe: Regional multi-person pose estimation", "year": "2017" }, { "authors": "Zhijie Fang; Antonio M López", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b10", "title": "Intention recognition of pedestrians and cyclists by 2d pose estimation", "year": "2019" }, { "authors": "Zigang Geng; Ke Sun; Bin Xiao; Zhaoxiang Zhang; Jingdong Wang", "journal": "", "ref_id": "b11", "title": "Bottom-up human pose estimation via disentangled keypoint regression", "year": "2021" }, { "authors": "Kai Han; An Xiao; Enhua Wu; Jianyuan Guo; Chunjing Xu; Yunhe Wang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b12", "title": "Transformer in transformer", "year": "2021" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b13", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Tong He; Zhi Zhang; Hang Zhang; Zhongyue Zhang; Junyuan Xie; Mu Li", "journal": "", "ref_id": "b14", "title": "Bag of tricks for image classification with convolutional neural networks", "year": "2019" }, { "authors": "Lang Huang; Yuhui Yuan; Jianyuan Guo; Chao Zhang; Xilin Chen; Jingdong Wang", "journal": "", "ref_id": "b15", "title": "Interlaced sparse self-attention for semantic segmentation", "year": "2019" }, { "authors": "Kashif Muhammad Attique 
Khan; Sajid Javed; Tanzila Ali Khan; Usman Saba; Junaid Habib; Aaqif Ali Khan; Abbasi Afzaal", "journal": "Multimedia tools and applications", "ref_id": "b16", "title": "Human action recognition using fusion of multiview and deep features: an application to video surveillance", "year": "2020" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b17", "title": "Adam: A method for stochastic optimization", "year": "" }, { "authors": "Jiefeng Li; Siyuan Bian; Ailing Zeng; Can Wang; Bo Pang; Wentao Liu; Cewu Lu", "journal": "", "ref_id": "b18", "title": "Human pose regression with residual log-likelihood estimation", "year": "2021" }, { "authors": "Yawei Li; Kai Zhang; Jiezhang Cao; Radu Timofte; Luc Van Gool", "journal": "", "ref_id": "b19", "title": "Localvit: Bringing locality to vision transformers", "year": "2021" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b20", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Tsung-Yi Lin; Piotr Dollár; Ross Girshick; Kaiming He; Bharath Hariharan; Serge Belongie", "journal": "", "ref_id": "b21", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "Wei Liu; Dragomir Anguelov; Dumitru Erhan; Christian Szegedy; Scott Reed; Cheng-Yang Fu; Alexander C Berg", "journal": "Springer", "ref_id": "b22", "title": "Ssd: Single shot multibox detector", "year": "2016" }, { "authors": "Xinyu Liu; Houwen Peng; Ningxin Zheng; Yuqing Yang; Han Hu; Yixuan Yuan", "journal": "", "ref_id": "b23", "title": "Efficientvit: Memory efficient vision transformer with cascaded group attention", "year": "2023" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": "b24", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": "Mingqi Lu; Yaocong Hu; Xiaobo Lu", "journal": "Applied Intelligence", "ref_id": "b25", "title": "Driver action recognition using deformable and dilated faster r-cnn with optimized region proposals", "year": "2020" }, { "authors": "Zhengxiong Luo; Zhicheng Wang; Yan Huang; Liang Wang; Tieniu Tan; Erjin Zhou", "journal": "", "ref_id": "b26", "title": "Rethinking the heatmap regression for bottom-up human pose estimation", "year": "2021" }, { "authors": "Chengqi Lyu; Wenwei Zhang; Haian Huang; Yue Zhou; Yudong Wang; Yanyi Liu; Shilong Zhang; Kai Chen", "journal": "", "ref_id": "b27", "title": "Rtmdet: An empirical study of designing real-time object detectors", "year": "2022" }, { "authors": "Ningning Ma; Xiangyu Zhang; Hai-Tao Zheng; Jian Sun", "journal": "", "ref_id": "b28", "title": "Shufflenet v2: Practical guidelines for efficient cnn architecture design", "year": "2018" }, { "authors": "Alejandro Newell; Kaiyu Yang; Jia Deng", "journal": "Springer", "ref_id": "b29", "title": "Stacked hourglass networks for human pose estimation", "year": "2016" }, { "authors": "Alejandro Newell; Zhiao Huang; Jia Deng", "journal": "Advances in neural information processing systems", "ref_id": "b30", "title": "Associative embedding: End-to-end learning for joint detection and grouping", "year": "2017" }, { "authors": "George Papandreou; Tyler Zhu; Nori Kanazawa; Alexander Toshev; Jonathan Tompson; Chris Bregler; Kevin Murphy", "journal": "", "ref_id": "b31", "title": "Towards accurate multi-person pose estimation in the 
wild", "year": "2017" }, { "authors": "Leonid Pishchulin; Eldar Insafutdinov; Siyu Tang; Bjoern Andres; Mykhaylo Andriluka; Peter V Gehler; Bernt Schiele", "journal": "", "ref_id": "b32", "title": "Deepcut: Joint subset partition and labeling for multi person pose estimation", "year": "2016" }, { "authors": "Isidoros Rodomagoulakis; Nikolaos Kardaris; Vassilis Pitsikalis; E Mavroudi; Athanasios Katsamanis; Antigoni Tsiami; Petros Maragos", "journal": "IEEE", "ref_id": "b33", "title": "Multimodal human action recognition in assistive human-robot interaction", "year": "2016" }, { "authors": "Mark Sandler; Andrew Howard; Menglong Zhu; Andrey Zhmoginov; Liang-Chieh Chen", "journal": "", "ref_id": "b34", "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "year": "2018" }, { "authors": "Ke Sun; Bin Xiao; Dong Liu; Jingdong Wang", "journal": "", "ref_id": "b35", "title": "Deep high-resolution representation learning for human pose estimation", "year": "2019" }, { "authors": "Xiao Sun; Bin Xiao; Fangyin Wei; Shuang Liang; Yichen Wei", "journal": "", "ref_id": "b36", "title": "Integral human pose regression", "year": "2018" }, { "authors": "Alexander Toshev; Christian Szegedy", "journal": "", "ref_id": "b37", "title": "Deeppose: Human pose estimation via deep neural networks", "year": "2014" }, { "authors": "Hugo Touvron; Matthieu Cord; Matthijs Douze; Francisco Massa; Alexandre Sablayrolles; Hervé Jégou", "journal": "PMLR", "ref_id": "b38", "title": "Training data-efficient image transformers & distillation through attention", "year": "2021" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b39", "title": "Attention is all you need", "year": "2017" }, { "authors": "Chien-Yao Wang; Alexey Bochkovskiy; Hong-Yuan Mark Liao", "journal": "", "ref_id": "b40", "title": "Yolov7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors", "year": "2023" }, { "authors": "Manchen Wang; Joseph Tighe; Davide Modolo", "journal": "", "ref_id": "b41", "title": "Combining detection and tracking for human pose estimation in videos", "year": "2020" }, { "authors": "Wenhai Wang; Enze Xie; Xiang Li; Deng-Ping Fan; Kaitao Song; Ding Liang; Tong Lu; Ping Luo; Ling Shao", "journal": "", "ref_id": "b42", "title": "Pyramid vision transformer: A versatile backbone for dense prediction without convolutions", "year": "2021" }, { "authors": "Wenhai Wang; Enze Xie; Xiang Li; Deng-Ping Fan; Kaitao Song; Ding Liang; Tong Lu; Ping Luo; Ling Shao", "journal": "Computational Visual Media", "ref_id": "b43", "title": "Pvt v2: Improved baselines with pyramid vision transformer", "year": "2007" }, { "authors": "Zhicheng Wang; Wenbo Li; Binyi Yin; Qixiang Peng; Tianzi Xiao; Yuming Du; Zeming Li; Xiangyu Zhang; Gang Yu; Jian Sun", "journal": "", "ref_id": "b44", "title": "Mscoco keypoints challenge", "year": "2018" }, { "authors": "Shih-En Wei; Varun Ramakrishna; Takeo Kanade; Yaser Sheikh", "journal": "", "ref_id": "b45", "title": "Convolutional pose machines", "year": "2016" }, { "authors": "Bin Xiao; Haiping Wu; Yichen Wei", "journal": "", "ref_id": "b46", "title": "Simple baselines for human pose estimation and tracking", "year": "2018" }, { "authors": "Saining Xie; Ross Girshick; Piotr Dollár; Zhuowen Tu; Kaiming He", "journal": "", "ref_id": "b47", "title": "Aggregated residual transformations for deep neural networks", "year": 
"2017" }, { "authors": "Yufei Xu; Jing Zhang; Qiming Zhang; Dacheng Tao", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b48", "title": "Vitpose: Simple vision transformer baselines for human pose estimation", "year": "2022" }, { "authors": "Changqian Yu; Bin Xiao; Changxin Gao; Lu Yuan; Lei Zhang; Nong Sang; Jingdong Wang", "journal": "", "ref_id": "b49", "title": "Lite-hrnet: A lightweight high-resolution network", "year": "2021" }, { "authors": "Fisher Yu; Vladlen Koltun", "journal": "", "ref_id": "b50", "title": "Multi-scale context aggregation by dilated convolutions", "year": "2015" }, { "authors": "Yuhui Yuan; Rao Fu; Lang Huang; Weihong Lin; Chao Zhang; Xilin Chen; Jingdong Wang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b51", "title": "Hrformer: Highresolution vision transformer for dense predict", "year": "2021" }, { "authors": "Hang Zhang; Chongruo Wu; Zhongyue Zhang; Yi Zhu; Haibin Lin; Zhi Zhang; Yue Sun; Tong He; Jonas Mueller; R Manmatha", "journal": "", "ref_id": "b52", "title": "Resnest: Split-attention networks", "year": "2022" }, { "authors": "Xiangyu Zhang; Xinyu Zhou; Mengxiao Lin; Jian Sun", "journal": "", "ref_id": "b53", "title": "Shufflenet: An extremely efficient convolutional neural network for mobile devices", "year": "2018" } ]
[ { "formula_coordinates": [ 3, 310.06, 480.67, 206.33, 13.47 ], "formula_id": "formula_0", "formula_text": "H 4 × W 4 × C 1 , H 8 × W 8 × C 2 , and H 16 × W 16 × C 3 ." }, { "formula_coordinates": [ 5, 308.86, 678.71, 236.25, 23.18 ], "formula_id": "formula_2", "formula_text": "X ∈ R C×H×W into G groups: X = [X 1 , X 2 , ..., X G ], where X g ∈ R C G ×H×W" }, { "formula_coordinates": [ 6, 79.67, 347.65, 206.69, 26.39 ], "formula_id": "formula_3", "formula_text": "Xg = L(X g , r), g = 1 L(X g + Xg-1 , r), g ∈ (2, 3, • • • , G)(6)" }, { "formula_coordinates": [ 6, 88.47, 551.08, 197.9, 35.65 ], "formula_id": "formula_4", "formula_text": "C G × L to C G × H × W . X = Linear(Concat( X1 , X2 , ..., XG )),(8)" }, { "formula_coordinates": [ 6, 309.42, 366.35, 235.13, 45.52 ], "formula_id": "formula_5", "formula_text": "{C 1 , C 2 , C 3 } {64,128,192} {128,192,224}{128,256,384} {G 1 , G 2 , G 3 } {1,2,3} {1,2,3} {1,2,3} {L 1 , L 2 , L 3 } {4,4,4} {4,3,2} {4,4,4} {R 1 , R 2 , R 3 } {8,4,2}{8" } ]
2023-11-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b37", "b29", "b32", "b1", "b14", "b13", "b16", "b21", "b38" ], "table_ref": [], "text": "Estimating 3D model from only one input image is challenging for the ambiguity and the complexity of real world objects. Many previous works [11,28,29,38,42] focus on only some particular category, such as human, due to the promising application. 3D human dataset are collected first to train a 3D network. Whereas, these kinds of methods are not applicable to open-vocabulary image-to-3D task due to the lack of diverse 3D datasets.\nTo solve the dataset problem, some previous works try to learn 3D structure from only 2D image collections [30,33]. 2D image collections such as ImageNet [3] contain diverse images with different view angles. And thus, 3D structures can be learned from these 2D images. Recent amazing progress in diffusion models makes it possible to generate diverse images with only text prompt. 2D diffusion models are trained using billions of images LAION 5B [31], which contain object photos taken from different views. Welltrained 2D diffusion models can thus be used to learn 3D structures for open-vocabulary objects.\nMany recent works [2, 21, 34, 37] use 2D diffusion prior for text-to-3D generation. 3D representation networks such as NeRF or DMtet [32] are trained using pretrained 2D diffusion models with SDS [21] or VDS loss [37]. Furthermore, some recent works use diffusion prior to solve openvocabulary image-to-3D task. Image-to-3D aims at estimating 3D structure given an input image. As the input image may be different from typical generated images from 2D diffusion model, it becomes more difficult to train than text-to-3D task. Zero-1-to-3 [15] trains a diffusion model using multiple views of images rendered from 3D dataset. This trained diffusion model using 3D dataset is more powerful than normal pretrained diffusion models in respect to 3D capability and is referred as 3D diffusion prior. Lots of recent methods [14,17,22,35,39] manage to train a 3D representation network with only one given image using 2D diffusion prior or 3D diffusion prior.\nAlthough amazing progress has been achieved by recent methods, we notice that it may fail when the input image containing uncommon objects with asymmetry structure, such as objects from video games. These kinds of irregular object are beyond the ability of normal 2D diffusion prior and 3D diffusion prior. Because of this, we propose Boost-ing3D to boost normal 2D diffusion prior to 3D diffusion prior with progressive learning.\nFirst, we optimize a coarse NeRF using the pretrained diffusion models. Simutaneously, we train a LoRA for the specific input object. Next we train the LoRA and NeRF in a progressive way. The LoRA and NeRF will boost each other while training. After this step, we obtain a refined NeRF and a well trained LoRA with object-level 3D prior. Finally, we extract a coarse surface mesh from the trained NeRF and finetune both surface geometry and appearance using the trained LoRA. Our method is able to obtain highquality and stable 3D object from one input image as shown in Fig. 1. In summary, we make the following three main contributions: • We present Boosting3D, a novel image-to-3D pipeline that uses three-stage optimization process, i.e. coarse NeRF, fine NeRF and mesh refinement, to generate a high-quality textured mesh. • We propose a novel 3D mesh optimization method that can explicitly optimize 3D model representation and texture using T2I model. 
The proposed method outperforms explicit 3D representation method DMtet in terms of mesh and texture quality. • We boost 2D diffusion prior to 3D prior in a bootstrap way by training object-level LoRA . Our method achieves state-of-the-art results in 3D reconstruction of single objects for both real-world photos and synthetic images." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Diffusion models", "publication_ref": [ "b12", "b7", "b6", "b14", "b15" ], "table_ref": [], "text": "Recently, large-scale diffusion models have shown great performance in text-to-image synthesis [7], which provides an opportunity to utilize it for zero-shot text-to-3D generation [6, 13,21,37]. LoRA [8] propose to use the low rank matrix to learn the generation information of a category or object, reducing the amount of trained parameters.\nDreambooth [27] propose a training method that uses a fixed prompt and a small number of samples to finetune the whole model. Both methods enable learning the specific objectlevel information at a low cost.\nTo acquire different views of the input image, Zero-1-to-3 [15] and syncdreamer [16] train a diffusion model using multiple views of images rendered from 3D dataset. The trained diffusion model can then be used to generated multiple views of the given image. For the capability of generating multiple views, this diffusion model is treated as 3D diffusion prior" }, { "figure_ref": [], "heading": "Text-to-3D generation", "publication_ref": [ "b17", "b12", "b0", "b9" ], "table_ref": [], "text": "The goal of text-to-3D task is to generate a 3D model that is consistent with the semantics of the input prompt. Dreamfusion [21] proposes score decomposition sampling (SDS) loss to generate 3D models, which aims to minimize the distribution difference between NeRF[19] rendering and pre-trained text-to-image (T2I) models. Latentnerf [18] improves the performance of 3D generation by optimizing NeRF in latent space. In addition to generating 3D objects, SDS loss can also work in scene generation [43]. Some works[2, 13,34,36] use other 3D representation methods but also used SDS loss for optimization. Prolificdreamer [37] propose variable score decomposition (VSD) loss, which can generate high-quality and high-fidelity results. Text-to-3d method [1,5,10,24,26] uses prompt to control views when generating 3D views, which may lead to multi-face problem. Dreamtime[9] controls the change of noise sampling level during the generation process to mitigate multi-face problem. As text prompt is not accurate enough to describe 3D model, some other methods using image guidance to generate 3D model." }, { "figure_ref": [], "heading": "Image-to-3D generation", "publication_ref": [ "b37", "b14", "b15", "b21", "b13", "b21" ], "table_ref": [], "text": "The image-to-3d task can be regarded as a task of 3D reconstruction from a single image [4,41]. Previous single image reconstruction works focus on fixed class reconstruction tasks[28, 29,38], which often require a large-scale 3D training data. The difficulty of obtaining 3D data makes it not applicable to open-vocabulary objects. The text-toimage model trained with large amount of images contains 3D related information, which is the key of single image zero-shot reconstruction. Make-it-3D [35] introduces SDS loss into image-to-3d task, and uses pre-trained diffusion model and clip [23] model to complete 3D generation. 
Models such as zero123 [15] and syncdreamer [16] can directly generate multi-view of the input image for multiview reconstruction. Limited by the training data, the multiview generated can not guarantee the complete 3D consistency for open-vocabulary inputs. Magic123 [22] uses zero123 and pre-trained diffusion model as priors, which can achieve high-quality single image guided 3D generation. Dreamgaussion[34] and one-2-3-45 [14] uses the new 3D representation combined with diffusion model to achieve rapid 3D generation.\nThe above methods [22,34,35] are optimized by SDS loss using pre-trained diffusion priors. We notice that these methods may fail when the input image containing uncommon objects with asymmetry structure, such as objects from video games. These kinds of irregular object are beyond the ability of normal 2D diffusion prior and 3D diffusion prior. To solve this, we introduce object-specific LoRA to boost 2D diffusion prior to 3D prior. Moreover, we optimize the texture and structure of the extracted mesh using the trained LoRA, generating high-quality 3d model." }, { "figure_ref": [ "fig_0" ], "heading": "Pipeline", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce Boosting3D, a three-stage pipeline for Image-to-3D task as illustrated in Fig. 2 and present preliminaries on score distillation sampling, variational score distillation and multi-views generation (Section 3.1). Firstly, we optimze a NeRF using pretrained model, and train a LoRA initialize the object-level information (Section 3.2). Next we train the LoRA and NeRF in a progressive way. The LoRA and NeRF boost each other during training. After this step, we obtain a refined NeRF and a well trained LoRA with object-level 3D prior. (Section 3.3). Finally, we extract a coarse surface mesh from trained NeRF and fine-tune both surface geometry and appearance using trained LoRA (Section 3.4)." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b14" ], "table_ref": [], "text": "Many text-to-3D and image-to-3D methods use largescale diffusion models as an optimization foundation. Dreamfusion[21] uses pretrained diffusion model ϵ ϕ to realize the conversion from text to 3D model, which proposes score distillation sampling (SDS) loss to use prompt y to guide 3D model θ generation. SDS loss encourages the trained 3D model to sample image information from the pretrained diffusion models, so that the 3D rendering results x are consistent with the diffusion models distribution mode. Specifically, the SDS loss computes the gradient:\n∇ θ L SDS = E t,ϵ,p w t (ϵ ϕ (x p t ; t, y) -ϵ) ∂x p ∂θ(1)\nwhere ϵ ϕ (•) is the predicted noise by the 2D diffusion prior ϕ, x p t is the render image x p t in view p add noise at the noise level t, w t is a weight about t. SDS loss can realize the conversion of text to 3D, but suffers from over-saturation, low-diversity, and smoothing problems. ProlificDreamer [37] proposed variational score distillation (VSD) loss to solve these problem, which can obtain more refined 3D representation and texture. Different from SDS in minimizing the image distribution, VSD uses LoRA to sample distribution in the pre-trained space, which can produce results with photorealistic rendering. 
The VSD loss computes the gradient:\n∇ θ L V SD = E t,ϵ,p w t (ϵ ϕ (x t ; t, y) -ϵ lora (x p t ; t, y, c)) ∂x p ∂θ (2)\nwhere ϵ lora estimates the score of the rendered images using a LoRA (Low-rank adaptation) model.\nIn addition to the text-to-image model, there are also some models specially trained to generate multi-views. Such models contain more accurate 3D information of objects, such as Zero123XL [15] used in this paper. For Zero123XL, input an image x 0 and the viewing angle difference with the input image to generate an image corresponding to the viewing angle. For Zero123XL, the gradient of SDS loss can be changed to the following form;\n∇ θ L 3D SDS = E t,ϵ,p w t ϵ ϕ x p t ; t, x 0 , ∆p -ϵ ∂x p ∂θ (3\n)\nwhere ∆p is the camera pose difference between the current view x p and the input view x 0 . " }, { "figure_ref": [ "fig_1" ], "heading": "Stage1: Coarse NeRF Generation", "publication_ref": [], "table_ref": [], "text": "In the first stage, we obtain a coarse NeRF model that can correspond to the objects in the input image. In the process of training the NeRF model, we divide the training views into two modes: the original view of input image, using the original image as supervision; the new views of the object, using pre-trained text-to-image model and pre-trained 3D priors (Zero123XL) as supervision.\nFor the original view of the input image I 0 , we obtain image I and corresponding mask M through NeRF rendering. Here we use the original image to calculate L1 loss for I, use MSE loss to calculate the loss of the original image corresponding to mask M 0 and M , and add corresponding weights to the two losses to obtain Loss:\nL ori = λ rgb ∥ I 0 -I∥ 1 + λ mask ∥ M 0 -M ∥ 2 2 (4)\nFor new view of the object, we render the current image through NeRF to obtain the image I n and normal map N n . We add noise to I n and then input it into the pre-trained 3D prior model and the pre-trained T2I model to obtain the SDS loss of both, and add the corresponding weights to the two losses. The gradient consists of Eq.1 and Eq.3:\n∇ θ L prior = λ sds ∇ θ L SDS + λ 3d ∇ θ L 3D SDS (5)\nThe model corresponding to NeRF at this stage will have a lot of noise, so we added 2D normal map smooth loss to make the overall NeRF smoother:\nL normal = λ normal ∥ N n -δ(N n )∥ 2 2 (6)\nwhere δ(•) represents the result of moving the normal map by 1 pixels to random direction.\nIn the first stage, we will train a LoRA in the process of training NeRF based on the original image and the render image of NeRF, which will use a higher noise level t lora when training LoRA, as shown in Fig. 3.\nL lora =∥ ϵ lora (x p t ; t lora , y, c) -ϵ∥ 2(7)\nIn practice, we parameterize ϵ lora by a LoRA of the pretrained model ϵ ϕ , and use camera parameter c as the class embeddings. The LoRA will serve as the initialization of LoRA in the second stage. Overall, the stage 1 is optimized by L s1 :\nL s1 = L ori + L prior + L normal + L lora(8)\nIn process of training, We alternately train NeRF using the original input image and the new view, while training LoRA using the rendering results of the NeRF. And we find that using a specific range of noise level can make the results more refined and fit the input image." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Stage2: ReFine NeRF", "publication_ref": [], "table_ref": [], "text": "In this stage, we continued to optimize based on the coarse NeRF. After the first stage, we get a coarse NeRF and a pretrained LoRA. We used the pre-trained LoRA to initialize LoRA in this stage. 
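The stage-1 objective above (Eqs. 4-8) combines SDS-style terms from two frozen priors with reconstruction and smoothness losses; a minimal sketch of how such an SDS term is typically implemented is given below. It only illustrates the gradient trick of treating the noise residual as a constant and back-propagating it through the rendered latent; the renderer, the encoder and the frozen diffusion prior are abstracted as callables, and the linear alpha-bar schedule is a stand-in for a real noise scheduler, so none of these names are taken from the implementation.

```python
import torch

def add_noise(z, eps, t):
    """Stand-in DDPM forward process with a linear alpha_bar(t); a real noise
    scheduler (e.g. from the diffusers library) would replace this."""
    alpha_bar = (1.0 - t).clamp(1e-4, 1.0).view(-1, 1, 1, 1)
    return alpha_bar.sqrt() * z + (1.0 - alpha_bar).sqrt() * eps

def sds_surrogate_loss(render_fn, encode, unet_eps, cond, t_range=(0.02, 0.98)):
    """SDS-style surrogate loss for one rendered view (cf. Eq. 1; Eq. 3 when
    `cond` is the input image plus the relative camera pose of a
    Zero123-style prior).

    render_fn()            -> differentiable render x_p, attached to the NeRF graph
    encode(x)              -> latent (use an identity function for a pixel-space prior)
    unet_eps(z_t, t, cond) -> noise prediction of the frozen diffusion prior
    """
    x = render_fn()
    z = encode(x)
    t = torch.rand(z.shape[0], device=z.device) * (t_range[1] - t_range[0]) + t_range[0]
    eps = torch.randn_like(z)
    z_t = add_noise(z, eps, t)
    with torch.no_grad():                      # the diffusion prior is frozen
        eps_pred = unet_eps(z_t, t, cond)
    grad = eps_pred - eps                      # weighting w_t folded into the loss weight
    # Backpropagating this sum pushes d x_p / d theta against (eps_pred - eps),
    # which matches the SDS gradient up to the weighting term.
    return (grad.detach() * z).sum()
```

In stage 1 above, one such term is computed with the text-conditioned prior (weight 0.2) and one with the Zero123XL view-conditioned prior (weight 1), alongside the reference-view reconstruction and normal-smoothness losses.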
The training process is also divided into original view training and new view training.\nThe original view training part is consistent with the stage 1, Eq.4 is used as the loss function for optimization.\nIn the new view training, we obtain the image I and normal map N n through NeRF rendering. We use the noisy latent of the image I as the input of the LoRA model and the original pre-trained T2I model to obtain the corresponding view results respectively, and calculate the Variational Score Distillation loss using Eq.2.\nIn this stage, LoRA is still trained through the images by NeRF rendering using Eq.7. Different from stage 1, the noise level sampling range used by the LoRA model needs to be reduced as shown in Fig. 3. Therefore, in this stage, the loss function L s2 we use to optimization is:\nL p2 = λ vsd L V SD + λ 3d L 3D SDS (9) L s2 = L ori + L p2 + L normal + L lora (10\n)\nThe reason for training LoRA in advance in stage 1 is to make LoRA conform to the current object as much as possible. In the original VSD[37], only using prompt to sample 3D information from the T2I model makes it difficult to control the details of 3D generation. On the other hand, it will cause the model generated in the image-to-3D task to be too different from the original image. Therefore, we pre-train LoRA using object-level rendering data in stage 1 and control the optimization range of LoRA from promptlevel to object-level. After stage 2 training, LoRA will be able to generate multi-view image corresponding to the input image using image-to-image method as shown in Fig. 4, which shows that the trained LoRA already has object-level 3D prior." }, { "figure_ref": [], "heading": "Stage3: Refine 3D model", "publication_ref": [], "table_ref": [], "text": "After stage 2, we get a refined NeRF and a object-level LoRA model. NeRF can render high-quality image results, but the extracted mesh is coarse. In this stage, we will optimize the extracted mesh to achieve the same high-quality as NeRF rendering.\nWhen extracting a model from NeRF, we usually need to use a threshold to determine the position of the mesh extraction surface. After determining the vertices to extract the mesh, we can get the color of vertices through the vertices positions, and then we unwrap the UV coordinates of the mesh using Xatlas [40]. In this way, we get a 3D model with UVmap, mesh M = {V ec, F, U V }. We will optimize the UV-corresponding to mesh vertices V ec and U V , in order to obtain a high-quality mesh.\nDuring the 3D mesh rendering process, the camera intrinsics are aligned with the stage 2 to ensure that images of same views as the previous two stages can be obtained. We assign a trainable offset ∆v i to each vertex v i , and assign a texture offset ∆U V to the UVmap. During the rendering process:\nI c 3d = f (V ec + ∆V ec, U V + M LP (∆U V ′ ), F, c) (11)\nwhere f represents the differentiable renderer, F represents the faces of the mesh, c represents the camera extrinsics of rendering and M LP represents a multi-layer perceptron, which will calculate the real ∆U V . When using ∆U V directly without using M LP for mapping, the optimization effect is not ideal. During the optimization process, we will also divide it into the original view and the new view. The original view uses the original image to calculate the loss like Eq.4. 
In the new view, we use the LoRA model trained in previous two stages as our pre-trained model to optimize the parameters, the gradient of rendering image I c 3d can be computed as follow: The LoRA model is able to generate an image with better similarity to our current object than the original T2I model.\n∇L I3d = E t,ϵ,c [w t (ϵ lora (I c 3d ; t, y, c) -ϵ)](12)\nIn this stage, the LoRA model is no longer trained. To prevent abrupt geometry, we apply a normal smoothing loss Eq.6 on the rendering image and add an L2 loss to ∆v i .\nL of f set = i (∆v i ) 2(13)\nThese loss will prevent our vertex optimization from being too far away from the original position while ensuring the smoothness of the mesh." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b14" ], "table_ref": [], "text": "In all experiments, the basic model and optimizer used by all methods are same. We adopt the stable diffusion[7] v2.1-base version as pre-trained text-to-image model, and Zero123XL [15] as 3d prior diffusion model. We use Blip2 [12] to generate the prompt corresponding to the input image. During the training phase, Adam is used for optimization, and the learning rate is set to 0.0001. We use multi-scale hash encoding in Instant-NGP [20] as the basic model for NeRF in stages 1 and 2, and use pytorch3d[25] as differentiable renderer in stage 3.In stage 1, we trained 1500 steps. The rendering resolution was set to 64 in the first 500 steps and 128 in the last 1000 steps. In stage 2, the resolution of novel view is set to 256, the resolution of original view is set to 512, and 3500 steps are trained. In stage 3, the resolution is set to 800 for mesh optimization, and trained 2000 steps. At stage 3, the mesh is extracted at a resolution of 512 3 with a density threshold of 10 by marching cubes from NeRF trained in stage 2.\nλ SDS and λ 3d are set to 0.2 and 1 for stage 1 and λ vsd is set to 1 in stage 2, which reduces the oversaturation of the texture. The loss weights λ rgb for color are linearly in-creased from 100 to 1000 during training, λ mask linearly increased from 50 to 500 during training, and the λ normal is increased from 0 to 100 in the first two stages and reduced from 100 to 10 in stage 3. In the training process of NeRF, we use pure white as the background.\nIn the training process, we assume that the input image is shot from the front view, that is, the initial polar angle is 90°and the azimuth angle is 0°. During the new view training, we will randomly sample the azimuth angle within 360°and the camera polar angle between 60 and 150, but keep the distance from the camera to the center of the object unchanged throughout the training process. At the same time, the intrinsics parameters of camera are all fixed during the training process. In the training process, it is only necessary to ensure that the rendering range of NeRF is within the range of the camera, and the intrinsics parameters of camera does not need to use a specific value." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Results and Comparisons", "publication_ref": [ "b21" ], "table_ref": [], "text": "Qualitative Comparisons. Our method will be compared with the state-of-the-art Zero123XL and Magic123. For Zero123XL, we use 3D-SDS loss to optimize a NeRF with the same parameters as our method. 
For Magic123, we use the original code, but replace the pre-trained diffusion model from v1.5 to v2.1-base version, and replace 3d prior from Zero123 to Zero123XL with higher performance, which yields better quality than the original implementation.\nIn Fig. 5, we show the comparison results of our method with Zero123XL and Magic123. Our method achieves the best effect in texture performance and 3D structure. It is worth noting that our method can still generate very reasonable structure and fine texture in the case of rare objects, such as the monster related images in the last two lines, which also shows the robustness of our method. Quantitative Evaluation.\nWe used the indicators employed in previous studies [22]: PSNR and CLIP-Similarity [23]. We used a self-built dataset for evaluation, which contains real images similar to the input image shown in Fig. 5. PSNR is measured in the original view of results to measure the reconstruction quality. Clip-similarity calculates the average clip distance between the rendered image and the input image, and measures the 3D consistency through the appearance similarity between the new view and the original view.\nAs shown in Table .1, compared with previous methods, our method achieves first-class performance in all metrics. Among them, ZeroXL-DMtet represents the result of refinement using DMtet, Ours-DMtet represents the result of optimization using DMtet and original diffusion in stage 3, and Ours-mesh represents the result of the final mesh of our method. PSNR results show that our method can restore the input better than other methods. The improvement of CLIP-Similarity reflects that our results have better 3D consistency." }, { "figure_ref": [ "fig_4" ], "heading": "Ablation Study", "publication_ref": [ "b1" ], "table_ref": [], "text": "The effect of pre-training LoRA in the stage 1. In Fig. 6, we study the impact of LoRA training process on the results. It is obvious that without the pre-train of LoRA, the directly combination of VSD loss and 3D SDS will not generate a reasonable structure, and there may be a multi-face effect, as shown in the last line. Therefore, in our method, LoRA pre training in stage 1 is a necessary process. Table 1. We show the quantitative results based on CLIP-Similarity/PSNR. The bold is the best. ZeroXL-DMtet represents the result of refinement using DMtet, Ours-DMtet represents the result of optimization using DMtet and original diffusion in stage 3, and Ours-mesh represents the result of the final mesh of our method. Effect of stage 3. We show the effect of stage 3 on the results in Fig. 7. (a) is the input image, (b) is the mesh extracted from the trained NeRF, and (c) is the effect of using Deep Marching Tetrahedra (DMTet) [32] and original SDS loss to replace stage 3. It can be seen that the texture of (c) is relatively fuzzy. The texture generated after stage 3 (d) is more detailed, and the rendering result will be more consistent with the original input image. It can be observed an intuitive improvement in the quality of the final mesh using the proposed method." }, { "figure_ref": [], "heading": "Algorithms", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Although our method can achieve precise and robust 3D content generation, the overall time consumption is relatively high, requiring about than an hour of training time.\nWe will optimize the speed using faster 3D representation in future work." 
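As a concrete illustration of the stage-3 refinement evaluated in Fig. 7 above (Eqs. 11 and 13), the sketch below parameterizes a per-vertex offset and an MLP-mapped texture offset on top of the mesh extracted from NeRF. The differentiable renderer (pytorch3d, as noted in the implementation details) and the diffusion-guided loss of Eq. 12 are abstracted as callables; whether the MLP acts per-texel or on UV coordinates is not fully specified above, so the per-texel mapping, the layer sizes and the normal-smoothness term being omitted are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class MeshRefiner(nn.Module):
    """Trainable per-vertex offsets plus an MLP-mapped texture offset,
    mirroring Eq. 11: I = f(Vec + dVec, UV + MLP(dUV'), F, c)."""

    def __init__(self, vertices, texture, hidden=64):
        super().__init__()                              # vertices: (V, 3) float tensor
        self.register_buffer("base_v", vertices)        # vertices from marching cubes
        self.register_buffer("base_tex", texture)       # (Ht, Wt, 3) baked UV texture
        self.dv = nn.Parameter(torch.zeros_like(vertices))
        self.dtex = nn.Parameter(torch.zeros_like(texture))
        self.mlp = nn.Sequential(                       # maps the raw texture offset
            nn.Linear(texture.shape[-1], hidden), nn.ReLU(),
            nn.Linear(hidden, texture.shape[-1]))

    def forward(self):
        return self.base_v + self.dv, self.base_tex + self.mlp(self.dtex)

    def offset_reg(self):                               # Eq. 13: keep dV close to zero
        return (self.dv ** 2).sum()

def refine_step(refiner, render_fn, guidance_loss_fn, faces, cam, lam_off=1.0):
    """One stage-3 step: render with the offset mesh/texture and back-propagate a
    diffusion-guided loss (Eq. 12) plus the vertex-offset regularizer."""
    verts, tex = refiner()
    img = render_fn(verts, faces, tex, cam)             # differentiable renderer f(.)
    loss = guidance_loss_fn(img, cam) + lam_off * refiner.offset_reg()
    loss.backward()
    return loss
```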
}, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We have presented Boosting3D, a multi-stage pipeline, for 3D generation tasks guided by a single image. Benefiting from the boosted 3D prior (object-specific LoRA) , Boosting3D can produce reasonably fine results in different data domains and has high robustness for zero-shot images. Boosting3D outperforms previous technologies in terms of structural rationality and texture details, as demonstrated by experiments based on real and synthetic images. By optimizing the mesh, Boosting3D can obtain high-precision mesh results with high-quality texture representation. We believe that this work can effectively promote the development of universal 3D generation and has great potential in future applications." } ]
Figure 1. Results of Boosting3D on the image-to-3D generation task. Our method reconstructs reasonably detailed 3D meshes from a single image across different data domains.
Boosting3D: High-Fidelity Image-to-3D by Boosting 2D Diffusion Prior to 3D Prior with Progressive Learning
[ { "figure_caption": "Figure 2 .2Figure 2. The pipeline of Boosting3D. Boosting3D is a three-stage framework for high quality 3D generation from a reference image. In stage 1, we optimized a course NeRF and a object-level LoRA. In stage 2, we refined the NeRF using the pre-trained model and the LoRA trained in stage 1. In stage 3, we extracted the 3D mesh from the trained NeRF and refined the 3D model using the pre-trained LoRA.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. The proposed noise level for training. We use a higher noise level to train in stage 1 and use a lower noise level in stage 2&3. N represents the training steps in stage 1, and M represents the training steps in total.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Results from LoRA after stage 2. Different images are obtained using different camera parameters as class embeddings and using no-texture rendering as base image.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Qualitative comparisons of different methods. Compared with Magic123[22] and Zero123XL[15], our method performs better on both texture and 3D structure. The last column is the no-texture rendering results of the mesh obtained by our pipeline.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Effect of stage 3. The rendering result (d) using our stage 3 refinement strategy is more consistent with the original input image than using DMtet (c).", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" } ]
Kai Yu; Jinlin Liu; Mengyang Feng; Miaomiao Cui; Xuansong Xie; Alibaba Group
[ { "authors": "Tianshi Cao; Karsten Kreis; Sanja Fidler; Nicholas Sharp; Kangxue Yin", "journal": "", "ref_id": "b0", "title": "Texfusion: Synthesizing 3d textures with text-guided image diffusion models", "year": "2023" }, { "authors": "Rui Chen; Yongwei Chen; Ningxin Jiao; Kui Jia", "journal": "", "ref_id": "b1", "title": "Fantasia3d: Disentangling geometry and appearance for high-quality text-to-3d content creation", "year": "2023" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "Ieee", "ref_id": "b2", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Shivam Duggal; Deepak Pathak", "journal": "", "ref_id": "b3", "title": "Topologically-aware deformation fields for single-view 3d reconstruction", "year": "2022" }, { "authors": "Jun Gao; Tianchang Shen; Zian Wang; Wenzheng Chen; Kangxue Yin; Daiqing Li; Or Litany; Zan Gojcic; Sanja Fidler", "journal": "Advances In Neural Information Processing Systems", "ref_id": "b4", "title": "Get3d: A generative model of high quality 3d textured shapes learned from images", "year": "2022" }, { "authors": "Ying-Tian Yuan-Chen Guo; Chen Liu; Zi-Xin Wang; Guan Zou; Chia-Hao Luo; Yan-Pei Chen; Song-Hai Cao; Zhang", "journal": "", "ref_id": "b5", "title": "threestudio: A unified framework for 3d content generation", "year": "2023" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "Advances in neural information processing systems", "ref_id": "b6", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b7", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "Yukun Huang; Jianan Wang; Yukai Shi; Xianbiao Qi; Zheng-Jun Zha; Lei Zhang", "journal": "", "ref_id": "b8", "title": "Dreamtime: An improved optimization strategy for text-to-3d content creation", "year": "2023" }, { "authors": "Heewoo Jun; Alex Nichol", "journal": "", "ref_id": "b9", "title": "Shap-e: Generating conditional 3d implicit functions", "year": "2023" }, { "authors": "Nikos Kolotouros; Georgios Pavlakos; Kostas Daniilidis", "journal": "", "ref_id": "b10", "title": "Convolutional mesh regression for single-image human shape reconstruction", "year": "2019" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b11", "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "Chen-Hsuan Lin; Jun Gao; Luming Tang; Towaki Takikawa; Xiaohui Zeng; Xun Huang; Karsten Kreis; Sanja Fidler; Ming-Yu Liu; Tsung-Yi Lin", "journal": "", "ref_id": "b12", "title": "Magic3d: High-resolution text-to-3d content creation", "year": "2023" }, { "authors": "Minghua Liu; Chao Xu; Haian Jin; Linghao Chen; Zexiang Xu; Hao Su", "journal": "", "ref_id": "b13", "title": "One-2-3-45: Any single image to 3d mesh in 45 seconds without per-shape optimization", "year": "2023" }, { "authors": "Ruoshi Liu; Rundi Wu; Basile Van Hoorick; Pavel Tokmakov; Sergey Zakharov; Carl Vondrick", "journal": "", "ref_id": "b14", "title": "Zero-1-to-3: Zero-shot one image to 3d object", "year": "2023" }, { "authors": "Yuan Liu; Cheng Lin; Zijiao Zeng; Xiaoxiao Long; Lingjie Liu; Taku Komura; Wenping Wang", "journal": "", "ref_id": "b15", "title": "Syncdreamer: Generating multiview-consistent images from a 
single-view image", "year": "2023" }, { "authors": "Luke Melas-Kyriazi; Iro Laina; Christian Rupprecht; Andrea Vedaldi", "journal": "", "ref_id": "b16", "title": "Realfusion: 360deg reconstruction of any object from a single image", "year": "2023" }, { "authors": "Gal Metzer; Elad Richardson; Or Patashnik; Raja Giryes; Daniel Cohen-Or", "journal": "", "ref_id": "b17", "title": "Latent-nerf for shape-guided generation of 3d shapes and textures", "year": "2023" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Communications of the ACM", "ref_id": "b18", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2021" }, { "authors": "Thomas Müller; Alex Evans; Christoph Schied; Alexander Keller", "journal": "ACM Transactions on Graphics (ToG)", "ref_id": "b19", "title": "Instant neural graphics primitives with a multiresolution hash encoding", "year": "2022" }, { "authors": "Ben Poole; Ajay Jain; Jonathan T Barron; Ben Mildenhall", "journal": "", "ref_id": "b20", "title": "Dreamfusion: Text-to-3d using 2d diffusion", "year": "2022" }, { "authors": "Guocheng Qian; Jinjie Mai; Abdullah Hamdi; Jian Ren; Aliaksandr Siarohin; Bing Li; Hsin-Ying Lee; Ivan Skorokhodov; Peter Wonka; Sergey Tulyakov", "journal": "", "ref_id": "b21", "title": "Magic123: One image to high-quality 3d object generation using both 2d and 3d diffusion priors", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b22", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Amit Raj; Srinivas Kaza; Ben Poole; Michael Niemeyer; Nataniel Ruiz; Ben Mildenhall; Shiran Zada; Kfir Aberman; Michael Rubinstein; Jonathan Barron", "journal": "", "ref_id": "b23", "title": "Dreambooth3d: Subject-driven text-to-3d generation", "year": "2023" }, { "authors": "Nikhila Ravi; Jeremy Reizenstein; David Novotny; Taylor Gordon; Wan-Yen Lo; Justin Johnson; Georgia Gkioxari", "journal": "", "ref_id": "b24", "title": "Accelerating 3d deep learning with pytorch3d", "year": "2020" }, { "authors": "Elad Richardson; Gal Metzer; Yuval Alaluf; Raja Giryes; Daniel Cohen-Or", "journal": "", "ref_id": "b25", "title": "Texture: Text-guided texturing of 3d shapes", "year": "2023" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b26", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2023" }, { "authors": "Shunsuke Saito; Zeng Huang; Ryota Natsume; Shigeo Morishima; Angjoo Kanazawa; Hao Li", "journal": "", "ref_id": "b27", "title": "Pifu: Pixel-aligned implicit function for high-resolution clothed human digitization", "year": "2019" }, { "authors": "Shunsuke Saito; Tomas Simon; Jason Saragih; Hanbyul Joo", "journal": "", "ref_id": "b28", "title": "Pifuhd: Multi-level pixel-aligned implicit function for high-resolution 3d human digitization", "year": "2020" }, { "authors": "Kyle Sargent; Jing Yu Koh; Han Zhang; Huiwen Chang; Charles Herrmann; Pratul Srinivasan; Jiajun Wu; Deqing Sun", "journal": "", "ref_id": "b29", "title": "Vq3d: Learning a 3d-aware generative model on imagenet", "year": "2023" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Cade Gordon; Ross Wightman; Mehdi 
Cherti; Theo Coombes; Aarush Katta; Clayton Mullis; Mitchell Wortsman", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b30", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "Tianchang Shen; Jun Gao; Kangxue Yin; Ming-Yu Liu; Sanja Fidler", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b31", "title": "Deep marching tetrahedra: a hybrid representation for high-resolution 3d shape synthesis", "year": "2021" }, { "authors": "Ivan Skorokhodov; Aliaksandr Siarohin; Yinghao Xu; Jian Ren; Hsin-Ying Lee; Peter Wonka; Sergey Tulyakov", "journal": "", "ref_id": "b32", "title": "3d generation on imagenet", "year": "2023" }, { "authors": "Jiaxiang Tang; Jiawei Ren; Hang Zhou; Ziwei Liu; Gang Zeng", "journal": "", "ref_id": "b33", "title": "Dreamgaussian: Generative gaussian splatting for efficient 3d content creation", "year": "2023" }, { "authors": "Junshu Tang; Tengfei Wang; Bo Zhang; Ting Zhang; Ran Yi; Lizhuang Ma; Dong Chen", "journal": "", "ref_id": "b34", "title": "Make-it-3d: High-fidelity 3d creation from a single image with diffusion prior", "year": "2023" }, { "authors": "Christina Tsalicoglou; Fabian Manhardt; Alessio Tonioni; Michael Niemeyer; Federico Tombari", "journal": "", "ref_id": "b35", "title": "Textmesh: Generation of realistic 3d meshes from text prompts", "year": "2023" }, { "authors": "Zhengyi Wang; Cheng Lu; Yikai Wang; Fan Bao; Chongxuan Li; Hang Su; Jun Zhu", "journal": "", "ref_id": "b36", "title": "Prolificdreamer: High-fidelity and diverse text-to-3d generation with variational score distillation", "year": "2023" }, { "authors": "Yuliang Xiu; Jinlong Yang; Xu Cao; Dimitrios Tzionas; Michael J Black", "journal": "", "ref_id": "b37", "title": "Econ: Explicit clothed humans optimized via normal integration", "year": "2023" }, { "authors": "Dejia Xu; Yifan Jiang; Peihao Wang; Zhiwen Fan; Yi Wang; Zhangyang Wang", "journal": "", "ref_id": "b38", "title": "Neurallift-360: Lifting an in-the-wild 2d photo to a 3d object with 360deg views", "year": "2023" }, { "authors": "Jonathan Young", "journal": "", "ref_id": "b39", "title": "xatlas: Mesh parameterization / uv unwrapping library", "year": "2021" }, { "authors": "Alex Yu; Vickie Ye; Matthew Tancik; Angjoo Kanazawa", "journal": "", "ref_id": "b40", "title": "pixelnerf: Neural radiance fields from one or few images", "year": "2021" }, { "authors": "Zerong Zheng; Tao Yu; Yixuan Wei; Qionghai Dai; Yebin Liu", "journal": "", "ref_id": "b41", "title": "Deephuman: 3d human reconstruction from a single image", "year": "2019" }, { "authors": "Jingyu Zhuang; Chen Wang; Lingjie Liu; Liang Lin; Guanbin Li", "journal": "", "ref_id": "b42", "title": "Dreameditor: Text-driven 3d scene editing with neural fields", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 332.82, 269.89, 212.29, 23.89 ], "formula_id": "formula_0", "formula_text": "∇ θ L SDS = E t,ϵ,p w t (ϵ ϕ (x p t ; t, y) -ϵ) ∂x p ∂θ(1)" }, { "formula_coordinates": [ 3, 308.86, 473.07, 242.8, 34.53 ], "formula_id": "formula_1", "formula_text": "∇ θ L V SD = E t,ϵ,p w t (ϵ ϕ (x t ; t, y) -ϵ lora (x p t ; t, y, c)) ∂x p ∂θ (2)" }, { "formula_coordinates": [ 3, 314.3, 653.15, 226.94, 23.89 ], "formula_id": "formula_2", "formula_text": "∇ θ L 3D SDS = E t,ϵ,p w t ϵ ϕ x p t ; t, x 0 , ∆p -ϵ ∂x p ∂θ (3" }, { "formula_coordinates": [ 3, 541.24, 661.78, 3.87, 8.64 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 4, 75.85, 644.63, 210.51, 14.11 ], "formula_id": "formula_4", "formula_text": "L ori = λ rgb ∥ I 0 -I∥ 1 + λ mask ∥ M 0 -M ∥ 2 2 (4)" }, { "formula_coordinates": [ 4, 341.31, 487.32, 203.81, 12.69 ], "formula_id": "formula_5", "formula_text": "∇ θ L prior = λ sds ∇ θ L SDS + λ 3d ∇ θ L 3D SDS (5)" }, { "formula_coordinates": [ 4, 352.44, 550.05, 192.68, 14.11 ], "formula_id": "formula_6", "formula_text": "L normal = λ normal ∥ N n -δ(N n )∥ 2 2 (6)" }, { "formula_coordinates": [ 4, 353.7, 649.77, 191.41, 13.89 ], "formula_id": "formula_7", "formula_text": "L lora =∥ ϵ lora (x p t ; t lora , y, c) -ϵ∥ 2(7)" }, { "formula_coordinates": [ 5, 85.87, 347.71, 200.49, 9.65 ], "formula_id": "formula_8", "formula_text": "L s1 = L ori + L prior + L normal + L lora(8)" }, { "formula_coordinates": [ 5, 91.17, 673.8, 195.19, 40.05 ], "formula_id": "formula_9", "formula_text": "L p2 = λ vsd L V SD + λ 3d L 3D SDS (9) L s2 = L ori + L p2 + L normal + L lora (10" }, { "formula_coordinates": [ 5, 282.21, 704.51, 4.15, 8.64 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 5, 315.26, 522.21, 229.85, 12.69 ], "formula_id": "formula_11", "formula_text": "I c 3d = f (V ec + ∆V ec, U V + M LP (∆U V ′ ), F, c) (11)" }, { "formula_coordinates": [ 5, 331.49, 702.12, 213.62, 12.69 ], "formula_id": "formula_12", "formula_text": "∇L I3d = E t,ϵ,c [w t (ϵ lora (I c 3d ; t, y, c) -ϵ)](12)" }, { "formula_coordinates": [ 6, 124.52, 348.75, 161.84, 21.98 ], "formula_id": "formula_13", "formula_text": "L of f set = i (∆v i ) 2(13)" } ]